| column | dtype | range |
| --- | --- | --- |
| qid | int64 | 46k to 74.7M |
| question | string | 54 to 37.8k chars |
| date | string | 10 chars |
| metadata | list | 3 items |
| response_j | string | 17 to 26k chars |
| response_k | string | 26 to 26k chars |
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
In Windows PowerShell use: `python -m pydoc <name>`. Examples: `python -m pydoc open`, `python -m pydoc raw_input`, `python -m pydoc argv`
Based on your second edit, you may have more than one copy of pydoc.py in your path, with the 'wrong' one first such that when it starts up it doesn't have the correct environment in which to execute.
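One quick way to test this theory (my own sketch, not part of the original answer) is to list every pydoc that cmd could pick up from PATH, in search order:

```
# List every pydoc candidate on PATH in the order Windows would search it.
# If more than one shows up, the first entry is the one "pydoc" resolves to.
import os

names = ("pydoc.py", "pydoc.bat", "pydoc.exe")  # typical launcher names; adjust as needed
for directory in os.environ.get("PATH", "").split(os.pathsep):
    for name in names:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            print(candidate)
```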
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
When you type the name of a file at the Windows command prompt, cmd can check the Windows registry for the default file association and use that program to open it. So if, for example, the Inkscape installer associated .py files with its own bundled version of Python, cmd might preferentially run that and ignore the PATH entirely. See [this question](https://stackoverflow.com/questions/2199739/pydoc-fails-under-windows-and-python-2-6-4).
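To see what cmd would actually launch for a `.py` file, you can query the association from Python. This is a hedged sketch (Python 3.7+): `assoc` and `ftype` are cmd built-ins, and `Python.File` is only the usual default class name, so treat the exact output as an assumption.

```
# Ask cmd which file type .py maps to, and which command that type actually runs.
# assoc and ftype are cmd built-ins, so they have to be run through "cmd /c".
import subprocess

assoc = subprocess.run("cmd /c assoc .py", capture_output=True, text=True).stdout.strip()
print(assoc)                                # e.g. ".py=Python.File" (class name may differ)

if "=" in assoc:
    ftype_class = assoc.split("=", 1)[1]    # whatever class .py is mapped to
    result = subprocess.run("cmd /c ftype " + ftype_class, capture_output=True, text=True)
    print(result.stdout.strip())            # the command line that class runs
```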
``` python -m pydoc -k/p/g/w <name> ```
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
In Windows PowerShell use: `python -m pydoc <name>`. Examples: `python -m pydoc open`, `python -m pydoc raw_input`, `python -m pydoc argv`
When you type the name of a file at the Windows command prompt, cmd can check the Windows registry for the default file association and use that program to open it. So if, for example, the Inkscape installer associated .py files with its own bundled version of Python, cmd might preferentially run that and ignore the PATH entirely. See [this question](https://stackoverflow.com/questions/2199739/pydoc-fails-under-windows-and-python-2-6-4).
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
When you type the name of a file at the Windows command prompt, cmd can check the Windows registry for the default file association and use that program to open it. So if, for example, the Inkscape installer associated .py files with its own bundled version of Python, cmd might preferentially run that and ignore the PATH entirely. See [this question](https://stackoverflow.com/questions/2199739/pydoc-fails-under-windows-and-python-2-6-4).
**Syntax for pydoc on Windows:** alt1: > > C:\path\PythonXX\python.exe C:\path\PythonXX\Lib\pydoc.py -k/p/g/w X:\path\file_to_doc.py > > > alt2: > > python -m pydoc -k/p/g/w X:\path\file_to_doc.py > > > The latter is the one to prefer, but it requires your Windows installation to have Python registered in the "Path" environment variable. **Set up Windows environment variables:** Look at [this site](http://www.computerhope.com/issues/ch000549.htm) for a guide on where to find them. The one you are looking for is "Path". If you select Path and click Edit you will see a long row of paths pointing to different folders. The paths you see here are what allow you to reach a variety of programs on the command line by just entering the name of the program instead of the whole path to it. So what you want to do is locate your Python installation and copy its full path, like this: `X:\subfolders\PythonXX\` Then you add it to the very end of the long row of paths, like this: > > X:\earlier\path\to\something;X:\subfolders\PythonXX\ > > > Notice the ";" that separates the different paths; make sure not to forget it. When done, click OK to confirm, then restart any cmd.exe that is already open. **The pydoc.py** pydoc is a module of the standard Python library, and it is itself written in Python. The Windows environment requires you to specify which program should run a given file, so to run the pydoc.py file we would use: **Open the file in Windows cmd.exe:** > > X:\subfolders\Python27\python.exe X:\subfolders\Python27\Lib\pydoc.py > > > **pydoc.py's arguments:** *pydoc.py* comes with a variety of command-line features selected through the arguments `-k/p/g/w`, each of which triggers a different behaviour of the *pydoc.py* program. **Send arguments to a program through the command line:** The arguments always go after the `x:\pathtofile\filename.suffix`, separated by a simple space. Which gives us the final forms: alt1: > > X:\subfolders\Python27\python.exe X:\subfolders\Python27\Lib\pydoc.py -w X:\path\file_to_doc.py > > > alt2 (with python registered to Path): > > python -m pydoc -w X:\path\file_to_doc.py > > > The "*w*" option gives you HTML documentation for the file you want to document. Notice that pydoc.py creates the documentation file in the current working directory, so you need to place yourself in a folder of your choice before you run the command. **The function of -m** Python's `-m` switch tells the interpreter to locate the named module (here `pydoc`) on its module search path and run it as a script, which is why you do not have to spell out the full path to pydoc.py. (It is unrelated to the `-m` option of msiexec.exe, which rewrites computer-specific registry entries; see [Microsoft](http://technet.microsoft.com/sv-se/library/cc759262%28v=WS.10%29.aspx).)
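As a complement to the command-line forms above, the same features are available from the `pydoc` module inside a Python session. This is my own illustration, not part of the original answer, and `os` is just an example target:

```
# The pydoc module exposes the same functionality as the command-line tool.
import pydoc

print(pydoc.render_doc("os.path.join"))   # like "pydoc os.path.join": text docs
pydoc.writedoc("os")                      # like "pydoc -w os": writes os.html to the cwd
```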
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
In Windows PowerShell use: `python -m pydoc <name>`. Examples: `python -m pydoc open`, `python -m pydoc raw_input`, `python -m pydoc argv`
``` python -m pydoc -k/p/g/w <name> ```
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion. ``` C:\Users\Chevee>pydoc file pydoc - the Python documentation tool pydoc.py <name> ... Show text documentation on something. <name> may be the name of a Python keyword, topic, function, module, or package, or a dotted reference to a class or function within a module or module in a package. If <name> contains a '\', it is used as the path to a Python source file to document. If name is 'keywords', 'topics', or 'modules', a listing of these things is displayed. pydoc.py -k <keyword> Search for a keyword in the synopsis lines of all available modules. pydoc.py -p <port> Start an HTTP server on the given port on the local machine. pydoc.py -g Pop up a graphical interface for finding and serving documentation. pydoc.py -w <name> ... Write out the HTML documentation for a module to a file in the current directory. If <name> contains a '\', it is treated as a filename; if it names a directory, documentation is written for all the contents. C:\Users\Chevee> ``` EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd. EDIT 2: More new information. In cmd... ``` c:\>python c:/python26/lib/pydoc.py file ``` ...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
In Windows PowerShell use: `python -m pydoc <name>`. Examples: `python -m pydoc open`, `python -m pydoc raw_input`, `python -m pydoc argv`
**Syntax for pydoc on Windows:** alt1: > > C:\path\PythonXX\python.exe C:\path\PythonXX\Lib\pydoc.py -k/p/g/w X:\path\file_to_doc.py > > > alt2: > > python -m pydoc -k/p/g/w X:\path\file_to_doc.py > > > The latter is the one to prefer, but it requires your Windows installation to have Python registered in the "Path" environment variable. **Set up Windows environment variables:** Look at [this site](http://www.computerhope.com/issues/ch000549.htm) for a guide on where to find them. The one you are looking for is "Path". If you select Path and click Edit you will see a long row of paths pointing to different folders. The paths you see here are what allow you to reach a variety of programs on the command line by just entering the name of the program instead of the whole path to it. So what you want to do is locate your Python installation and copy its full path, like this: `X:\subfolders\PythonXX\` Then you add it to the very end of the long row of paths, like this: > > X:\earlier\path\to\something;X:\subfolders\PythonXX\ > > > Notice the ";" that separates the different paths; make sure not to forget it. When done, click OK to confirm, then restart any cmd.exe that is already open. **The pydoc.py** pydoc is a module of the standard Python library, and it is itself written in Python. The Windows environment requires you to specify which program should run a given file, so to run the pydoc.py file we would use: **Open the file in Windows cmd.exe:** > > X:\subfolders\Python27\python.exe X:\subfolders\Python27\Lib\pydoc.py > > > **pydoc.py's arguments:** *pydoc.py* comes with a variety of command-line features selected through the arguments `-k/p/g/w`, each of which triggers a different behaviour of the *pydoc.py* program. **Send arguments to a program through the command line:** The arguments always go after the `x:\pathtofile\filename.suffix`, separated by a simple space. Which gives us the final forms: alt1: > > X:\subfolders\Python27\python.exe X:\subfolders\Python27\Lib\pydoc.py -w X:\path\file_to_doc.py > > > alt2 (with python registered to Path): > > python -m pydoc -w X:\path\file_to_doc.py > > > The "*w*" option gives you HTML documentation for the file you want to document. Notice that pydoc.py creates the documentation file in the current working directory, so you need to place yourself in a folder of your choice before you run the command. **The function of -m** Python's `-m` switch tells the interpreter to locate the named module (here `pydoc`) on its module search path and run it as a script, which is why you do not have to spell out the full path to pydoc.py. (It is unrelated to the `-m` option of msiexec.exe, which rewrites computer-specific registry entries; see [Microsoft](http://technet.microsoft.com/sv-se/library/cc759262%28v=WS.10%29.aspx).)
68,272,509
I'm a Python beginner trying to improve my skills. Recently I read the source code of some Python packages and found this code. ``` while True: x = string_variable != -1 if x: time.sleep(1) else: break ``` So, what does the second line mean? You can find the original code on line 149 of `__init__.py` from [here](https://github.com/linouk23/youtube_uploader_selenium). The original code is below. ``` status_container = self.browser.find(By.XPATH, Constant.STATUS_CONTAINER) while True: in_process = status_container.text.find(Constant.UPLOADED) != -1 if in_process: time.sleep(Constant.USER_WAITING_TIME) else: break ```
2021/07/06
[ "https://Stackoverflow.com/questions/68272509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15340967/" ]
Just to add here, the actual code you are looking at is not a simple string variable test like your example: ``` in_process = status_container.text.find(Constant.UPLOADED) != -1 ``` is effectively doing this: ``` string_var.find("foo") != -1 ``` which is looking for an occurrence of "foo" inside string_var. The string.find() function returns the index of the first occurrence of the substring inside the string, or -1 if it is not found. See <https://www.w3schools.com/python/ref_string_find.asp> So `if string_var.find("foo") != -1:` is basically saying "if foo is in string_var". A more common (and readable!) way to do this in Python is simply: ``` if "foo" in string_var: ```
This line has two operators ``` x = string_variable != -1 ``` `=` is the assignment operator and `!=` is the comparison operator meaning `NOT EQUALS`. The comparison on the right-hand side is evaluated first, so `x` will hold the boolean result of `string_variable != -1`.
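A small stand-alone sketch of the same pattern (the variable names here are my own): the right-hand comparison is evaluated first, and the resulting boolean is what gets assigned.

```
# str.find() returns the index of the first match, or -1 if there is no match,
# so "find(...) != -1" acts as a membership test that yields True or False.
status_text = "Upload complete"

in_process = status_text.find("Upload") != -1      # find() -> 0, so True
print(in_process)                                  # True

in_process = status_text.find("Processing") != -1  # find() -> -1, so False
print(in_process)                                  # False
```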
33,107,224
I have 5 astronomy images in Python, each for a different wavelength, so they have different angular resolutions and grid sizes. In order to compare them and create temperature maps, I need them to be at the same angular resolution and on the same grid. I have managed to Gaussian-convolve each image to the same angular resolution as the worst one; however, I am having trouble finding a method to re-grid each image in Python and wondered if anyone knew how to go about doing this? I wish to re-grid the images to the same grid size as the worst-quality image, so I can use that as a reference image if required. Thank you
2015/10/13
[ "https://Stackoverflow.com/questions/33107224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4596311/" ]
If the image headers have the correct World Coordinate System data, you can use the reproject package to resample the images: <http://reproject.readthedocs.org/en/stable/>
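A minimal sketch of what that could look like (the filenames are placeholders, and it assumes both FITS images carry valid WCS headers):

```
# Resample one image onto the pixel grid of a reference (worst-resolution) image.
from astropy.io import fits
from reproject import reproject_interp

hdu_in = fits.open("convolved_image.fits")[0]    # image to regrid
hdu_ref = fits.open("reference_image.fits")[0]   # image whose grid we want

regridded, footprint = reproject_interp(hdu_in, hdu_ref.header)
fits.writeto("regridded_image.fits", regridded, header=hdu_ref.header,
             overwrite=True)
```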
You can use FITS_tools (<https://pypi.python.org/pypi/FITS_tools>; it can also be installed via `$ pip install FITS_tools` in the Anaconda distribution of Python). Both images must have WCS information in their headers. ``` import FITS_tools to_be_projected = 'fits_file_to_be_projected.fits' reference_fits = 'fits_file_serving_as_reference.fits' im1, im2 = FITS_tools.match_fits(to_be_projected, reference_fits) ``` It returns two images projected into the same space, and optionally the header used to project them. Once installed, you can do `help(FITS_tools.match_fits)` for more information.
9,577,252
In ipython, if I press 'esc' followed by 'enter' (and possibly other characters?), readline breaks. I can no longer search through command history using the 'up' key, and some commands (e.g., control-K) fail. Is there a way to reset readline within an ipython session? What is going on when I press these keys?
2012/03/06
[ "https://Stackoverflow.com/questions/9577252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/814354/" ]
The poster's suggested answer doesn't seem to work for me in IPython 0.12+. I can run: ``` get_ipython().init_readline() ``` but that doesn't seem to help. However, I noticed that I sometimes see similar problems in my IPython sessions. It appears that I had inadvertently switched from the default Emacs readline editing mode to vi mode (vim mode). According to the [readline docs](https://www.gnu.org/software/bash/manual/html_node/Readline-vi-Mode.html), to switch between them you are supposed to be able to use the M-C-j key combination, but that only seems to allow me to switch to vi mode. To switch back to Emacs mode one can use C-e, but that didn't appear to work for me - I had to instead use M-C-e - on my Mac (where `ESC` is used as the 'Meta' key) it is: `ESC`+`CTRL`+`e` The contents of my ~/.inputrc are as follows: ``` set meta-flag on set input-meta on set convert-meta off set output-meta on ```
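If the session is stuck in vi mode, it may also be possible to switch back without leaving IPython by handing readline an init line directly. This is a sketch, assuming the `readline` module is the one backing the session:

```
# Force readline back into the default Emacs editing mode for this process.
import readline
readline.parse_and_bind("set editing-mode emacs")
```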
I got impatient. The solution is: ``` IPython.InteractiveShell.init_readline(get_ipython()) ``` It looks like this might be a known bug too: <http://www.catonmat.net/blog/bash-vi-editing-mode-cheat-sheet/>
40,327,136
[This is how I set up my Python 3 environment on Ubuntu 16.04.](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04) I installed TensorFlow 0.8 with the Virtualenv installation, [as I wanted to start the TensorFlow tutorial MNIST For ML Beginners, The MNIST Data part.](https://www.tensorflow.org/versions/r0.8/tutorials/mnist/beginners/index.html#the-mnist-data) This is the way I did it: ``` $ cd environments ~/environments$ pyvenv my_env ~/environments$ ls my_env bin include lib lib64 pyvenv.cfg share ~/environments$ source my_env/bin/activate (my_env) :~/environments$ nano input_data.py (my_env) :~/environments$ python input_data.py Traceback (most recent call last): File "input_data.py", line 10, in <module> import numpy ImportError: No module named 'numpy' ``` The input_data.py is from GitHub, tensorflow/tensorflow/examples/tutorials/mnist/input_data.py. So I installed numpy with ``` $ sudo apt-get install python3-numpy ``` But I still got the same output. Maybe there's something wrong with my installation, or I'm using Python the wrong way. I have been stuck for days and need your help. I have since upgraded TensorFlow to version 0.11 and will try again later.
2016/10/30
[ "https://Stackoverflow.com/questions/40327136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7090901/" ]
Sorry for the very late update; I have updated TensorFlow to the latest version, and I think I made some mistakes back then. Since I installed TensorFlow via the Virtualenv installation, I should load TensorFlow and other packages inside the Virtualenv environment: ``` source ~/tensorflow/bin/activate ``` Then run existing Python code with ``` python3 filename.py ``` or directly type ``` python3 ``` or even ``` ipython3 ``` to write your own code, and test with ``` import tensorflow as tf ``` It shouldn't produce any error messages. If you are using the GPU-support version, add the CUDA paths by typing ``` sudo nano ~/.bash_profile ``` to open the bash profile and adding ``` export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64 export CUDA_HOME=/usr/local/cuda export PATH=/usr/local/cuda-8.0/bin:$PATH ``` inside the file (these are the default paths for CUDA 8.0), then, as the last step, ``` source ~/.bash_profile ``` Ubuntu 16.04 is really annoying sometimes. Thanks for your responses.
Try installing it with pip using ``` sudo apt-get install python-pip sudo pip install numpy==1.11.1 ``` or, in your case, use pip3 instead of pip for Python 3, like ``` sudo apt-get install python3-pip sudo pip3 install numpy==1.11.1 ``` This should help.
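Whichever route you take, a quick check like this (my own sketch) from inside the activated environment shows which interpreter is running and whether it can actually see numpy; with a `pyvenv` environment, an apt-installed numpy in the system site-packages is typically not visible by default:

```
# Run this with the same interpreter you use for input_data.py.
import sys
print(sys.executable)      # which Python is actually running

try:
    import numpy
    print("numpy", numpy.__version__)
except ImportError:
    print("numpy is not visible to this interpreter")
```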
40,327,136
[This is how I set up my Python 3 environment on Ubuntu 16.04.](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04) I installed TensorFlow 0.8 with the Virtualenv installation, [as I wanted to start the TensorFlow tutorial MNIST For ML Beginners, The MNIST Data part.](https://www.tensorflow.org/versions/r0.8/tutorials/mnist/beginners/index.html#the-mnist-data) This is the way I did it: ``` $ cd environments ~/environments$ pyvenv my_env ~/environments$ ls my_env bin include lib lib64 pyvenv.cfg share ~/environments$ source my_env/bin/activate (my_env) :~/environments$ nano input_data.py (my_env) :~/environments$ python input_data.py Traceback (most recent call last): File "input_data.py", line 10, in <module> import numpy ImportError: No module named 'numpy' ``` The input_data.py is from GitHub, tensorflow/tensorflow/examples/tutorials/mnist/input_data.py. So I installed numpy with ``` $ sudo apt-get install python3-numpy ``` But I still got the same output. Maybe there's something wrong with my installation, or I'm using Python the wrong way. I have been stuck for days and need your help. I have since upgraded TensorFlow to version 0.11 and will try again later.
2016/10/30
[ "https://Stackoverflow.com/questions/40327136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7090901/" ]
Sorry for the very late update; I have updated TensorFlow to the latest version, and I think I made some mistakes back then. Since I installed TensorFlow via the Virtualenv installation, I should load TensorFlow and other packages inside the Virtualenv environment: ``` source ~/tensorflow/bin/activate ``` Then run existing Python code with ``` python3 filename.py ``` or directly type ``` python3 ``` or even ``` ipython3 ``` to write your own code, and test with ``` import tensorflow as tf ``` It shouldn't produce any error messages. If you are using the GPU-support version, add the CUDA paths by typing ``` sudo nano ~/.bash_profile ``` to open the bash profile and adding ``` export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64 export CUDA_HOME=/usr/local/cuda export PATH=/usr/local/cuda-8.0/bin:$PATH ``` inside the file (these are the default paths for CUDA 8.0), then, as the last step, ``` source ~/.bash_profile ``` Ubuntu 16.04 is really annoying sometimes. Thanks for your responses.
I suggest first installing [Anaconda](https://docs.continuum.io/anaconda/install#linux-install) (or Miniconda) and then installing with conda. Then you will have all Python dependencies taken care of automatically! The downside of installing with Anaconda is that there is no GPU support (yet). But if you're fine running on CPU, then I think Anaconda is by far the easiest way to install TensorFlow.
12,197,806
I have this script which runs well in Python 2.7 but not in 2.6: ``` def main(): tempfile = '/tmp/tempfile' stats_URI="http://x.x.x.x/stats.json" hits_ = 0 advances_ = 0 requests = 0 failed = 0 as_data = urllib.urlopen(stats_URI).read() data = json.loads(as_data) for x, y in data['hits-seen'].iteritems(): hits_ += y # Total of failed vtop requests for x, y in data['vals-failed'].iteritems(): failed += y requests = data['requests'] advances_ = requests - failed f = open(tempfile,'w') line1 = "hits: " + str(hits_) + "\n" line2 = "advances: " + str(advances_) + "\n" f.write(line1) f.write(line2) f.close() return 0 ``` The error message that I am getting says: ``` Traceback (most recent call last): File "./json.test.py", line 14, in <module> main() File "./json.test.py", line 8, in main as_data = urllib.urlopen(stats_URI).read() File "/usr/lib/python2.6/urllib.py", line 86, in urlopen return opener.open(url) File "/usr/lib/python2.6/urllib.py", line 207, in open return getattr(self, name)(url) File "/usr/lib/python2.6/urllib.py", line 346, in open_http h.endheaders() File "/usr/lib/python2.6/httplib.py", line 908, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 780, in _send_output self.send(msg) File "/usr/lib/python2.6/httplib.py", line 739, in send self.connect() File "/usr/lib/python2.6/httplib.py", line 720, in connect self.timeout) File "/usr/lib/python2.6/socket.py", line 561, in create_connection raise error, msg IOError: [Errno socket error] [Errno 110] Connection timed out ``` What am I missing here? Searches on the internet are not helping much :-(
2012/08/30
[ "https://Stackoverflow.com/questions/12197806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1579819/" ]
When creating a `typedef` alias for a function pointer, the alias is in the function *name* position, so use: ``` typedef unsigned (__stdcall *task )(void *); ``` `task` is now a type alias for: *pointer to a function taking a `void` pointer and returning `unsigned`*.
Since hmjd's answer has been deleted... In C++11 a whole new *alias* syntax has been introduced to make such things much easier: ``` using task = unsigned (__stdcall*)(void*); ``` is equivalent to `typedef unsigned (__stdcall* task)(void*);` (note the position of the alias in the middle of the function signature...). It can also be used for templates: ``` template <typename T> using Vec = std::vector<T>; int main() { Vec<int> x; } ``` This syntax is quite a bit nicer than the old one (and for templates, it actually makes template aliasing possible) but does require a fairly new compiler.
12,197,806
I have this script which runs well in Python 2.7 but not in 2.6: ``` def main(): tempfile = '/tmp/tempfile' stats_URI="http://x.x.x.x/stats.json" hits_ = 0 advances_ = 0 requests = 0 failed = 0 as_data = urllib.urlopen(stats_URI).read() data = json.loads(as_data) for x, y in data['hits-seen'].iteritems(): hits_ += y # Total of failed vtop requests for x, y in data['vals-failed'].iteritems(): failed += y requests = data['requests'] advances_ = requests - failed f = open(tempfile,'w') line1 = "hits: " + str(hits_) + "\n" line2 = "advances: " + str(advances_) + "\n" f.write(line1) f.write(line2) f.close() return 0 ``` The error message that I am getting says: ``` Traceback (most recent call last): File "./json.test.py", line 14, in <module> main() File "./json.test.py", line 8, in main as_data = urllib.urlopen(stats_URI).read() File "/usr/lib/python2.6/urllib.py", line 86, in urlopen return opener.open(url) File "/usr/lib/python2.6/urllib.py", line 207, in open return getattr(self, name)(url) File "/usr/lib/python2.6/urllib.py", line 346, in open_http h.endheaders() File "/usr/lib/python2.6/httplib.py", line 908, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 780, in _send_output self.send(msg) File "/usr/lib/python2.6/httplib.py", line 739, in send self.connect() File "/usr/lib/python2.6/httplib.py", line 720, in connect self.timeout) File "/usr/lib/python2.6/socket.py", line 561, in create_connection raise error, msg IOError: [Errno socket error] [Errno 110] Connection timed out ``` What am I missing here? Searches on the internet are not helping much :-(
2012/08/30
[ "https://Stackoverflow.com/questions/12197806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1579819/" ]
When creating a `typedef` alias for a function pointer, the alias is in the function *name* position, so use: ``` typedef unsigned (__stdcall *task )(void *); ``` `task` is now a type alias for: *pointer to a function taking a `void` pointer and returning `unsigned`*.
Side note: The `__stdcall` part might break your code under different compilers / compiler settings (unless the function is explicitly declared as `__stdcall`, too). I'd stick to the default calling convention and only use proprietary compiler extensions when there is a good reason.
12,197,806
I have this script which runs well in Python 2.7 but not in 2.6: ``` def main(): tempfile = '/tmp/tempfile' stats_URI="http://x.x.x.x/stats.json" hits_ = 0 advances_ = 0 requests = 0 failed = 0 as_data = urllib.urlopen(stats_URI).read() data = json.loads(as_data) for x, y in data['hits-seen'].iteritems(): hits_ += y # Total of failed vtop requests for x, y in data['vals-failed'].iteritems(): failed += y requests = data['requests'] advances_ = requests - failed f = open(tempfile,'w') line1 = "hits: " + str(hits_) + "\n" line2 = "advances: " + str(advances_) + "\n" f.write(line1) f.write(line2) f.close() return 0 ``` The error message that I am getting says: ``` Traceback (most recent call last): File "./json.test.py", line 14, in <module> main() File "./json.test.py", line 8, in main as_data = urllib.urlopen(stats_URI).read() File "/usr/lib/python2.6/urllib.py", line 86, in urlopen return opener.open(url) File "/usr/lib/python2.6/urllib.py", line 207, in open return getattr(self, name)(url) File "/usr/lib/python2.6/urllib.py", line 346, in open_http h.endheaders() File "/usr/lib/python2.6/httplib.py", line 908, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 780, in _send_output self.send(msg) File "/usr/lib/python2.6/httplib.py", line 739, in send self.connect() File "/usr/lib/python2.6/httplib.py", line 720, in connect self.timeout) File "/usr/lib/python2.6/socket.py", line 561, in create_connection raise error, msg IOError: [Errno socket error] [Errno 110] Connection timed out ``` What am I missing here? Searches on the internet are not helping much :-(
2012/08/30
[ "https://Stackoverflow.com/questions/12197806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1579819/" ]
Since hmjd's answer has been deleted... In C++11 a whole new *alias* syntax has been introduced to make such things much easier: ``` using task = unsigned (__stdcall*)(void*); ``` is equivalent to `typedef unsigned (__stdcall* task)(void*);` (note the position of the alias in the middle of the function signature...). It can also be used for templates: ``` template <typename T> using Vec = std::vector<T>; int main() { Vec<int> x; } ``` This syntax is quite a bit nicer than the old one (and for templates, it actually makes template aliasing possible) but does require a fairly new compiler.
Side note: The `__stdcall` part might break your code under different compilers / compiler settings (unless the function is explicitly declared as `__stdcall`, too). I'd stick to the default calling convention and only use proprietary compiler extensions when there is a good reason.
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
add this to makefile: ``` # makefile git clone REPO cd REPO_DIR; python setup.py bdist_wheel cp REPO_DIR/dist/* . rm -rf REPO_DIR/ ``` add this to dockerfile: ``` # dockerfile RUN pip install REPO*.whl ``` and then the package is successfully installed within docker
can you `pip install` the git repo next to your source code and mount it together with your code into the container? ``` cd WORKING_DIRECTORY pip install --target ./ GIT_URL ```
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
``` RUN python -m pip install git+URL_OF_GIT_REPO ```
can you `pip install` the git repo next to your source code and mount it together with your code into the container? ``` cd WORKING_DIRECTORY pip install --target ./ GIT_URL ```
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
in your dockerfile put this code before installing requirements and you will be fine: ``` RUN apt-get update && apt-get install -y git ```
can you `pip install` the git repo next to your source code and mount it together with your code into the container? ``` cd WORKING_DIRECTORY pip install --target ./ GIT_URL ```
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
add this to makefile: ``` # makefile git clone REPO cd REPO_DIR; python setup.py bdist_wheel cp REPO_DIR/dist/* . rm -rf REPO_DIR/ ``` add this to dockerfile: ``` # dockerfile RUN pip install REPO*.whl ``` and then the package is successfully installed within docker
in your dockerfile put this code before installing requirements and you will be fine: ``` RUN apt-get update && apt-get install -y git ```
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
add this to makefile: ``` # makefile git clone REPO cd REPO_DIR; python setup.py bdist_wheel cp REPO_DIR/dist/* . rm -rf REPO_DIR/ ``` add this to dockerfile: ``` # dockerfile RUN pip install REPO*.whl ``` and then the package is successfully installed within docker
``` FROM python:3.9 as builder RUN pip install --target /tmp/site-packages "git+GIT_URL" FROM public.ecr.aws/lambda/python:3.9 COPY --from=builder /tmp/site-packages /var/lang/lib/python3.9/site-packages ``` You could use python:3.9, which has git, or any other image to install the dependencies and then copy them into the AWS image. To find the site-packages location, just run `python -m site` in a Lambda container.
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
``` RUN python -m pip install git+URL_OF_GIT_REPO ```
in your dockerfile put this code before installing requirements and you will be fine: ``` RUN apt-get update && apt-get install -y git ```
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
``` RUN python -m pip install git+URL_OF_GIT_REPO ```
``` FROM python:3.9 as builder RUN pip install --target /tmp/site-packages "git+GIT_URL" FROM public.ecr.aws/lambda/python:3.9 COPY --from=builder /tmp/site-packages /var/lang/lib/python3.9/site-packages ``` You could use python:3.9, which has git, or any other image to install the dependencies and then copy them into the AWS image. To find the site-packages location, just run `python -m site` in a Lambda container.
55,249,689
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a private repository package. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based. Is there any possible way to install git here? Or is there any other, better method I could use to install this private repository package?
2019/03/19
[ "https://Stackoverflow.com/questions/55249689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6763224/" ]
in your dockerfile put this code before installing requirements and you will be fine: ``` RUN apt-get update && apt-get install -y git ```
``` FROM python:3.9 as builder RUN pip install --target /tmp/site-packages "git+GIT_URL" FROM public.ecr.aws/lambda/python:3.9 COPY --from=builder /tmp/site-packages /var/lang/lib/python3.9/site-packages ``` You could use python:3.9, which has git, or any other image to install the dependencies and then copy them into the AWS image. To find the site-packages location, just run `python -m site` in a Lambda container.
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
Simply use select2 plugin to implement this feature Plugin link: [Select2](https://select2.github.io/examples.html) [![enter image description here](https://i.stack.imgur.com/6ora6.png)](https://i.stack.imgur.com/6ora6.png)
You can use **Semantic UI** to implement this feature: <https://semantic-ui.com/introduction/new.html>
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
If you don't like external library, you can use the HTML5 element `datalist`. Example from [W3School](https://www.w3schools.com/tags/tryit.asp?filename=tryhtml5_datalist): ``` <input list="browsers" name="browser"> <datalist id="browsers"> <option value="Internet Explorer"> <option value="Firefox"> <option value="Chrome"> <option value="Opera"> <option value="Safari"> </datalist> ```
You can use **Semantic UI** to implement this feature: <https://semantic-ui.com/introduction/new.html>
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
There's also a plain HTML solution. You can use a [datalist element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/datalist) to display suggestions: ``` <div> <datalist id="suggestions"> <option>First option</option> <option>Second Option</option> </datalist> <input autocomplete="on" list="suggestions"/> </div> ``` *Note: Ensure that the value of the list attribute on the input is the same as the id of the datalist.* Please be advised that not all browsers have (full) support for the datalist element, as you can see on [Can I use](https://caniuse.com/#search=datalist).
I have made a custom dropdown using HTML and CSS, with a search input fixed at the top of the dropdown. You can use the following HTML and CSS with Bootstrap (the `ng-repeat` directive and `{{item}}` binding assume AngularJS is doing the filtering): ``` <div class="dropdown dropdown-scroll"> <button class="btn btn-default event-button dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown"> <span>Search</span> <span class="caret"></span> </button> <div class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1"> <div class="input-group input-group-sm search-control"> <span class="input-group-addon"> <span class="glyphicon glyphicon-search"></span> </span> <input type="text" class="form-control" placeholder="Search"> </div> <ul class="dropdown-list"> <li role="presentation" ng-repeat='item in items | filter:eventSearch'> <a>{{item}}</a> </li> </ul> </div> </div> ``` And the CSS for the above code is: ``` .dropdown.dropdown-scroll .dropdown-list{ max-height: 233px; overflow-y: auto; list-style-type: none; padding-left: 10px; margin-bottom: 0px; } .dropdown-list li{ font-size: 14px; height: 20px; } .dropdown-list li > a{ color: black; } .dropdown-list a:hover{ color: black; } .dropdown-list li:hover{ background-color: lightgray; } ``` [Screenshot of the dropdown](https://i.stack.imgur.com/bTcfr.png)
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
Simply use select2 plugin to implement this feature Plugin link: [Select2](https://select2.github.io/examples.html) [![enter image description here](https://i.stack.imgur.com/6ora6.png)](https://i.stack.imgur.com/6ora6.png)
There's also a plain HTML solution. You can use a [datalist element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/datalist) to display suggestions: ``` <div> <datalist id="suggestions"> <option>First option</option> <option>Second Option</option> </datalist> <input autocomplete="on" list="suggestions"/> </div> ``` *Note: Ensure that the value of the list attribute on the input is the same as the id of the datalist.* Please be advised that not all browsers have (full) support for the datalist element, as you can see on [Can I use](https://caniuse.com/#search=datalist).
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
There's also a plain HTML solution. You can use a [datalist element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/datalist) to display suggestions: ``` <div> <datalist id="suggestions"> <option>First option</option> <option>Second Option</option> </datalist> <input autocomplete="on" list="suggestions"/> </div> ``` *Note: Ensure that the value of the list attribute on the input is the same as the id of the datalist.* Please be advised that not all browsers have (full) support for the datalist element, as you can see on [Can I use](https://caniuse.com/#search=datalist).
Use [Select2](https://select2.org/getting-started/basic-usage). For Installation Process [Click Here](https://select2.org/getting-started/installation)
36,712,967
I want to add search box to a single select drop down option. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding a input tags as above does not work out. I have tried using html5-datalist and it works. I want some other options as html5-datalist does not support scrollable option in chrome. Can anyone suggest any other searchbox options? They should be compatible with django-python framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
If you don't like external library, you can use the HTML5 element `datalist`. Example from [W3School](https://www.w3schools.com/tags/tryit.asp?filename=tryhtml5_datalist): ``` <input list="browsers" name="browser"> <datalist id="browsers"> <option value="Internet Explorer"> <option value="Firefox"> <option value="Chrome"> <option value="Opera"> <option value="Safari"> </datalist> ```
I have made a custom dropdown using HTML and CSS, with a search field fixed at the top of the dropdown. You can use the following HTML and CSS with Bootstrap (note that the item list relies on Angular's `ng-repeat`/`filter` for the actual filtering, so the search input is bound with `ng-model`): ``` <div class="dropdown dropdown-scroll"> <button class="btn btn-default event-button dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown"> <span>Search</span> <span class="caret"></span> </button> <div class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1"> <div class="input-group input-group-sm search-control"> <span class="input-group-addon"> <span class="glyphicon glyphicon-search"></span> </span> <input type="text" class="form-control" placeholder="Search" ng-model="eventSearch"> </div> <ul class="dropdown-list"> <li role="presentation" ng-repeat='item in items | filter:eventSearch'> <a>{{item}}</a> </li> </ul> </div> </div> ``` And the CSS for the above code is: ``` .dropdown.dropdown-scroll .dropdown-list{ max-height: 233px; overflow-y: auto; list-style-type: none; padding-left: 10px; margin-bottom: 0px; } .dropdown-list li{ font-size: 14px; height: 20px; } .dropdown-list li > a{ color: black; } .dropdown-list a:hover{ color: black; } .dropdown-list li:hover{ background-color: lightgray; } ``` [Screenshot of the dropdown](https://i.stack.imgur.com/bTcfr.png)
36,712,967
I want to add a search box to a single-select dropdown. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding input tags as above does not work. I have tried using an HTML5 datalist and it works, but I want some other options because the HTML5 datalist does not support a scrollable list of options in Chrome. Can anyone suggest any other search-box options? They should be compatible with the Django (Python) framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
If you don't want an external library, you can use the HTML5 `datalist` element. Example from [W3Schools](https://www.w3schools.com/tags/tryit.asp?filename=tryhtml5_datalist): ``` <input list="browsers" name="browser"> <datalist id="browsers"> <option value="Internet Explorer"> <option value="Firefox"> <option value="Chrome"> <option value="Opera"> <option value="Safari"> </datalist> ```
Use [Select2](https://select2.org/getting-started/basic-usage). For the installation process, [click here](https://select2.org/getting-started/installation).
36,712,967
I want to add a search box to a single-select dropdown. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding input tags as above does not work. I have tried using an HTML5 datalist and it works, but I want some other options because the HTML5 datalist does not support a scrollable list of options in Chrome. Can anyone suggest any other search-box options? They should be compatible with the Django (Python) framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
I have made a custom dropdown using HTML and CSS, with a search field fixed at the top of the dropdown. You can use the following HTML and CSS with Bootstrap (note that the item list relies on Angular's `ng-repeat`/`filter` for the actual filtering, so the search input is bound with `ng-model`): ``` <div class="dropdown dropdown-scroll"> <button class="btn btn-default event-button dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown"> <span>Search</span> <span class="caret"></span> </button> <div class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1"> <div class="input-group input-group-sm search-control"> <span class="input-group-addon"> <span class="glyphicon glyphicon-search"></span> </span> <input type="text" class="form-control" placeholder="Search" ng-model="eventSearch"> </div> <ul class="dropdown-list"> <li role="presentation" ng-repeat='item in items | filter:eventSearch'> <a>{{item}}</a> </li> </ul> </div> </div> ``` And the CSS for the above code is: ``` .dropdown.dropdown-scroll .dropdown-list{ max-height: 233px; overflow-y: auto; list-style-type: none; padding-left: 10px; margin-bottom: 0px; } .dropdown-list li{ font-size: 14px; height: 20px; } .dropdown-list li > a{ color: black; } .dropdown-list a:hover{ color: black; } .dropdown-list li:hover{ background-color: lightgray; } ``` [Screenshot of the dropdown](https://i.stack.imgur.com/bTcfr.png)
You can use **Semantic UI** to implement this feature: <https://semantic-ui.com/introduction/new.html>
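The Semantic UI link above comes with no example, so here is a rough sketch of its searchable dropdown pattern. It assumes the Semantic UI CSS/JS and jQuery are included on the page; the markup follows the library's documented `ui search selection dropdown` component, and the option values shown are placeholders.

```
<div class="ui search selection dropdown">
  <input type="hidden" name="widget_for">
  <i class="dropdown icon"></i>
  <div class="default text">select</div>
  <div class="menu">
    <div class="item" data-value="1">First option</div>
    <div class="item" data-value="2">Second option</div>
  </div>
</div>

<script>
  // Activate the dropdown; the "search" variant adds a text box that filters the items.
  $('.ui.dropdown').dropdown();
</script>
```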
36,712,967
I want to add a search box to a single-select dropdown. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding input tags as above does not work. I have tried using an HTML5 datalist and it works, but I want some other options because the HTML5 datalist does not support a scrollable list of options in Chrome. Can anyone suggest any other search-box options? They should be compatible with the Django (Python) framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
Simply use the Select2 plugin to implement this feature. Plugin link: [Select2](https://select2.github.io/examples.html) [![enter image description here](https://i.stack.imgur.com/6ora6.png)](https://i.stack.imgur.com/6ora6.png)
Use [Select2](https://select2.org/getting-started/basic-usage). For the installation process, [click here](https://select2.org/getting-started/installation).
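Neither Select2 answer above shows any code, so here is a minimal initialization sketch. It assumes jQuery and the Select2 CSS/JS assets are already loaded on the page; the `#widget_for` id is taken from the question's markup.

```
// Turn the existing <select> into a searchable dropdown.
$(document).ready(function () {
    $('#widget_for').select2({
        placeholder: 'select',  // uses the empty first <option> as the placeholder
        width: '100%'           // optional: stretch to the container width
    });
});
```

Because the question's markup already has an empty first `<option value="">select</option>`, the placeholder works for a single select without further changes.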
36,712,967
I want to add a search box to a single-select dropdown. **Code:** ``` <select id="widget_for" name="{{widget_for}}"> <option value="">select</option> {% for key, value in dr.items %} <input placeholder="This "> <option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option> {% endfor %} </select> ``` Adding input tags as above does not work. I have tried using an HTML5 datalist and it works, but I want some other options because the HTML5 datalist does not support a scrollable list of options in Chrome. Can anyone suggest any other search-box options? They should be compatible with the Django (Python) framework.
2016/04/19
[ "https://Stackoverflow.com/questions/36712967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5324868/" ]
There's also a plain HTML solution. You can use a [datalist element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/datalist) to display suggestions: ``` <div> <datalist id="suggestions"> <option>First option</option> <option>Second Option</option> </datalist> <input autoComplete="on" list="suggestions"/> </div> ``` *Note: ensure that the value of the `list` attribute on the input is the same as the `id` of the datalist.* Please be advised that not all browsers have (full) support for the datalist element, as you can see on [Can I use](https://caniuse.com/#search=datalist).
You can use **Semantic UI** to implement this feature: <https://semantic-ui.com/introduction/new.html>
53,174,590
Im currently making my own model and all works fine with the tensorflow-for-poets-2 demo. I trained multiple pictures in different folders, and the app recognized it. Now I want to display a bounding box around the object. I found an example [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/android) My problem is that my app is returning following error when I am adding my own tflite model: ``` E/AndroidRuntime: FATAL EXCEPTION: inference Process: org.tensorflow.lite.demo, PID: 3495 java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 6] and a Java object with shape [1, 10, 4]. at org.tensorflow.lite.Tensor.throwExceptionIfTypeIsIncompatible(Tensor.java:240) at org.tensorflow.lite.Tensor.copyTo(Tensor.java:116) at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:157) at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:229) at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:194) at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247) at android.os.Handler.handleCallback(Handler.java:789) at android.os.Handler.dispatchMessage(Handler.java:98) at android.os.Looper.loop(Looper.java:164) at android.os.HandlerThread.run(HandlerThread.java:65) ``` How I trained them: ``` python3 scripts/retrain.py \ --bottleneck_dir=bottlenecks \ --how_many_training_steps=500 \ --model_dir=inception \ --output_graph=tf_files/retrained_graph.pb \ --output_labels=tf_files/retrained_labels.txt \ --image_dir=tf_files/ \ --architecture mobilenet_1.0_224 ``` Generating: ``` toco \ --input_format=TENSORFLOW_GRAPHDEF \ --input_file=tf_files/retrained_graph.pb \ --output_format=TFLITE \ --output_file=tf_files/optimized_graph.lite \ --inference_type=FLOAT \ --inference_input_type=FLOAT \ --input_arrays=input \ --output_arrays=final_result \ --input_shapes=1,224,224,3\ --mean_values=128 \ --std_values=128 \ --default_ranges_min=0 \ --default_ranges_max=6 ``` DetectorActivity.java ``` // Configuration values for the prepackaged SSD model. private static final int TF_OD_API_INPUT_SIZE = 224; // 300 private static final boolean TF_OD_API_IS_QUANTIZED = false; // true private static final String TF_OD_API_MODEL_FILE = "optimized_graph.lite"; //detect.tflite private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/retrained_labels.txt"; ```
2018/11/06
[ "https://Stackoverflow.com/questions/53174590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8531947/" ]
You need to set **tinymce.suffix = '.min';** before you init TinyMCE ``` tinymce.suffix = '.min'; tinymce.baseURL = '/js/tinymce'; tinymce.init({ selector: '#editor', menubar: false, plugins: 'code' }); ```
TinyMCE should work out whether to load minified (or non-minified) files based on which TinyMCE file you load (`tinymce.js` or `tinymce.min.js`). Not sure what's happening in your case, but that logic appears to be failing. If you grab the DEV package from <https://www.tiny.cloud/get-tiny/self-hosted/> it comes with both the minified and non-minified versions of each file, so the editor can find what it needs at runtime. ***Note***: The code in the minified and non-minified files functions just the same, so from a functionality perspective it does not really matter whether `theme.min.js` or `theme.js` is loaded. It only adds ~200K to the file size, so even that is immaterial.
53,174,590
Im currently making my own model and all works fine with the tensorflow-for-poets-2 demo. I trained multiple pictures in different folders, and the app recognized it. Now I want to display a bounding box around the object. I found an example [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/android) My problem is that my app is returning following error when I am adding my own tflite model: ``` E/AndroidRuntime: FATAL EXCEPTION: inference Process: org.tensorflow.lite.demo, PID: 3495 java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 6] and a Java object with shape [1, 10, 4]. at org.tensorflow.lite.Tensor.throwExceptionIfTypeIsIncompatible(Tensor.java:240) at org.tensorflow.lite.Tensor.copyTo(Tensor.java:116) at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:157) at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:229) at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:194) at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247) at android.os.Handler.handleCallback(Handler.java:789) at android.os.Handler.dispatchMessage(Handler.java:98) at android.os.Looper.loop(Looper.java:164) at android.os.HandlerThread.run(HandlerThread.java:65) ``` How I trained them: ``` python3 scripts/retrain.py \ --bottleneck_dir=bottlenecks \ --how_many_training_steps=500 \ --model_dir=inception \ --output_graph=tf_files/retrained_graph.pb \ --output_labels=tf_files/retrained_labels.txt \ --image_dir=tf_files/ \ --architecture mobilenet_1.0_224 ``` Generating: ``` toco \ --input_format=TENSORFLOW_GRAPHDEF \ --input_file=tf_files/retrained_graph.pb \ --output_format=TFLITE \ --output_file=tf_files/optimized_graph.lite \ --inference_type=FLOAT \ --inference_input_type=FLOAT \ --input_arrays=input \ --output_arrays=final_result \ --input_shapes=1,224,224,3\ --mean_values=128 \ --std_values=128 \ --default_ranges_min=0 \ --default_ranges_max=6 ``` DetectorActivity.java ``` // Configuration values for the prepackaged SSD model. private static final int TF_OD_API_INPUT_SIZE = 224; // 300 private static final boolean TF_OD_API_IS_QUANTIZED = false; // true private static final String TF_OD_API_MODEL_FILE = "optimized_graph.lite"; //detect.tflite private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/retrained_labels.txt"; ```
2018/11/06
[ "https://Stackoverflow.com/questions/53174590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8531947/" ]
You need to set **tinymce.suffix = '.min';** before you init TinyMCE ``` tinymce.suffix = '.min'; tinymce.baseURL = '/js/tinymce'; tinymce.init({ selector: '#editor', menubar: false, plugins: 'code' }); ```
I think I found the solution for this. There is a property you can set on the tinymce object called `suffix` that resolved this for me. For the first part, I set a `rootURL` property in the page calling TinyMCE and then added the line `tinymce.baseURL = rootURL + 'Scripts/libs/tinymce'` just before the `tinymce.init`. Then, to get it to find `theme.min.js`, I set `tinymce.suffix = '.min'`. This seems to have resolved the issue. Not sure if this is a hack solution, but it is working. If anyone has a better way to do it, please let me know.
23,007,203
Write a program that prompts the user for the name of a file, opens the file for reading, and then outputs how many times each character of the alphabet appears in the file. ``` #!/usr/local/bin/python name=raw_input("Enter file name: ") input_file=open(name,"r") list=input_file.readlines() count = 0 counter = 0 for i in range(65,91): #from A to Z for j in range(0,len(list)): if(i == ord(j)): #problem: ord() takes j as an int here, I want it to get the char at j count = count + 1 print i, count count = 0 for k in range(97,123): #from a to z for l in range(0,len(list)): if(k == ord(l)): #problem: ord() takes l as an int here, I want it to get the char at l counter = counter + 1 print k, counter count = 0 ```
2014/04/11
[ "https://Stackoverflow.com/questions/23007203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1836292/" ]
Create style like this- ``` <style name="Theme.MyAppTheme" parent="Theme.Sherlock.Light"> <item name="android:actionBarStyle">@style/Theme.MyAppTheme.ActionBar</item> <item name="actionBarStyle">@style/Theme.MyAppTheme.ActionBar</item> </style> <style name="Theme.MyAppTheme.ActionBar" parent="Widget.Sherlock.ActionBar"> <item name="android:background">#222222</item> <item name = "background">#222222</item> <item name="android:height">64dip</item> <item name="height">64dip</item> <item name="android:titleTextStyle">@style/Theme.MyAppTheme.ActionBar.TitleTextStyle</item> <item name="titleTextStyle">@style/Theme.MyAppTheme.ActionBar.TitleTextStyle</item> </style> <style name="Theme.MyAppTheme.ActionBar.TitleTextStyle" parent="TextAppearance.Sherlock.Widget.ActionBar.Title"> <item name="android:textColor">#fff</item> <item name="textColor">#fff</item> <item name="android:textStyle">bold</item> <item name="textStyle">bold</item> <item name="android:textSize">32sp</item> <item name="textSize">32sp</item> </style> ``` Change height according to your requirement. And In manifest use `android:theme="@style/Theme.MyAppTheme"`
Try to use this one: ``` <style name="Theme.white_style" parent="@android:style/Theme.Holo.Light.DarkActionBar"> <item name="android:actionBarSize">55dp</item> <item name="actionBarSize">55dp</item> </style> ```
12,735,852
The question is an attempt to get exact instructions on how to do that. There were a few attempts before, which don't seem to be full solutions: [solution to move the file inside the package](https://stackoverflow.com/questions/3071327/problem-accessing-config-files-within-a-python-egg) [solution to read as zip](https://stackoverflow.com/questions/3655352/how-to-access-files-inside-a-python-egg-file) [accessing meta info via get\_distribution](https://stackoverflow.com/questions/177910/accessing-python-eggs-own-metadata) The task at hand is to read the information about the egg the program is running from. There are a few ways as I understand it: 1. hard-code the location of the egg and treat it as a zip archive - this will work, but is not flexible enough, because it will need to be edited and recompiled if the file is moved to another location 2. use `ResourceManager().resource_filename(__name__, filename)` - this seems to be limited in that I cannot access a file that is inside the egg but not inside the package. Notations like "../../EGG-INFO/PKG-INFO" in filename don't work, giving a KeyError. So no good either. 3. use `dist = pkg_resources.get_distribution("dist_name")` and then use the dist object to get information, but I cannot understand from the docs how I should specify my distribution name - it can't find it. So, I'm looking for the correct solution for using `pkg_resources.get_distribution`, plus it would be nice to finally have a full solution for reading any file from inside the egg. Thanks!
2012/10/04
[ "https://Stackoverflow.com/questions/12735852", "https://Stackoverflow.com", "https://Stackoverflow.com/users/673423/" ]
Setuptools/distribute/pkg\_resources is designed to be a sort of transparent overlay on top of the standard Python distutils, which are pretty limited and don't offer a good way of distributing code. Eggs are just a way of putting together a bunch of Python files, data files, and metadata, somewhat similar to Java JARs - but Python packages can be installed from source even without an egg (which is a concept that does not exist in the standard distribution). So there are two scenarios here. Either you're a programmer who is trying to use some file inside a library, in which case, in order to read any file from your distribution, you don't need its full path - you just need an open file object with its content, right? So you should do something like this: ``` from pkg_resources import resource_stream, Requirement resource_stream(Requirement.parse("restez==0.3.2"), "restez/httpconn.py") ``` That will return an open, readable file object for the file you requested from your package distribution. If it's a zipped egg, it will automatically be extracted. Please note that you should specify the package name inside (restez) because the distribution name may be different from the package name (e.g. the distribution Twisted uses the twisted package name). Requirements parsing uses this syntax: <http://setuptools.readthedocs.io/en/latest/pkg_resources.html#requirements-parsing> This should suffice - you shouldn't need to know the path of the egg once you know how to fetch files from inside it. If you really want the full path and you're sure your egg is uncompressed, use resource\_filename instead of resource\_stream. Otherwise, if you're building a "packaging tool" and need to access the contents of your package, be it an egg or anything else, you'll have to do that yourself by hand, just like pkg\_resources does [(pkg\_resources source)](https://bitbucket.org/tarek/distribute/src/tip/pkg_resources.py?at=default). There's no precise API for "querying an egg's content" because there's no use case for that. If you're a programmer just using a library, use pkg\_resources just as I suggested. If you're building a packaging tool, you should know where to put your hands, and that's it.
The [`zipimporter`](http://docs.python.org/library/zipimport.html#zipimporter-objects) used to load a module can be accessed using the [`__loader__`](http://www.python.org/dev/peps/pep-0302/#specification-part-1-the-importer-protocol) attribute on the module, so accessing a file within the egg should be as simple as: ``` __loader__.get_data('path/within/the/egg') ```
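Since the question specifically asks how the distribution name for `pkg_resources.get_distribution` should be specified, here is a minimal sketch. The name to pass is the project name declared in the package's `setup.py` (the `"MyProject"` below is a hypothetical placeholder), which may differ from the importable package name.

```
import pkg_resources

# "MyProject" is the name passed to setup(name=...), not necessarily the import name.
dist = pkg_resources.get_distribution("MyProject")
print(dist.project_name)  # canonical distribution name
print(dist.version)       # version string from the egg metadata
print(dist.location)      # path to the installed egg / distribution
```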
59,821,535
I have a JSON file name.json with below contents, ``` { "Name": [{ "firstName": "John", "lastName": "Stark" }] } ``` Using python how to add the members of Stark family to the JSON file using following list `firstNameList=['Sansa','Arya','Brandon']` Expected Output ``` { "Name": [{ "firstName": "John", "lastName": "Stark" }, { "firstName": "Sansa", "lastName": "Stark" }, { "firstName": "Arya", "lastName": "Stark" }, { "firstName": "Brandon", "lastName": "Stark" } ] } ``` I tried: ``` firstNameList=['arya', 'sansa','brandon'] import json with open('/name.json', 'r+') as f: data = json.load(f) for item in firstNameList: Name['firstname']=item f.seek(0) #reset file position to the beginning. json.dump(data, f, indent=4) f.truncate() ```
2020/01/20
[ "https://Stackoverflow.com/questions/59821535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10522495/" ]
The following should do the trick - it appends the new entries and then writes the updated data back to the file (rewinding and truncating, as in your own attempt): ``` import json with open('name.json', 'r+') as f: data = json.load(f) for name in firstNameList: data["Name"].append({"firstName": name, "lastName": "Stark"}) f.seek(0) json.dump(data, f, indent=4) f.truncate() ```
``` dicts = { "Name": [{ "firstName": "John", "lastName": "Stark" } ] } firstNameList = ['Sansa','Arya','Brandon'] for j in firstNameList: dicts["Name"].append({'firstName': j, 'lastName': 'Stark'}) print(dicts) ``` And you will get ``` {'Name': [{'firstName': 'John', 'lastName': 'Stark'}, {'firstName': 'Sansa', 'lastName': 'Stark'}, {'firstName': 'Arya', 'lastName': 'Stark'}, {'firstName': 'Brandon', 'lastName': 'Stark'}]} ```
59,821,535
I have a JSON file name.json with below contents, ``` { "Name": [{ "firstName": "John", "lastName": "Stark" }] } ``` Using python how to add the members of Stark family to the JSON file using following list `firstNameList=['Sansa','Arya','Brandon']` Expected Output ``` { "Name": [{ "firstName": "John", "lastName": "Stark" }, { "firstName": "Sansa", "lastName": "Stark" }, { "firstName": "Arya", "lastName": "Stark" }, { "firstName": "Brandon", "lastName": "Stark" } ] } ``` I tried: ``` firstNameList=['arya', 'sansa','brandon'] import json with open('/name.json', 'r+') as f: data = json.load(f) for item in firstNameList: Name['firstname']=item f.seek(0) #reset file position to the beginning. json.dump(data, f, indent=4) f.truncate() ```
2020/01/20
[ "https://Stackoverflow.com/questions/59821535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10522495/" ]
You could try this: ``` import json test = '''{ "Name": [{ "firstName": "John", "lastName": "Stark" }] }''' data = json.loads(test) firstNameList = ['Sansa','Arya','Brandon'] for member in firstNameList: test_dict = {'firstName': member, 'lastName': 'Stark'} data['Name'].append(test_dict) dump = json.dumps(data) print(dump) ``` This uses the json library to parse into a dictionary, make the changes and dump back to json. Output ``` {"Name": [{"firstName": "John", "lastName": "Stark"}, {"firstName": "Sansa", "lastName": "Stark"}, {"firstName": "Arya", "lastName": "Stark"}, {"firstName": "Brandon", "lastName": "Stark"}]} ```
``` dicts = { "Name": [{ "firstName": "John", "lastName": "Stark" } ] } firstNameList = ['Sansa','Arya','Brandon'] for j in firstNameList: dicts["Name"].append({'firstName': j, 'lastName': 'Stark'}) print(dicts) ``` And you will get ``` {'Name': [{'firstName': 'John', 'lastName': 'Stark'}, {'firstName': 'Sansa', 'lastName': 'Stark'}, {'firstName': 'Arya', 'lastName': 'Stark'}, {'firstName': 'Brandon', 'lastName': 'Stark'}]} ```
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
The first line of the `Rationale` section of [PEP 338](http://www.python.org/dev/peps/pep-0338/) says: > > Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. > > > So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that `python mymod1.py mymod2.py args` has exactly the same effect. The first line of the `Scope of this proposal` section states: > > In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. > > > With `-m` more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
It's worth mentioning that **this only works if the package has a `__main__.py` file**. Otherwise, the package cannot be executed directly. ``` python -m some_package some_arguments ``` The Python interpreter will look for a `__main__.py` file in the package path to execute. It's equivalent to: ``` python path_to_package/__main__.py somearguments ``` It will execute the content after: ``` if __name__ == "__main__": ```
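A minimal sketch of what such a `__main__.py` might contain (the package and function names here are hypothetical):

```
# some_package/__main__.py
import sys


def main(argv):
    # Placeholder behaviour: echo the arguments passed after `python -m some_package`.
    print("running some_package with args:", argv)


if __name__ == "__main__":
    main(sys.argv[1:])
```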
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
The first line of the `Rationale` section of [PEP 338](http://www.python.org/dev/peps/pep-0338/) says: > > Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. > > > So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that `python mymod1.py mymod2.py args` has exactly the same effect. The first line of the `Scope of this proposal` section states: > > In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. > > > With `-m` more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
Despite this question having been asked and answered several times (e.g., [here](https://stackoverflow.com/q/52441280/1066291), [here](https://stackoverflow.com/q/22241420/1066291), [here](https://stackoverflow.com/q/50821312/1066291), and [here](https://stackoverflow.com/q/46319694/1066291)), in my opinion no existing answer fully or concisely captures all the implications of the `-m` flag. Therefore, the following will attempt to improve on what has come before. Introduction (TLDR) =================== The `-m` flag does a lot of things, not all of which will be needed all the time. In short it can be used to: (1) execute python code from the command line via modulename rather than filename (2) add a directory to `sys.path` for use in `import` resolution and (3) execute python code that contains relative imports from the command line. Preliminaries ============= To explain the `-m` flag we first need to explain a little terminology. Python's primary organizational unit is known as a [module](https://docs.python.org/3/glossary.html#term-module). Module's come in one of two flavors: code modules and package modules. A code module is any file that contains python executable code. A package module is a directory that contains other modules (either code modules or package modules). The most common type of code modules are `*.py` files while the most common type of package modules are directories containing an `__init__.py` file. Python allows modules to be uniquely identified in two distinct ways: modulename and filename. In general, modules are identified by modulename in Python code (e.g., `import <modulename>`) and by filename on the command line (e.g., `python <filename>`). All python interpreters are able to convert modulenames to filenames by following the same few, well-defined rules. These rules hinge on the `sys.path` variable. By altering this variable one can change how Python resolves modulenames into filenames (for more on how this is done see [PEP 302](http://python.org/dev/peps/pep-0302/)). All modules (both code and package) can be executed (i.e., code associated with the module will be evaluated by the Python interpreter). Depending on the execution method (and module type) what code gets evaluated, and when, can change quite a bit. For example, if one executes a package module via `python <filename>` then `<filename>/__main__.py` will be executed. On the other hand, if one executes that same package module via `import <modulename>` then only the package's `__init__.py` will be executed. Historical Development of `-m` ============================== The `-m` flag was first introduced in [Python 2.4.1](https://www.python.org/download/releases/2.4.1/notes/). Initially its only purpose was to provide an alternative means of identifying the python module to execute from the command line. That is, if we knew both the `<filename>` and `<modulename>` for a module then the following two commands were equivalent: `python <filename> <args>` and `python -m <modulename> <args>`. One constraint with this iteration, according to [PEP 338](https://www.python.org/dev/peps/pep-0338/#current-behaviour), was that `-m` only worked with top level modulenames (i.e., modules that could be found directly on `sys.path` without any intervening package modules). With the completion of [PEP 338](https://www.python.org/dev/peps/pep-0338/) the `-m` feature was extended to support `<modulename>` representations beyond the top level. This meant names such as `http.server` were now fully supported. 
This extension also meant that each parent package in modulename was now evaluated (i.e., all parent package `__init__.py` files were evaluated) in addition to the module referenced by the modulename itself. The final major feature enhancement for `-m` came with [PEP 366](https://www.python.org/dev/peps/pep-0366/). With this upgrade `-m` gained the ability to support not only absolute imports but also explicit relative imports when executing modules. This was achieved by changing `-m` so that it set the `__package__` variable to the parent module of the given modulename (in addition to everything else it already did). Use Cases ========= There are two notable use cases for the `-m` flag: 1. To execute modules from the command line for which one may not know their filename. This use case takes advantage of the fact that the Python interpreter knows how to convert modulenames to filenames. This is particularly advantageous when one wants to run stdlib modules or 3rd-party module from the command line. For example, very few people know the filename for the `http.server` module but most people do know its modulename so we can execute it from the command line using `python -m http.server`. 2. To execute a local package containing absolute or relative imports without needing to install it. This use case is detailed in [PEP 338](https://www.python.org/dev/peps/pep-0338/#import-statements-and-the-main-module) and leverages the fact that the current working directory is added to `sys.path` rather than the module's directory. This use case is very similar to using `pip install -e .` to install a package in develop/edit mode. Shortcomings ============ With all the enhancements made to `-m` over the years it still has one major shortcoming -- it can only execute modules written in Python (i.e., `*.py`). For example, if `-m` is used to execute a C compiled code module the following error will be produced, `No code object available for <modulename>` (see [here](https://stackoverflow.com/q/6165824/1066291) for more details). Detailed Comparisons ==================== **Module execution via import statement (i.e., `import <modulename>`):** * `sys.path` is *not* modified in any way * `__name__` is set to the absolute form of `<modulename>` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is *not* evaluated for package modules; the code is evaluated for code modules **Module execution via command line with filename (i.e., `python <filename>`):** * `sys.path` is modified to include the final directory in `<filename>` * `__name__` is set to `'__main__'` * `__package__` is set to `None` * `__init__.py` is not evaluated for any package (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules. **Module execution via command line with modulename (i.e., `python -m <modulename>`):** * `sys.path` is modified to include the current directory * `__name__` is set to `'__main__'` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules Conclusion ---------- The `-m` flag is, at its simplest, a means to execute python scripts from the command line by using modulenames rather than filenames. 
The real power of `-m`, however, is in its ability to combine the power of `import` statements (e.g., support for explicit relative imports and automatic package `__init__` evaluation) with the convenience of the command line.
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
The first line of the `Rationale` section of [PEP 338](http://www.python.org/dev/peps/pep-0338/) says: > > Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. > > > So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that `python mymod1.py mymod2.py args` has exactly the same effect. The first line of the `Scope of this proposal` section states: > > In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. > > > With `-m` more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
I just want to mention one potentially confusing case. Suppose you use `pip3` to install a package `foo`, which contains a `bar` module. This means you can execute `python3 -m foo.bar` from any directory. On the other hand, suppose you also have a directory structure like this: ``` src | +-- foo | +-- __init__.py | +-- bar.py ``` You are at `src/`. When you run `python -m foo.bar`, you are running the local `bar.py` instead of the installed module. However, if you call `python -m foo.bar` from any other directory, you are using the installed module. This behavior doesn't happen if you are using `python` instead of `python -m`, and it can be confusing for beginners. The reason is the order in which Python searches for modules.
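A quick way to see which copy is actually picked up in the situation described above (assuming the layout shown, with `foo.bar` importable both locally and from site-packages):

```
# Run this from src/ and then from another directory to compare the results.
import foo.bar

print(foo.bar.__file__)  # path of the module Python actually loaded
```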
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
The first line of the `Rationale` section of [PEP 338](http://www.python.org/dev/peps/pep-0338/) says: > > Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose. > > > So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that `python mymod1.py mymod2.py args` has exactly the same effect. The first line of the `Scope of this proposal` section states: > > In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line. > > > With `-m` more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
Since this question comes up when you google `Use of "python -m"`, I just wanted to add a quick reference for those who like to modularize code without creating full [python packages](https://packaging.python.org/en/latest/) or [modifying](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) `PYTHONPATH` or `sys.path` every time. Setup ----- Let's setup the following file structure ``` . ├── f1 │   ├── f2 │   │   ├── __init__.py │   │   └── test2.py │   ├── __init__.py │   └── test1.py └── test.py ``` Let the present path be `m1`. Using `python -m` instead of `python ./*` ----------------------------------------- 1. Use `.` qualified *module* names for the files (because they're being treated as modules now). For example, to run the contents in `./f1/test1.py`, we do ```bash python -m f1.test1 ``` and not ```bash python ./f1/test1.py ``` 2. When using the *module* method, the `sys.path` in `test1.py` (when that is run) is `m1`. When using the `./` (relative file) method, the path is `m1/f1`. So we can access all files *in* `m1` (and assume that it is a full python package) using `-m`. This is because the path to `m1` is stored (as `PYTHONPATH`). 3. If we want to run deeply nested "modules", we can still use `.` (just as we do in `import` statements). ```bash # This can be done python -m f1.f2.test2 ``` And in `test2.py`, we can do `from f1.test1 import do_something` without using any [path gimmicks](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) in it. 4. Every time we do module imports this way, the `__init__.py` is automatically called. This is true even when we're nesting. ```bash python -m f1.f2.test2 ``` When we do that, the `./f1/__init__.py` is called, followed by `./f1/f2/__init__.py`.
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
Despite this question having been asked and answered several times (e.g., [here](https://stackoverflow.com/q/52441280/1066291), [here](https://stackoverflow.com/q/22241420/1066291), [here](https://stackoverflow.com/q/50821312/1066291), and [here](https://stackoverflow.com/q/46319694/1066291)), in my opinion no existing answer fully or concisely captures all the implications of the `-m` flag. Therefore, the following will attempt to improve on what has come before. Introduction (TLDR) =================== The `-m` flag does a lot of things, not all of which will be needed all the time. In short it can be used to: (1) execute python code from the command line via modulename rather than filename (2) add a directory to `sys.path` for use in `import` resolution and (3) execute python code that contains relative imports from the command line. Preliminaries ============= To explain the `-m` flag we first need to explain a little terminology. Python's primary organizational unit is known as a [module](https://docs.python.org/3/glossary.html#term-module). Module's come in one of two flavors: code modules and package modules. A code module is any file that contains python executable code. A package module is a directory that contains other modules (either code modules or package modules). The most common type of code modules are `*.py` files while the most common type of package modules are directories containing an `__init__.py` file. Python allows modules to be uniquely identified in two distinct ways: modulename and filename. In general, modules are identified by modulename in Python code (e.g., `import <modulename>`) and by filename on the command line (e.g., `python <filename>`). All python interpreters are able to convert modulenames to filenames by following the same few, well-defined rules. These rules hinge on the `sys.path` variable. By altering this variable one can change how Python resolves modulenames into filenames (for more on how this is done see [PEP 302](http://python.org/dev/peps/pep-0302/)). All modules (both code and package) can be executed (i.e., code associated with the module will be evaluated by the Python interpreter). Depending on the execution method (and module type) what code gets evaluated, and when, can change quite a bit. For example, if one executes a package module via `python <filename>` then `<filename>/__main__.py` will be executed. On the other hand, if one executes that same package module via `import <modulename>` then only the package's `__init__.py` will be executed. Historical Development of `-m` ============================== The `-m` flag was first introduced in [Python 2.4.1](https://www.python.org/download/releases/2.4.1/notes/). Initially its only purpose was to provide an alternative means of identifying the python module to execute from the command line. That is, if we knew both the `<filename>` and `<modulename>` for a module then the following two commands were equivalent: `python <filename> <args>` and `python -m <modulename> <args>`. One constraint with this iteration, according to [PEP 338](https://www.python.org/dev/peps/pep-0338/#current-behaviour), was that `-m` only worked with top level modulenames (i.e., modules that could be found directly on `sys.path` without any intervening package modules). With the completion of [PEP 338](https://www.python.org/dev/peps/pep-0338/) the `-m` feature was extended to support `<modulename>` representations beyond the top level. This meant names such as `http.server` were now fully supported. 
This extension also meant that each parent package in modulename was now evaluated (i.e., all parent package `__init__.py` files were evaluated) in addition to the module referenced by the modulename itself. The final major feature enhancement for `-m` came with [PEP 366](https://www.python.org/dev/peps/pep-0366/). With this upgrade `-m` gained the ability to support not only absolute imports but also explicit relative imports when executing modules. This was achieved by changing `-m` so that it set the `__package__` variable to the parent module of the given modulename (in addition to everything else it already did). Use Cases ========= There are two notable use cases for the `-m` flag: 1. To execute modules from the command line for which one may not know their filename. This use case takes advantage of the fact that the Python interpreter knows how to convert modulenames to filenames. This is particularly advantageous when one wants to run stdlib modules or 3rd-party module from the command line. For example, very few people know the filename for the `http.server` module but most people do know its modulename so we can execute it from the command line using `python -m http.server`. 2. To execute a local package containing absolute or relative imports without needing to install it. This use case is detailed in [PEP 338](https://www.python.org/dev/peps/pep-0338/#import-statements-and-the-main-module) and leverages the fact that the current working directory is added to `sys.path` rather than the module's directory. This use case is very similar to using `pip install -e .` to install a package in develop/edit mode. Shortcomings ============ With all the enhancements made to `-m` over the years it still has one major shortcoming -- it can only execute modules written in Python (i.e., `*.py`). For example, if `-m` is used to execute a C compiled code module the following error will be produced, `No code object available for <modulename>` (see [here](https://stackoverflow.com/q/6165824/1066291) for more details). Detailed Comparisons ==================== **Module execution via import statement (i.e., `import <modulename>`):** * `sys.path` is *not* modified in any way * `__name__` is set to the absolute form of `<modulename>` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is *not* evaluated for package modules; the code is evaluated for code modules **Module execution via command line with filename (i.e., `python <filename>`):** * `sys.path` is modified to include the final directory in `<filename>` * `__name__` is set to `'__main__'` * `__package__` is set to `None` * `__init__.py` is not evaluated for any package (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules. **Module execution via command line with modulename (i.e., `python -m <modulename>`):** * `sys.path` is modified to include the current directory * `__name__` is set to `'__main__'` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules Conclusion ---------- The `-m` flag is, at its simplest, a means to execute python scripts from the command line by using modulenames rather than filenames. 
The real power of `-m`, however, is in its ability to combine the power of `import` statements (e.g., support for explicit relative imports and automatic package `__init__` evaluation) with the convenience of the command line.
It's worth mentioning that **this only works if the package has a `__main__.py` file**. Otherwise, the package cannot be executed directly. ``` python -m some_package some_arguments ``` The Python interpreter will look for a `__main__.py` file in the package path to execute. It's equivalent to: ``` python path_to_package/__main__.py somearguments ``` It will execute the content after: ``` if __name__ == "__main__": ```
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
It's worth mentioning that **this only works if the package has a `__main__.py` file**. Otherwise, the package cannot be executed directly. ``` python -m some_package some_arguments ``` The Python interpreter will look for a `__main__.py` file in the package path to execute. It's equivalent to: ``` python path_to_package/__main__.py somearguments ``` It will execute the content after: ``` if __name__ == "__main__": ```
I just want to mention one potentially confusing case. Suppose you use `pip3` to install a package `foo`, which contains a `bar` module. This means you can execute `python3 -m foo.bar` from any directory. On the other hand, suppose you also have a directory structure like this: ``` src | +-- foo | +-- __init__.py | +-- bar.py ``` You are at `src/`. When you run `python -m foo.bar`, you are running the local `bar.py` instead of the installed module. However, if you call `python -m foo.bar` from any other directory, you are using the installed module. This behavior doesn't happen if you are using `python` instead of `python -m`, and it can be confusing for beginners. The reason is the order in which Python searches for modules.
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
It's worth mentioning that **this only works if the package has a `__main__.py` file**. Otherwise, the package cannot be executed directly. ``` python -m some_package some_arguments ``` The Python interpreter will look for a `__main__.py` file in the package path to execute. It's equivalent to: ``` python path_to_package/__main__.py somearguments ``` It will execute the content after: ``` if __name__ == "__main__": ```
Since this question comes up when you google `Use of "python -m"`, I just wanted to add a quick reference for those who like to modularize code without creating full [python packages](https://packaging.python.org/en/latest/) or [modifying](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) `PYTHONPATH` or `sys.path` every time. Setup ----- Let's setup the following file structure ``` . ├── f1 │   ├── f2 │   │   ├── __init__.py │   │   └── test2.py │   ├── __init__.py │   └── test1.py └── test.py ``` Let the present path be `m1`. Using `python -m` instead of `python ./*` ----------------------------------------- 1. Use `.` qualified *module* names for the files (because they're being treated as modules now). For example, to run the contents in `./f1/test1.py`, we do ```bash python -m f1.test1 ``` and not ```bash python ./f1/test1.py ``` 2. When using the *module* method, the `sys.path` in `test1.py` (when that is run) is `m1`. When using the `./` (relative file) method, the path is `m1/f1`. So we can access all files *in* `m1` (and assume that it is a full python package) using `-m`. This is because the path to `m1` is stored (as `PYTHONPATH`). 3. If we want to run deeply nested "modules", we can still use `.` (just as we do in `import` statements). ```bash # This can be done python -m f1.f2.test2 ``` And in `test2.py`, we can do `from f1.test1 import do_something` without using any [path gimmicks](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) in it. 4. Every time we do module imports this way, the `__init__.py` is automatically called. This is true even when we're nesting. ```bash python -m f1.f2.test2 ``` When we do that, the `./f1/__init__.py` is called, followed by `./f1/f2/__init__.py`.
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
Despite this question having been asked and answered several times (e.g., [here](https://stackoverflow.com/q/52441280/1066291), [here](https://stackoverflow.com/q/22241420/1066291), [here](https://stackoverflow.com/q/50821312/1066291), and [here](https://stackoverflow.com/q/46319694/1066291)), in my opinion no existing answer fully or concisely captures all the implications of the `-m` flag. Therefore, the following will attempt to improve on what has come before. Introduction (TLDR) =================== The `-m` flag does a lot of things, not all of which will be needed all the time. In short it can be used to: (1) execute python code from the command line via modulename rather than filename (2) add a directory to `sys.path` for use in `import` resolution and (3) execute python code that contains relative imports from the command line. Preliminaries ============= To explain the `-m` flag we first need to explain a little terminology. Python's primary organizational unit is known as a [module](https://docs.python.org/3/glossary.html#term-module). Module's come in one of two flavors: code modules and package modules. A code module is any file that contains python executable code. A package module is a directory that contains other modules (either code modules or package modules). The most common type of code modules are `*.py` files while the most common type of package modules are directories containing an `__init__.py` file. Python allows modules to be uniquely identified in two distinct ways: modulename and filename. In general, modules are identified by modulename in Python code (e.g., `import <modulename>`) and by filename on the command line (e.g., `python <filename>`). All python interpreters are able to convert modulenames to filenames by following the same few, well-defined rules. These rules hinge on the `sys.path` variable. By altering this variable one can change how Python resolves modulenames into filenames (for more on how this is done see [PEP 302](http://python.org/dev/peps/pep-0302/)). All modules (both code and package) can be executed (i.e., code associated with the module will be evaluated by the Python interpreter). Depending on the execution method (and module type) what code gets evaluated, and when, can change quite a bit. For example, if one executes a package module via `python <filename>` then `<filename>/__main__.py` will be executed. On the other hand, if one executes that same package module via `import <modulename>` then only the package's `__init__.py` will be executed. Historical Development of `-m` ============================== The `-m` flag was first introduced in [Python 2.4.1](https://www.python.org/download/releases/2.4.1/notes/). Initially its only purpose was to provide an alternative means of identifying the python module to execute from the command line. That is, if we knew both the `<filename>` and `<modulename>` for a module then the following two commands were equivalent: `python <filename> <args>` and `python -m <modulename> <args>`. One constraint with this iteration, according to [PEP 338](https://www.python.org/dev/peps/pep-0338/#current-behaviour), was that `-m` only worked with top level modulenames (i.e., modules that could be found directly on `sys.path` without any intervening package modules). With the completion of [PEP 338](https://www.python.org/dev/peps/pep-0338/) the `-m` feature was extended to support `<modulename>` representations beyond the top level. This meant names such as `http.server` were now fully supported. 
This extension also meant that each parent package in modulename was now evaluated (i.e., all parent package `__init__.py` files were evaluated) in addition to the module referenced by the modulename itself. The final major feature enhancement for `-m` came with [PEP 366](https://www.python.org/dev/peps/pep-0366/). With this upgrade `-m` gained the ability to support not only absolute imports but also explicit relative imports when executing modules. This was achieved by changing `-m` so that it set the `__package__` variable to the parent module of the given modulename (in addition to everything else it already did). Use Cases ========= There are two notable use cases for the `-m` flag: 1. To execute modules from the command line for which one may not know their filename. This use case takes advantage of the fact that the Python interpreter knows how to convert modulenames to filenames. This is particularly advantageous when one wants to run stdlib modules or 3rd-party module from the command line. For example, very few people know the filename for the `http.server` module but most people do know its modulename so we can execute it from the command line using `python -m http.server`. 2. To execute a local package containing absolute or relative imports without needing to install it. This use case is detailed in [PEP 338](https://www.python.org/dev/peps/pep-0338/#import-statements-and-the-main-module) and leverages the fact that the current working directory is added to `sys.path` rather than the module's directory. This use case is very similar to using `pip install -e .` to install a package in develop/edit mode. Shortcomings ============ With all the enhancements made to `-m` over the years it still has one major shortcoming -- it can only execute modules written in Python (i.e., `*.py`). For example, if `-m` is used to execute a C compiled code module the following error will be produced, `No code object available for <modulename>` (see [here](https://stackoverflow.com/q/6165824/1066291) for more details). Detailed Comparisons ==================== **Module execution via import statement (i.e., `import <modulename>`):** * `sys.path` is *not* modified in any way * `__name__` is set to the absolute form of `<modulename>` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is *not* evaluated for package modules; the code is evaluated for code modules **Module execution via command line with filename (i.e., `python <filename>`):** * `sys.path` is modified to include the final directory in `<filename>` * `__name__` is set to `'__main__'` * `__package__` is set to `None` * `__init__.py` is not evaluated for any package (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules. **Module execution via command line with modulename (i.e., `python -m <modulename>`):** * `sys.path` is modified to include the current directory * `__name__` is set to `'__main__'` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules Conclusion ---------- The `-m` flag is, at its simplest, a means to execute python scripts from the command line by using modulenames rather than filenames. 
The real power of `-m`, however, is in its ability to combine the power of `import` statements (e.g., support for explicit relative imports and automatic package `__init__` evaluation) with the convenience of the command line.
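As a quick, hedged illustration of the comparison above (the `demo` package and `show` module names are invented purely for this example), a tiny module that prints the relevant globals makes the differences easy to observe:

```python
# demo/__init__.py  (empty)
# demo/show.py:
import sys

print("__name__    =", __name__)
print("__package__ =", __package__)
print("sys.path[0] =", repr(sys.path[0]))

# From the directory containing demo/:
#   python demo/show.py    ->  __name__ == '__main__', __package__ is None
#                              (or '' on some versions), sys.path[0] is the demo/ directory
#   python -m demo.show    ->  __name__ == '__main__', __package__ == 'demo',
#                              sys.path[0] is the current directory (may show as '')
#   python -c "import demo.show"
#                          ->  __name__ == 'demo.show', __package__ == 'demo'
```

The exact `sys.path[0]` representation varies slightly between Python versions, but the pattern matches the comparison table above.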
I just want to mention one potentially confusing case. Suppose you use `pip3` to install a package `foo`, which contains a `bar` module. So this means you can execute `python3 -m foo.bar` from any directory. On the other hand, you have a directory structure like this: ``` src | +-- foo     |     +-- __init__.py     |     +-- bar.py ``` You are at `src/`. When you run `python -m foo.bar`, you are running the local `bar.py` instead of the installed module. However, if you are calling `python -m foo.bar` from any other directory, you are using the installed module. This doesn't happen when you use `python` without `-m`, and it can be confusing for beginners. The reason is the order in which Python searches `sys.path` for modules.
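A quick way to see which copy wins in a given directory is to ask Python where it found the module; `foo.bar` here is just the hypothetical package from the example above:

```python
import importlib.util

spec = importlib.util.find_spec("foo.bar")     # hypothetical package from the example
print(spec.origin if spec else "foo.bar not found")
# Run from src/ this typically prints the local src/foo/bar.py; run from any other
# directory it prints the site-packages copy, because the first matching sys.path
# entry wins.
```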
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
Despite this question having been asked and answered several times (e.g., [here](https://stackoverflow.com/q/52441280/1066291), [here](https://stackoverflow.com/q/22241420/1066291), [here](https://stackoverflow.com/q/50821312/1066291), and [here](https://stackoverflow.com/q/46319694/1066291)), in my opinion no existing answer fully or concisely captures all the implications of the `-m` flag. Therefore, the following will attempt to improve on what has come before. Introduction (TLDR) =================== The `-m` flag does a lot of things, not all of which will be needed all the time. In short it can be used to: (1) execute python code from the command line via modulename rather than filename, (2) add a directory to `sys.path` for use in `import` resolution, and (3) execute python code that contains relative imports from the command line. Preliminaries ============= To explain the `-m` flag we first need to explain a little terminology. Python's primary organizational unit is known as a [module](https://docs.python.org/3/glossary.html#term-module). Modules come in one of two flavors: code modules and package modules. A code module is any file that contains python executable code. A package module is a directory that contains other modules (either code modules or package modules). The most common type of code modules are `*.py` files while the most common type of package modules are directories containing an `__init__.py` file. Python allows modules to be uniquely identified in two distinct ways: modulename and filename. In general, modules are identified by modulename in Python code (e.g., `import <modulename>`) and by filename on the command line (e.g., `python <filename>`). All python interpreters are able to convert modulenames to filenames by following the same few, well-defined rules. These rules hinge on the `sys.path` variable. By altering this variable one can change how Python resolves modulenames into filenames (for more on how this is done see [PEP 302](http://python.org/dev/peps/pep-0302/)). All modules (both code and package) can be executed (i.e., code associated with the module will be evaluated by the Python interpreter). Depending on the execution method (and module type) what code gets evaluated, and when, can change quite a bit. For example, if one executes a package module via `python <filename>` then `<filename>/__main__.py` will be executed. On the other hand, if one executes that same package module via `import <modulename>` then only the package's `__init__.py` will be executed. Historical Development of `-m` ============================== The `-m` flag was first introduced in [Python 2.4.1](https://www.python.org/download/releases/2.4.1/notes/). Initially its only purpose was to provide an alternative means of identifying the python module to execute from the command line. That is, if we knew both the `<filename>` and `<modulename>` for a module then the following two commands were equivalent: `python <filename> <args>` and `python -m <modulename> <args>`. One constraint with this iteration, according to [PEP 338](https://www.python.org/dev/peps/pep-0338/#current-behaviour), was that `-m` only worked with top level modulenames (i.e., modules that could be found directly on `sys.path` without any intervening package modules). With the completion of [PEP 338](https://www.python.org/dev/peps/pep-0338/) the `-m` feature was extended to support `<modulename>` representations beyond the top level. This meant names such as `http.server` were now fully supported. 
This extension also meant that each parent package in modulename was now evaluated (i.e., all parent package `__init__.py` files were evaluated) in addition to the module referenced by the modulename itself. The final major feature enhancement for `-m` came with [PEP 366](https://www.python.org/dev/peps/pep-0366/). With this upgrade `-m` gained the ability to support not only absolute imports but also explicit relative imports when executing modules. This was achieved by changing `-m` so that it set the `__package__` variable to the parent module of the given modulename (in addition to everything else it already did). Use Cases ========= There are two notable use cases for the `-m` flag: 1. To execute modules from the command line for which one may not know their filename. This use case takes advantage of the fact that the Python interpreter knows how to convert modulenames to filenames. This is particularly advantageous when one wants to run stdlib modules or 3rd-party module from the command line. For example, very few people know the filename for the `http.server` module but most people do know its modulename so we can execute it from the command line using `python -m http.server`. 2. To execute a local package containing absolute or relative imports without needing to install it. This use case is detailed in [PEP 338](https://www.python.org/dev/peps/pep-0338/#import-statements-and-the-main-module) and leverages the fact that the current working directory is added to `sys.path` rather than the module's directory. This use case is very similar to using `pip install -e .` to install a package in develop/edit mode. Shortcomings ============ With all the enhancements made to `-m` over the years it still has one major shortcoming -- it can only execute modules written in Python (i.e., `*.py`). For example, if `-m` is used to execute a C compiled code module the following error will be produced, `No code object available for <modulename>` (see [here](https://stackoverflow.com/q/6165824/1066291) for more details). Detailed Comparisons ==================== **Module execution via import statement (i.e., `import <modulename>`):** * `sys.path` is *not* modified in any way * `__name__` is set to the absolute form of `<modulename>` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is *not* evaluated for package modules; the code is evaluated for code modules **Module execution via command line with filename (i.e., `python <filename>`):** * `sys.path` is modified to include the final directory in `<filename>` * `__name__` is set to `'__main__'` * `__package__` is set to `None` * `__init__.py` is not evaluated for any package (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules. **Module execution via command line with modulename (i.e., `python -m <modulename>`):** * `sys.path` is modified to include the current directory * `__name__` is set to `'__main__'` * `__package__` is set to the immediate parent package in `<modulename>` * `__init__.py` is evaluated for all packages (including its own for package modules) * `__main__.py` is evaluated for package modules; the code is evaluated for code modules Conclusion ---------- The `-m` flag is, at its simplest, a means to execute python scripts from the command line by using modulenames rather than filenames. 
The real power of `-m`, however, is in its ability to combine the power of `import` statements (e.g., support for explicit relative imports and automatic package `__init__` evaluation) with the convenience of the command line.
Since this question comes up when you google `Use of "python -m"`, I just wanted to add a quick reference for those who like to modularize code without creating full [python packages](https://packaging.python.org/en/latest/) or [modifying](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) `PYTHONPATH` or `sys.path` every time. Setup ----- Let's setup the following file structure ``` . ├── f1 │   ├── f2 │   │   ├── __init__.py │   │   └── test2.py │   ├── __init__.py │   └── test1.py └── test.py ``` Let the present path be `m1`. Using `python -m` instead of `python ./*` ----------------------------------------- 1. Use `.` qualified *module* names for the files (because they're being treated as modules now). For example, to run the contents in `./f1/test1.py`, we do ```bash python -m f1.test1 ``` and not ```bash python ./f1/test1.py ``` 2. When using the *module* method, the `sys.path` in `test1.py` (when that is run) is `m1`. When using the `./` (relative file) method, the path is `m1/f1`. So we can access all files *in* `m1` (and assume that it is a full python package) using `-m`. This is because the path to `m1` is stored (as `PYTHONPATH`). 3. If we want to run deeply nested "modules", we can still use `.` (just as we do in `import` statements). ```bash # This can be done python -m f1.f2.test2 ``` And in `test2.py`, we can do `from f1.test1 import do_something` without using any [path gimmicks](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) in it. 4. Every time we do module imports this way, the `__init__.py` is automatically called. This is true even when we're nesting. ```bash python -m f1.f2.test2 ``` When we do that, the `./f1/__init__.py` is called, followed by `./f1/f2/__init__.py`.
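To make point 3 above concrete, here is a minimal sketch of what the two files could contain (`do_something` is just a made-up name for illustration):

```python
# f1/test1.py  -- do_something is a placeholder name for this sketch
def do_something():
    return "hello from f1.test1"

# f1/f2/test2.py
from f1.test1 import do_something    # resolves because m1 is on sys.path under -m

if __name__ == "__main__":
    print("test2 sees:", do_something())

# From m1:
#   python -m f1.f2.test2     -> works (prints the greeting)
#   python f1/f2/test2.py     -> typically fails with "No module named 'f1'",
#                                because sys.path[0] is m1/f1/f2 rather than m1
```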
7,610,001
Could you explain to me what the difference is between calling ``` python -m mymod1 mymod2.py args ``` and ``` python mymod1.py mymod2.py args ``` It seems in both cases `mymod1.py` is called and `sys.argv` is ``` ['mymod1.py', 'mymod2.py', 'args'] ``` So what is the `-m` switch for?
2011/09/30
[ "https://Stackoverflow.com/questions/7610001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/621944/" ]
I just want to mention one potentially confusing case. Suppose you use `pip3` to install a package `foo`, which contains a `bar` module. So this means you can execute `python3 -m foo.bar` from any directory. On the other hand, you have a directory structure like this: ``` src | +-- foo     |     +-- __init__.py     |     +-- bar.py ``` You are at `src/`. When you run `python -m foo.bar`, you are running the local `bar.py` instead of the installed module. However, if you are calling `python -m foo.bar` from any other directory, you are using the installed module. This doesn't happen when you use `python` without `-m`, and it can be confusing for beginners. The reason is the order in which Python searches `sys.path` for modules.
Since this question comes up when you google `Use of "python -m"`, I just wanted to add a quick reference for those who like to modularize code without creating full [python packages](https://packaging.python.org/en/latest/) or [modifying](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) `PYTHONPATH` or `sys.path` every time. Setup ----- Let's setup the following file structure ``` . ├── f1 │   ├── f2 │   │   ├── __init__.py │   │   └── test2.py │   ├── __init__.py │   └── test1.py └── test.py ``` Let the present path be `m1`. Using `python -m` instead of `python ./*` ----------------------------------------- 1. Use `.` qualified *module* names for the files (because they're being treated as modules now). For example, to run the contents in `./f1/test1.py`, we do ```bash python -m f1.test1 ``` and not ```bash python ./f1/test1.py ``` 2. When using the *module* method, the `sys.path` in `test1.py` (when that is run) is `m1`. When using the `./` (relative file) method, the path is `m1/f1`. So we can access all files *in* `m1` (and assume that it is a full python package) using `-m`. This is because the path to `m1` is stored (as `PYTHONPATH`). 3. If we want to run deeply nested "modules", we can still use `.` (just as we do in `import` statements). ```bash # This can be done python -m f1.f2.test2 ``` And in `test2.py`, we can do `from f1.test1 import do_something` without using any [path gimmicks](https://gist.github.com/TheProjectsGuy/a998c0a53313ee0bdc937018603d93a4) in it. 4. Every time we do module imports this way, the `__init__.py` is automatically called. This is true even when we're nesting. ```bash python -m f1.f2.test2 ``` When we do that, the `./f1/__init__.py` is called, followed by `./f1/f2/__init__.py`.
11,021,405
I am designing a text processing program that will generate a list of keywords from a long itemized text document, and combine entries for words that are similar in meaning. There are metrics out there, however I have a new issue of dealing with words that are not in the dictionary that I am using. I am currently using nltk and python, but my issues here are of a much more abstracted nature. Given a word that is not in a dictionary, what would be an efficient way of resolving it to a word that is within your dictionary? My only current solution involves running through the words in the dictionary and picking the word with the shortest Levenshtein distance (editing distance) from the inputted word. Obviously this is a very slow and impractical method, and I don't actually need the absolute best match from within the dictionary, just so long as it is a contained word and it is pretty close. Efficiency is more important for me in the solution, but a basic level of accuracy would also be needed. Any ideas on how to generally resolve some unknown word to a known one in a dictionary?
2012/06/13
[ "https://Stackoverflow.com/questions/11021405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1287834/" ]
Looks like you need a spelling corrector to match words in your dictionary. The code below works and taken directly from this blog <http://norvig.com/spell-correct.html> written by Peter Norvig, ``` import re, collections def words(text): return re.findall('[a-z]+', text.lower()) def train(features): model = collections.defaultdict(lambda: 1) for f in features: model[f] += 1 return model NWORDS = train(words(file('big.txt').read())) alphabet = 'abcdefghijklmnopqrstuvwxyz' def edits1(word): splits = [(word[:i], word[i:]) for i in range(len(word) + 1)] deletes = [a + b[1:] for a, b in splits if b] transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1] replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b] inserts = [a + c + b for a, b in splits for c in alphabet] return set(deletes + transposes + replaces + inserts) def known_edits2(word): return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS) def known(words): return set(w for w in words if w in NWORDS) def correct(word): candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word] return max(candidates, key=NWORDS.get) ``` big.txt is your dictionary containing known words.
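A quick usage sketch, assuming a suitable `big.txt` corpus sits next to the script (the misspellings below are arbitrary examples, and the output depends entirely on the words in the corpus):

```python
for w in ['speling', 'korrecter', 'acess']:
    print('%s -> %s' % (w, correct(w)))   # e.g. speling -> spelling
```

Note that the corrector above uses the Python 2 `file()` built-in; under Python 3 it would need `open()` instead.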
Your task sounds like it's really just non-word spelling correction, so a relatively straightforward solution would be to use an existing spell checker like aspell with a custom dictionary. A quick and dirty approach would be to just use a phonetic mapping like metaphone (which is one of the algorithms used by aspell). For each possible code derived from your dictionary, choose a representative word (e.g., the most frequent word in the group) to suggest as the correction, and pick a default correction for the case where no matches are found. But you'd probably get better results using aspell. If you do want to calculate edit distances, you can do it relatively quickly by storing the dictionary and possible edit operations in tries, see [Brill and Moore (2000)](http://www.aclweb.org/anthology/P/P00/P00-1037.pdf). If you have a decent-sized corpus of spelling errors and their corrections and can implement Brill and Moore's whole approach, you would probably beat aspell by quite a bit, but it sounds like aspell (or any spell checker that lets you create your own dictionary) is sufficient for your task.
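A minimal sketch of the "quick and dirty" phonetic-bucket idea. The key function below is a deliberately crude stand-in, not real metaphone or aspell behaviour, and the dictionary/frequency data are made up:

```python
from collections import defaultdict

def rough_key(word):
    """Very crude phonetic-ish key: lowercase, drop vowels/h/w/y after the
    first letter, collapse repeated letters. Not metaphone or soundex."""
    word = word.lower()
    key = word[0]
    for ch in word[1:]:
        if ch in 'aeiouhwy':
            continue
        if ch != key[-1]:
            key += ch
    return key

# toy data: dictionary words with corpus frequencies (invented)
freq = {'subject': 120, 'suspect': 40, 'support': 95, 'apple': 60}

buckets = defaultdict(list)
for w in freq:
    buckets[rough_key(w)].append(w)

def suggest(unknown, default=None):
    candidates = buckets.get(rough_key(unknown), [])
    return max(candidates, key=freq.get) if candidates else default

print(suggest('subjct'))   # -> 'subject' (maps to the same crude key)
```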
11,021,405
I am designing a text processing program that will generate a list of keywords from a long itemized text document, and combine entries for words that are similar in meaning. There are metrics out there, however I have a new issue of dealing with words that are not in the dictionary that I am using. I am currently using nltk and python, but my issues here are of a much more abstracted nature. Given a word that is not in a dictionary, what would be an efficient way of resolving it to a word that is within your dictionary? My only current solution involves running through the words in the dictionary and picking the word with the shortest Levenshtein distance (editing distance) from the inputted word. Obviously this is a very slow and impractical method, and I don't actually need the absolute best match from within the dictionary, just so long as it is a contained word and it is pretty close. Efficiency is more important for me in the solution, but a basic level of accuracy would also be needed. Any ideas on how to generally resolve some unknown word to a known one in a dictionary?
2012/06/13
[ "https://Stackoverflow.com/questions/11021405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1287834/" ]
Looks like you need a spelling corrector to match words in your dictionary. The code below works and taken directly from this blog <http://norvig.com/spell-correct.html> written by Peter Norvig, ``` import re, collections def words(text): return re.findall('[a-z]+', text.lower()) def train(features): model = collections.defaultdict(lambda: 1) for f in features: model[f] += 1 return model NWORDS = train(words(file('big.txt').read())) alphabet = 'abcdefghijklmnopqrstuvwxyz' def edits1(word): splits = [(word[:i], word[i:]) for i in range(len(word) + 1)] deletes = [a + b[1:] for a, b in splits if b] transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1] replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b] inserts = [a + c + b for a, b in splits for c in alphabet] return set(deletes + transposes + replaces + inserts) def known_edits2(word): return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS) def known(words): return set(w for w in words if w in NWORDS) def correct(word): candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word] return max(candidates, key=NWORDS.get) ``` big.txt is your dictionary containing known words.
Hopefully this answer is not too vague: 1) It sounds like you might need to look at your tokenisation and phrase chunking layers first. This is where you should discard symbolic phrase chunks before submitting them to any fuzzy spell checking. 2) I would still recommend edit distance to come up with alternatives to any 'misspelt' tokens after that, but this may return a list of equally close possibles. 3) When you have your list of possibles, you could then use co-occurrence algorithms to select the most likely candidate from this list. I only have a Java example of some software that could help (<http://www.linguatools.de/disco/disco_en.html#was>). You can submit a word, and this will return the definitive co-occurring words for that word. You can then compare this list to the context of your 'misspelt' word, and select the one with the most overlap from all potential edit distance words.
11,021,405
I am designing a text processing program that will generate a list of keywords from a long itemized text document, and combine entries for words that are similar in meaning. There are metrics out there, however I have a new issue of dealing with words that are not in the dictionary that I am using. I am currently using nltk and python, but my issues here are of a much more abstracted nature. Given a word that is not in a dictionary, what would be an efficient way of resolving it to a word that is within your dictionary? My only current solution involves running through the words in the dictionary and picking the word with the shortest Levenshtein distance (editing distance) from the inputted word. Obviously this is a very slow and impractical method, and I don't actually need the absolute best match from within the dictionary, just so long as it is a contained word and it is pretty close. Efficiency is more important for me in the solution, but a basic level of accuracy would also be needed. Any ideas on how to generally resolve some unknown word to a known one in a dictionary?
2012/06/13
[ "https://Stackoverflow.com/questions/11021405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1287834/" ]
Looks like you need a spelling corrector to match words in your dictionary. The code below works and taken directly from this blog <http://norvig.com/spell-correct.html> written by Peter Norvig, ``` import re, collections def words(text): return re.findall('[a-z]+', text.lower()) def train(features): model = collections.defaultdict(lambda: 1) for f in features: model[f] += 1 return model NWORDS = train(words(file('big.txt').read())) alphabet = 'abcdefghijklmnopqrstuvwxyz' def edits1(word): splits = [(word[:i], word[i:]) for i in range(len(word) + 1)] deletes = [a + b[1:] for a, b in splits if b] transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1] replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b] inserts = [a + c + b for a, b in splits for c in alphabet] return set(deletes + transposes + replaces + inserts) def known_edits2(word): return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS) def known(words): return set(w for w in words if w in NWORDS) def correct(word): candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word] return max(candidates, key=NWORDS.get) ``` big.txt is your dictionary containing known words.
I do not see a reason to use Levenshtein distance to find a word similar in *meaning*. LD looks at form (you want to map "bus" to "truck" not to "bush"). The correct solution depends on what you want to do next. Unless you really need the information in those unknown words, I would simply map all of them to a single generic "UNKNOWN\_WORD" item. Obviously you can cluster the unknown words by their context and other features (say, do they start with a capital letter). For context clustering: since you are interested in meaning, I would use a larger window for those words (say +/- 50 words) and probably use a simple bag of words model. Then you simply find a known word whose vector in this space is closest to the unknown word using some distance metric (say, cosine). Let me know if you need more information about this.
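A small sketch of the bag-of-words/cosine idea with invented toy contexts (real context windows would of course be collected from the corpus):

```python
import math
from collections import Counter

def cosine(c1, c2):
    common = set(c1) & set(c2)
    num = sum(c1[w] * c2[w] for w in common)
    den = math.sqrt(sum(v * v for v in c1.values())) * \
          math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

# toy +/-N word contexts collected for known words and one unknown word
known_contexts = {
    'bus':  Counter(['drive', 'road', 'passenger', 'stop', 'city']),
    'bush': Counter(['garden', 'leaf', 'plant', 'green']),
}
unknown_context = Counter(['road', 'stop', 'passenger', 'drive'])

best = max(known_contexts, key=lambda w: cosine(known_contexts[w], unknown_context))
print(best)   # -> 'bus'
```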
73,247,351
I found a way to query all global address in outlook with python, ``` import win32com.client import csv from datetime import datetime # Outlook outApp = win32com.client.gencache.EnsureDispatch("Outlook.Application") outGAL = outApp.Session.GetGlobalAddressList() entries = outGAL.AddressEntries # Create a dateID date_id = (datetime.today()).strftime('%Y%m%d') # Create empty list to store results data_set = list() # Iterate through Outlook address entries for entry in entries: if entry.Type == "EX": user = entry.GetExchangeUser() if user is not None: if len(user.FirstName) > 0 and len(user.LastName) > 0: row = list() row.append(date_id) row.append(user.Name) row.append(user.FirstName) row.append(user.LastName) row.append(user.JobTitle) row.append(user.City) row.append(user.PrimarySmtpAddress) try: row.append(entry.PropertyAccessor.GetProperty("http://schemas.microsoft.com/mapi/proptag/0x3a26001e")) except: row.append('None') # Store the user details in data_set data_set.append(row) # Print out the result to a csv with headers with open(date_id + 'outlookGALresults.csv', 'w', newline='', encoding='utf-8') as csv_file: headers = ['DateID', 'DisplayName', 'FirstName', 'LastName', 'JobTitle', 'City', 'PrimarySmtp', 'Country'] wr = csv.writer(csv_file, delimiter=',') wr.writerow(headers) for line in data_set: wr.writerow(line) ``` But it querys user one by one and it's very slow. I only need to query user from IT department out of 100,000 users. How can I write the filter to avoid querying all users?
2022/08/05
[ "https://Stackoverflow.com/questions/73247351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9821792/" ]
Actually your program works fine except for one problem with removing an element from a list; look: ``` l = [1,2,3]
print(l.remove(1)) # None
print(l) # [2,3] ``` So `list.remove()` modifies the **mutable** list in place and doesn't return a new one, unlike methods on e.g. **immutable** strings ``` s = "abc"
print(s.upper()) # ABC
print(s) # abc ``` That said, move the removal of the element from the list onto a separate line: ``` three.remove(smallest_1)
smallest_2 = min(three) ``` As a bonus, I am pretty sure you know it can be done better in many ways, but as a simple code improvement - **why not delete the `max` from `three` instead?**
The only problem was the `three.remove(smallest_1)`. Change this and it becomes: ```py def sum_two_smallest_numbers(numbers, index=0, two=[]): if index == 0: two = [numbers[0], numbers[1]] index += 2 if index < len(numbers) and index >= 2: three = two + [numbers[index]] smallest_1 = min(three) three.remove(smallest_1) smallest_2 = min(three) two = [smallest_1, smallest_2] return sum_two_smallest_numbers(numbers, index+1, two) return sum(two) print(sum_two_smallest_numbers([7, 15, 12, 18, 22])) ``` Output: `19` --- Some improvements: ```py def sum_two_smallest_numbers(nums, index=0, two=[]): if not index: # equivalent to if index == 0 two = nums[:2] # rather than [nums[0], nums[1]] index = 2 if index < len(nums): three = two + [nums[index]] three.remove(max(three)) # Remove max rather than add two min return sum_two_smallest_numbers(nums, index+1, three) return sum(two) ```
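Since the first answer notes the whole thing "can be done better in many ways", the simplest non-recursive version of the same function, for comparison, would be:

```python
def sum_two_smallest_numbers(numbers):
    return sum(sorted(numbers)[:2])

print(sum_two_smallest_numbers([7, 15, 12, 18, 22]))   # 19
```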
28,296,476
I finished installing pip on linux, the `pip list` command works. But when using the `pip install` command it got the following error: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files upgrade=self.upgrade, File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement page = self._get_page(main_index_url, req) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page return HTMLPage.get_page(link, req, session=self.session) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page "Cache-Control": "max-age=600", File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send timeout=timeout File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen body=body, headers=headers) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request self._validate_conn(conn) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn conn.connect() File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect ssl_version=resolved_ssl_version) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket return context.wrap_socket(sock) File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket _context=self) File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__ raise ValueError("check_hostname requires server_hostname") ValueError: check_hostname requires server_hostname ``` How can I fix this?
2015/02/03
[ "https://Stackoverflow.com/questions/28296476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1305814/" ]
[pip 6.1.0](https://pypi.python.org/pypi/pip/6.1.0) has been released, fixing this issue. You can upgrade with: ``` pip --trusted-host pypi.python.org install -U pip ``` to self-upgrade. --- *Original answer*: This is caused by a change in Python 2.7.9, which `urllib3` needs to account for. See [issue #543](https://github.com/shazow/urllib3/issues/543) for that project. Your OpenSSL libraries do not support SNI, which means `urllib3` won't pass in the host name to the SSL socket wrapper, but Python 2.7.9 expects the hostname to be passed in anyway for different purposes. `urllib3` is indirectly used by `requests` (see [`requests` issue 2435](https://github.com/kennethreitz/requests/issues/2435)), which in turn is being used by `pip`. I've opened a [ticket to track this from `pip`'s perspective](https://github.com/pypa/pip/issues/2395). The underlying issues have been fixed by the project maintainers and are awaiting a new release. You could install the current development version of `pip` if you are impatient: ``` pip install --trusted-host=github.com -U https://github.com/pypa/pip/archive/develop.zip ``` This'll install pip-6.1.0.dev0; when 6.1.0 is fully released you can upgrade again with `pip install -U pip` to get the final release from PyPI.
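To check whether the interpreter's `ssl` module has the SNI support discussed above, a quick probe like this can help (these attributes exist in Python 2.7.9+ and 3.2+):

```python
import ssl

print(ssl.OPENSSL_VERSION)   # the OpenSSL build the interpreter is linked against
print(ssl.HAS_SNI)           # True if server_hostname / SNI is supported
```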
It is related to urllib3. You can resolve it with urllib3 version 1.25.8. Download that version of urllib3 manually and install it. Even though you install this version, pip will still use its own bundled version, so you have to remove that one and replace it. Usually, installed modules are in PythonXX/Lib/site-packages: 1. Delete urllib3 in PythonXX/Lib/site-packages/pip/\_vendor 2. Move "PythonXX/Lib/site-packages/urllib3" to "PythonXX/Lib/site-packages/pip/\_vendor".
28,296,476
I finished installing pip on linux, the `pip list` command works. But when using the `pip install` command it got the following error: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files upgrade=self.upgrade, File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement page = self._get_page(main_index_url, req) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page return HTMLPage.get_page(link, req, session=self.session) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page "Cache-Control": "max-age=600", File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send timeout=timeout File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen body=body, headers=headers) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request self._validate_conn(conn) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn conn.connect() File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect ssl_version=resolved_ssl_version) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket return context.wrap_socket(sock) File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket _context=self) File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__ raise ValueError("check_hostname requires server_hostname") ValueError: check_hostname requires server_hostname ``` How can I fix this?
2015/02/03
[ "https://Stackoverflow.com/questions/28296476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1305814/" ]
[pip 6.1.0](https://pypi.python.org/pypi/pip/6.1.0) has been released, fixing this issue. You can upgrade with: ``` pip --trusted-host pypi.python.org install -U pip ``` to self-upgrade. --- *Original answer*: This is caused by a change in Python 2.7.9, which `urllib3` needs to account for. See [issue #543](https://github.com/shazow/urllib3/issues/543) for that project. Your OpenSSL libraries do not support SNI, which means `urllib3` won't pass in the host name to the SSL socket wrapper, but Python 2.7.9 expects the hostname to be passed in anyway for different purposes. `urllib3` is indirectly used by `requests` (see [`requests` issue 2435](https://github.com/kennethreitz/requests/issues/2435)), which in turn is being used by `pip`. I've opened a [ticket to track this from `pip`'s perspective](https://github.com/pypa/pip/issues/2395). The underlying issues have been fixed by the project maintainers and are awaiting a new release. You could install the current development version of `pip` if you are impatient: ``` pip install --trusted-host=github.com -U https://github.com/pypa/pip/archive/develop.zip ``` This'll install pip-6.1.0.dev0; when 6.1.0 is fully released you can upgrade again with `pip install -U pip` to get the final release from PyPI.
I encountered this problem and tried the above methods, but they did not work. I finally found it was because I had a VPN turned on. When I turn off the VPN, I can successfully download packages.
28,296,476
I finished installing pip on linux, the `pip list` command works. But when using the `pip install` command it got the following error: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files upgrade=self.upgrade, File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement page = self._get_page(main_index_url, req) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page return HTMLPage.get_page(link, req, session=self.session) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page "Cache-Control": "max-age=600", File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send timeout=timeout File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen body=body, headers=headers) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request self._validate_conn(conn) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn conn.connect() File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect ssl_version=resolved_ssl_version) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket return context.wrap_socket(sock) File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket _context=self) File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__ raise ValueError("check_hostname requires server_hostname") ValueError: check_hostname requires server_hostname ``` How can I fix this?
2015/02/03
[ "https://Stackoverflow.com/questions/28296476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1305814/" ]
I get the same issue, and find that it can be avoided (pip 6.0.8) in my case as follows ``` pip --trusted-host pypi.python.org install <thing> ```
It is related to urllib3. You can resolve it with urllib3 version 1.25.8. Download that version of urllib3 manually and install it. Even though you install this version, pip will still use its own bundled version, so you have to remove that one and replace it. Usually, installed modules are in PythonXX/Lib/site-packages: 1. Delete urllib3 in PythonXX/Lib/site-packages/pip/\_vendor 2. Move "PythonXX/Lib/site-packages/urllib3" to "PythonXX/Lib/site-packages/pip/\_vendor".
28,296,476
I finished installing pip on linux, the `pip list` command works. But when using the `pip install` command it got the following error: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files upgrade=self.upgrade, File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement page = self._get_page(main_index_url, req) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page return HTMLPage.get_page(link, req, session=self.session) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page "Cache-Control": "max-age=600", File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send timeout=timeout File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen body=body, headers=headers) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request self._validate_conn(conn) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn conn.connect() File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect ssl_version=resolved_ssl_version) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket return context.wrap_socket(sock) File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket _context=self) File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__ raise ValueError("check_hostname requires server_hostname") ValueError: check_hostname requires server_hostname ``` How can I fix this?
2015/02/03
[ "https://Stackoverflow.com/questions/28296476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1305814/" ]
I get the same issue, and find that it can be avoided (pip 6.0.8) in my case as follows ``` pip --trusted-host pypi.python.org install <thing> ```
I encountered this problem and tried the above methods, but they did not work. I finally found it was because I had a VPN turned on. When I turn off the VPN, I can successfully download packages.
28,296,476
I finished installing pip on linux, the `pip list` command works. But when using the `pip install` command it got the following error: ``` Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files upgrade=self.upgrade, File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement page = self._get_page(main_index_url, req) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page return HTMLPage.get_page(link, req, session=self.session) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page "Cache-Control": "max-age=600", File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request return super(PipSession, self).request(method, url, *args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send resp = super(CacheControlAdapter, self).send(request, **kw) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send timeout=timeout File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen body=body, headers=headers) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request self._validate_conn(conn) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn conn.connect() File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect ssl_version=resolved_ssl_version) File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket return context.wrap_socket(sock) File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket _context=self) File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__ raise ValueError("check_hostname requires server_hostname") ValueError: check_hostname requires server_hostname ``` How can I fix this?
2015/02/03
[ "https://Stackoverflow.com/questions/28296476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1305814/" ]
It is related to urllib3. You can resolve it with urllib3 version 1.25.8. Download that version of urllib3 manually and install it. Even though you install this version, pip will still use its own bundled version, so you have to remove that one and replace it. Usually, installed modules are in PythonXX/Lib/site-packages: 1. Delete urllib3 in PythonXX/Lib/site-packages/pip/\_vendor 2. Move "PythonXX/Lib/site-packages/urllib3" to "PythonXX/Lib/site-packages/pip/\_vendor".
I encountered this problem and tried the above methods, but they did not work. I finally found it was because I had a VPN turned on. When I turn off the VPN, I can successfully download packages.
63,019,336
I am yet to learn the 'lambda' concept in Python; I tried to look for answers, and every answer includes lambda in it. This is my code; can you please suggest a way to sort it by values? ```python
sorted_dict = {'sir': '113', 'to': '146', 'my': '9', 'jesus': '4', 'saving': '275', 'changing': '72', 'apologize': '285', 'pain': '308', 'sisters': '27', 'forgiving': '36', 'can': '62', 'family': '77', 'sorry': '8', 'is': '360', 'too': '15', 'her': '37', 'wanted': '18', 'being': '44', 'into': '208', 'are': '17', 'just': '97', 'so': '148', 'now': '112', 'be': '19', 'right': '189', 'been': '105', 'no': '56', 'because': '74', 'forgive': '52', 'keep': '88', 'wish': '12', "i'm": '67', 'always': '53', 'ask': '29'}

new_list = list()
for key,value in sorted_dict.items():
    new_tup = (key, value)
    new_list.append(new_tup)

new_list = sorted(new_list)
``` How do I proceed further?
2020/07/21
[ "https://Stackoverflow.com/questions/63019336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13967182/" ]
lambda is often used as the key function when sorting the values in an iterable. The same step of turning the dictionary into a list of tuples can be done with the dict method `dict.items()`. I used a lambda as the sorting key to tell the `sorted` function that I want to sort based on the value located at index 1 of each tuple. ``` sorted_dict = {'sir': '113', 'to': '146', 'my': '9', 'jesus': '4', 'saving': '275', 'changing': '72', 'apologize': '285', 'pain': '308', 'sisters': '27', 'forgiving': '36', 'can': '62', 'family': '77', 'sorry': '8', 'is': '360', 'too': '15', 'her': '37', 'wanted': '18', 'being': '44', 'into': '208', 'are': '17', 'just': '97', 'so': '148', 'now': '112', 'be': '19', 'right': '189', 'been': '105', 'no': '56', 'because': '74', 'forgive': '52', 'keep': '88', 'wish': '12', "i'm": '67', 'always': '53', 'ask': '29'}

new_list = sorted_dict.items()
new_list = sorted(new_list, key=lambda x: int(x[1]))

print(new_list)
```
If you are familiar with other programming concepts, you may have heard of what is called an "inline function"... Lambda is the "inline function" equivalent in Python: it's a function which doesn't have a name and is restricted to a single expression. Now coming to the problem of sorting: the sort functionality in Python accepts two things, 1. the list to be sorted, and 2. a key function that defines how to sort the list. Suppose it's a list of numbers, you don't need the 2nd argument at all. But if, as in your case, it's a list of tuples or, say, a list of dictionaries, you need to tell Python how to sort that list. That is accomplished with the help of the '`key`' argument in the sort function. The code below is an illustration of that. ``` In [1]: l1 = [('a',1), ('b', 3), ('c', 2)]

In [2]: def sortHelper(x):
   ...:     return x[1]
   ...:

In [3]: l1.sort(key=sortHelper)

In [4]: l1
Out[4]: [('a', 1), ('c', 2), ('b', 3)]

In [5]:
``` Now as you see, the `sortHelper` function is just a single-line function, which can very well be written as a `lambda`. ``` lambda x: x[1] ``` So it's common to use lambda functions, but it's not compulsory; you can accomplish the same functionality with normal Python functions as well.
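For completeness, the same sort written with a lambda key instead of the named helper:

```python
l1 = [('a', 1), ('b', 3), ('c', 2)]
l1.sort(key=lambda x: x[1])
print(l1)   # [('a', 1), ('c', 2), ('b', 3)]
```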
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
``` if "subject" in mystring.lower(): # do something ```
If you want to have `subject` match `SUBJECT`, you could use [`re`](http://docs.python.org/library/re.html) ``` import re if re.search('subject', your_string, re.IGNORECASE) ``` Or you could transform the string to lower case first and simply use: ``` if "subject" in your_string.lower() ```
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
``` if "subject" in mystring.lower(): # do something ```
```
if "*SUBJECT123" in mystring and "subject" in mystring:
    # do something
```
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
``` if "subject" in mystring.lower(): # do something ```
Just another way: ``` mystring.find("subject") ``` The above answers will return `True` if the string contains "subject" and `False` otherwise, while `find` will return its position in the string if it exists and -1 otherwise.
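A short demonstration of the difference, including a case-insensitive variant for the `*SUBJECT123` example:

```python
mystring = "*SUBJECT123"
print(mystring.find("subject"))           # -1 (find is case-sensitive)
print(mystring.lower().find("subject"))   # 1, the position of the match
print("subject" in mystring.lower())      # True
```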
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
If you want to have `subject` match `SUBJECT`, you could use [`re`](http://docs.python.org/library/re.html) ``` import re if re.search('subject', your_string, re.IGNORECASE) ``` Or you could transform the string to lower case first and simply use: ``` if "subject" in your_string.lower() ```
```
if "*SUBJECT123" in mystring and "subject" in mystring:
    # do something
```
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
If you want to have `subject` match `SUBJECT`, you could use [`re`](http://docs.python.org/library/re.html) ``` import re if re.search('subject', your_string, re.IGNORECASE) ``` Or you could transform the string to lower case first and simply use: ``` if "subject" in your_string.lower() ```
Just another way: ``` mystring.find("subject") ``` The above answers will return `True` if the string contains "subject" and `False` otherwise, while `find` will return its position in the string if it exists and -1 otherwise.
3,351,218
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
2010/07/28
[ "https://Stackoverflow.com/questions/3351218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277603/" ]
Just another way: ``` mystring.find("subject") ``` The above answers will return `True` if the string contains "subject" and `False` otherwise, while `find` will return its position in the string if it exists and -1 otherwise.
```
if "*SUBJECT123" in mystring and "subject" in mystring:
    # do something
```
29,276,668
I have created a new project in Django. When I run **python manage.py runserver**, I get the message below in the command prompt. ``` /var/www/samplepro/myapp$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
March 26, 2015 - 10:52:31
Django version 1.7.7, using settings 'myapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
``` When I test in a browser, I get a "page can't be displayed" error. Can anyone help me get the Django page to show?
2015/03/26
[ "https://Stackoverflow.com/questions/29276668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2522688/" ]
You stated that you are running the server on Ubuntu while you are trying to connect to it from another PC running Windows. In that case, you should replace `127.0.0.1` with the actual IP address of the Ubuntu server (the one you use for PuTTY).
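If the page must be reachable from the other machine, the development server also has to listen on an externally visible interface rather than only on the loopback address. A common sketch (the address and port are examples; depending on the Django version and the `DEBUG`/`ALLOWED_HOSTS` settings, the server's IP or hostname may also need to be added to `ALLOWED_HOSTS` in `settings.py`):

```
python manage.py runserver 0.0.0.0:8000
```

Then browse to `http://<server-ip>:8000/` from the Windows PC.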
I am also developing in a virtual machine. Use `ifconfig` to figure out the IP address that has been assigned to your virtual machine. Then start the server with: ``` python manage.py runserver <IP>:8000 ``` Of course you could also use a different port instead of `8000`. Then use that address and port in your host browser to access your server. Note: At least in my VM, the virtual machine gets a new address from time to time, so check that in case you have trouble starting your server with the same command. If this does not help, check your hosts file as mentioned in one of the comments.
27,595,162
I am looking for an algorithm, or a modification of one, to efficiently get the sum along each root-to-leaf path of a tree, for example: ```
        Z
       / \
      /   \
     /     \
    /       \
   X         Y
  / \       / \
 /   \     /   \
A     B   C     D
``` The number of final leaves is four, so we have four final sums, and they would be these, respectively: [Z+X+A] [Z+X+B] [Z+Y+C] [Z+Y+D] If someone could guide me in the right direction into getting the sums of all possible depths, that would be great. This will be done in Python with fairly large trees.
2014/12/22
[ "https://Stackoverflow.com/questions/27595162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3947332/" ]
You can recurse over the nodes of the tree keeping the sum from the root down to this point. When you reach a leaf node, you return the current sum in a list of one element. In the internal nodes, you concatenate the lists returned from children. Sample Code: ``` class Node: def __init__(self, value, children): self.value = value self.children = children def tree_sums(root, current_sum): current_sum += root.value if len(root.children) == 0: return [current_sum] subtree_sums = [] for child in root.children: subtree_sums += tree_sums(child, current_sum) return subtree_sums tree = Node(1, [Node(2, []), Node(3, [])]) assert tree_sums(tree, 0) == [3, 4] ```
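Applied to the tree from the question, with arbitrary example values standing in for Z, X, Y, A, B, C and D:

```python
# Z=1, X=2, Y=3, A=4, B=5, C=6, D=7 (arbitrary example values)
tree = Node(1, [
    Node(2, [Node(4, []), Node(5, [])]),   # X with leaves A and B
    Node(3, [Node(6, []), Node(7, [])]),   # Y with leaves C and D
])
print(tree_sums(tree, 0))   # [7, 8, 10, 11]  i.e. Z+X+A, Z+X+B, Z+Y+C, Z+Y+D
```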
Here is what you are looking for. In this example, trees are stored as dicts with a "value" key and a "children" key, and "children" maps to a list. ``` def triesum(t): if not t['children']: return [t['value']] return [t['value'] + n for c in t['children'] for n in triesum(c)] ``` Example: ``` trie = {'value': 5, 'children': [ {'value': 7, 'children': [ {'value': 8, 'children': []}, {'value': 2, 'children': []} ]}, {'value': 4, 'children': [ {'value': 3, 'children': []}, {'value': 6, 'children': []} ]} ]} print sorted(triesum(trie)) == sorted([5 + 7 + 8, 5 + 7 + 2, 5 + 4 + 3, 5 + 4 + 6]) # prints True ```
24,828,771
I am in the process of writing a python script that will (1) obtain a list of y-values for each subplot to plot against a common set of x-values, (2) make each of these subplots a scatter-plot and put it in the appropriate location in the subplot grid, and (3) complete these tasks for different sizes of subplot grids. What I mean by the third statement is this: the test case I'm using results in an array of 64 plots, 8 rows and 8 columns. I would like for the code to be able to handle any size array (roughly between 50 and 80 plots) for various grid dimensions without me having to go back in each time I run the code and say "Okay, here's the number of rows and columns I need." Right now, I'm using an exec command to obtain the y-values, and that's working fine. I'm able to make each of the subplots and get it to populate the grid, but only if I type everything in by hand (64 times of doing the same thing is just dumb, so I know there's got to be a way to automate this). Could anyone suggest a way in which this might be accomplished? I cannot provide data or my code, as this is research material and is not mine to release. Please excuse me if this question is very basic or is something that I should be able to determine from existing documentation. I am very new to programming, and could use a little guidance!
2014/07/18
[ "https://Stackoverflow.com/questions/24828771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3421870/" ]
A useful function for things like this is `plt.subplots(nrows, ncols)` which will return an array (a numpy object array) of subplots on a regular grid. As an example: ``` import matplotlib.pyplot as plt import numpy as np fig, axes = plt.subplots(nrows=4, ncols=4, sharex=True, sharey=True) # "axes" is a 2D array of axes objects. You can index it as axes[i,j] # or iterate over all items with axes.flat # Plot on all axes for ax in axes.flat: x, y = 10 * np.random.random((2, 20)) colors = np.random.random((20, 3)) ax.scatter(x, y, s=80, facecolors=colors, edgecolors='') ax.set(xticks=np.linspace(0, 10, 6), yticks=np.linspace(0, 10, 6)) # Operate on just the top row of axes: for ax, label in zip(axes[0, :], ['A', 'B', 'C', 'D']): ax.set_title(label, size=20) # Operate on just the first column of axes: for ax, label in zip(axes[:, 0], ['E', 'F', 'G', 'H']): ax.set_ylabel(label, size=20) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/2a3fN.png)
If you are going to use Python, you will want to take a stroll through the [Python Tutorial](https://docs.python.org/2.7/tutorial/index.html) to learn the language and data structures available to you. There is good instructional material online; you might want to consider some of the *computer science 101* courses that use Python - MIT OCW, edX, [How To Think Like A Computer Scientist](http://www.greenteapress.com/thinkpython/html/index.html)... Without knowing the full details, I imagine it could be something like this, which is a mixture of *real* code and pseudocode: ``` yvalues = list() while yvalues_exist: yvalues.append(get_yvalue) # limit plot to eight columns columns = 8 quotient, remainder = divmod(len(yvalues), columns) rows = quotient if remainder: rows = rows + 1 # subplot indices start at 1, so enumerate from 1 for n, yvalue in enumerate(yvalues, start=1): plt.subplot(rows, columns, n) plt.plot(yvalue) ``` list(), append(), while, divmod(), len(), if, for, enumerate() are all Python statements, methods, or built-in functions.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
In Windows you will need to right-click a .py file and choose Edit to open it in IDLE, since the default action of double-clicking a .py is to execute the file with python in a shell prompt. To open just IDLE itself, run `C:\Python36\Lib\idlelib\idle.bat`.
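If Python 3 is on your PATH, the same thing can be done from a command prompt without hunting for the .bat file -- a small sketch (the script name is just a placeholder):

```
rem launch IDLE itself
python -m idlelib

rem open a specific script in an IDLE editor window (-e = edit)
python -m idlelib -e myscript.py
```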
If you're using Windows 10, just type `idle` where it says "Type here to search".
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
In Windows you will need to right-click a .py file and choose Edit to open it in IDLE, since the default action of double-clicking a .py is to execute the file with python in a shell prompt. To open just IDLE itself, run `C:\Python36\Lib\idlelib\idle.bat`.
My solution for setting options and then invoking IDLE on a Python script is: ``` Set optn=blah ... Set optn=blah start pythonw C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python36-32\Lib\idlelib\idle.py STFxlate.py ``` This allows you to set up the environment prior to invoking IDLE. It assumes that pythonw is on the current PATH.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
In Windows you will need to right-click a .py file and choose Edit to open it in IDLE, since the default action of double-clicking a .py is to execute the file with python in a shell prompt. To open just IDLE itself, run `C:\Python36\Lib\idlelib\idle.bat`.
Start menu > type `IDLE (Python 3.4.3 <bitnum>-bit)`. Replace `<bitnum>` with 32 if 32-bit, otherwise 64. Example: > > IDLE (Python 3.6.2 64-bit) > > > I agree with the answer that says: > > just type "IDLE" in the Start menu where it says "Type here to search" and press **[ENTER]** > > >
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
In Windows you will need to right-click a .py file and choose Edit to open it in IDLE, since the default action of double-clicking a .py is to execute the file with python in a shell prompt. To open just IDLE itself, run `C:\Python36\Lib\idlelib\idle.bat`.
For those using Anaconda, type idle in the Windows search bar ("Run or Execute command"). This probably won't work if you didn't install Anaconda with the environment-variables option. You can also go to ``` Anaconda3 folder > Scripts > idle.exe ``` and create a shortcut to your desktop.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
If you're using Windows 10, just type `idle` where it says "Type here to search".
My solution for setting options and then invoking IDLE on a Python script is: ``` Set optn=blah ... Set optn=blah start pythonw C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python36-32\Lib\idlelib\idle.py STFxlate.py ``` This allows you to set up the environment prior to invoking IDLE. It assumes that pythonw is on the current PATH.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
If you're using Windows 10, just type `idle` where it says "Type here to search".
For those using Anaconda, type idle in the Windows search bar ("Run or Execute command"). This probably won't work if you didn't install Anaconda with the environment-variables option. You can also go to ``` Anaconda3 folder > Scripts > idle.exe ``` and create a shortcut to your desktop.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
Start menu > type `IDLE (Python 3.4.3 <bitnum>-bit)`. Replace `<bitnum>` with 32 if 32-bit, otherwise 64. Example: > > IDLE (Python 3.6.2 64-bit) > > > I agree with the answer that says: > > just type "IDLE" in the Start menu where it says "Type here to search" and press **[ENTER]** > > >
My solution for setting options and then invoking IDLE on a Python script is: ``` Set optn=blah ... Set optn=blah start pythonw C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python36-32\Lib\idlelib\idle.py STFxlate.py ``` This allows you to set up the environment prior to invoking IDLE. It assumes that pythonw is on the current PATH.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
My solution for setting options and then invoking IDLE on a Python script is: ``` Set optn=blah ... Set optn=blah start pythonw C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python36-32\Lib\idlelib\idle.py STFxlate.py ``` This allows you to set up the environment prior to invoking IDLE. It assumes that pythonw is on the current PATH.
For those using Anaconda, type idle in the Windows search bar ("Run or Execute command"). This probably won't work if you didn't install Anaconda with the environment-variables option. You can also go to ``` Anaconda3 folder > Scripts > idle.exe ``` and create a shortcut to your desktop.
43,159,565
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed. Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720. I'd like to know if there is a way to adjust my current script to download higher resolution versions of this video. I understand there are python packages that might address this, but right now I would like stick to this step-by-step method if possible. [![enter image description here](https://i.stack.imgur.com/0XvsY.png)](https://i.stack.imgur.com/0XvsY.png) [![enter image description here](https://i.stack.imgur.com/Ihmly.png)](https://i.stack.imgur.com/Ihmly.png) **above:** info from Quicktime and OSX ``` """ length: 175 seconds quality: hd720 type: video/mp4; codecs="avc1.64001F, mp4a.40.2" Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT Content-Type: video/mp4 Date: Sat, 01 Apr 2017 16:50:16 GMT Expires: Sat, 01 Apr 2017 16:50:16 GMT Cache-Control: private, max-age=21294 Accept-Ranges: bytes Content-Length: 35933033 Connection: close Alt-Svc: quic=":443"; ma=2592000 X-Content-Type-Options: nosniff Server: gvs 1. """ import urlparse, urllib2 vid = "vzS1Vkpsi5k" save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016" url_init = "https://www.youtube.com/get_video_info?video_id=" + vid resp = urllib2.urlopen(url_init, timeout=10) data = resp.read() info = urlparse.parse_qs(data) title = info['title'] print "length: ", info['length_seconds'][0] + " seconds" stream_map = info['url_encoded_fmt_stream_map'][0] vid_info = stream_map.split(",") mp4_filename = save_title + ".mp4" for video in vid_info: item = urlparse.parse_qs(video) print 'quality: ', item['quality'][0] print 'type: ', item['type'][0] url_download = item['url'][0] resp = urllib2.urlopen(url_download) print resp.headers length = int(resp.headers['Content-Length']) my_file = open(mp4_filename, "w+") done, i = 0, 0 buff = resp.read(1024) while buff: my_file.write(buff) done += 1024 percent = done * 100.0 / length buff = resp.read(1024) if not i%1000: percent = done * 100.0 / length print str(percent) + "%" i += 1 break ```
2017/04/01
[ "https://Stackoverflow.com/questions/43159565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3904031/" ]
Start menu > type `IDLE (Python 3.4.3 <bitnum>-bit)`. Replace `<bitnum>` with 32 if 32-bit, otherwise 64. Example: > > IDLE (Python 3.6.2 64-bit) > > > I agree with the answer that says: > > just type "IDLE" in the Start menu where it says "Type here to search" and press **[ENTER]** > > >
For those using Anaconda, type idle in the Windows search bar ("Run or Execute command"). This probably won't work if you didn't install Anaconda with the environment-variables option. You can also go to ``` Anaconda3 folder > Scripts > idle.exe ``` and create a shortcut to your desktop.
49,327,630
I am new to Python 3 and I'm working on sentiment analysis of tweets. My code begins with a for loop that takes in 50 tweets, which I clean and pre-process. After this (still inside the for loop) I want to save each tweet to a text file (every tweet on a new line). Here's how the code goes: ``` for loop: .. print statments .. if loop: filename=open("withnouns.txt","a") sys.stdout = filename print(new_words)#tokenised tweet that i want to save in txt file print("\n") sys.stdout.close()#i close it because i dont want to save print statements OUTSIDE if loop to be saved in txt file .. .. print statements ``` After running this it shows the error "I/O operation on closed file" on line 71 (the first print statement after the if loop). My question is: is there any way I can temporarily close and then reopen `sys.stdout` and have it active only inside the if loop?
2018/03/16
[ "https://Stackoverflow.com/questions/49327630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9504637/" ]
I'm not sure if this is exactly what you want to do but you can change this ``` filename=open("withnouns.txt","a") sys.stdout = filename print(new_words) print("\n") sys.stdout.close() ``` to ``` filename=open("withnouns.txt","a") filename.write(new_words + "\n") filename.write("\n\n") filename.close() ``` alternatively, you can get the original value of `sys.stdout` from `sys.__stdout__`, so your code becomes ``` filename=open("withnouns.txt","a") sys.stdout = filename print(new_words) print("\n") filename.close() sys.stdout = sys.__stdout__ ```
You're confusing two different kinds of writing to file. `sys.stdout` pipes your output to the console/terminal. This can be written to file but it's very roundabout. Writing to file is different. In Python you should look at the [`csv` module](https://docs.python.org/3/library/csv.html) if you're writing lists of values of all the same length (and maybe even if you're not, it's very easy to use). Open your file outside the loop. In the loop, write to file line by line. Close the file outside the loop. This is automatically done for you if you use the following "with" syntax: ``` import csv with open('file.csv', 'w', newline='') as f: writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) for loop: # tokenize tweet writer.writerow(tweet) ``` Alternatively, loop through and save tokenized-tweets to a list-of-lists. Then, outside and after the loop, write the entire thing to a file: ``` import csv tweets = [] for loop: # tokenize tweet tweets.append(tweet) with open('file.csv', 'w', newline='') as f: writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) writer.writerows(tweets) ```
49,327,630
I am new to Python 3 and I'm working on sentiment analysis of tweets. My code begins with a for loop that takes in 50 tweets, which I clean and pre-process. After this (still inside the for loop) I want to save each tweet to a text file (every tweet on a new line). Here's how the code goes: ``` for loop: .. print statments .. if loop: filename=open("withnouns.txt","a") sys.stdout = filename print(new_words)#tokenised tweet that i want to save in txt file print("\n") sys.stdout.close()#i close it because i dont want to save print statements OUTSIDE if loop to be saved in txt file .. .. print statements ``` After running this it shows the error "I/O operation on closed file" on line 71 (the first print statement after the if loop). My question is: is there any way I can temporarily close and then reopen `sys.stdout` and have it active only inside the if loop?
2018/03/16
[ "https://Stackoverflow.com/questions/49327630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9504637/" ]
You do not need to assign to `sys.stdout` *at all*. Just tell `print()` to write to the file instead, using the `file` argument: ``` print(new_words, file=filename) print("\n", file=filename) ``` There is no need to assign anything to `sys.stdout` now, because now `print()` writes directly to your file instead. You also want to use the file object as a context manager, so it is closed for you: ``` with open("withnouns.txt","a") as filename: print(new_words, file=filename) print("\n", file=filename) ``` You never needed to close the `sys.stdout` reference anyway, you wanted to close `filename` instead, and restore `sys.stdout` to its former state. If you do need to replace `sys.stdout`, you have a few options, from most correct to least: * Use [`contextlib.redirect_stdout()`](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout): ``` import contextlib with contextlib.redirect_stdout(some_fileobject): # do things that write to stdout ``` At the end of the block `stdout` is fixed up for you. * Manually store `sys.stdout` first: ``` old_stdout = sys.stdout sys.stdout = new_object try: # do things that write to stdout finally: sys.stdout = old_stdout ``` * Use the [`sys.__stdout__` copy](https://docs.python.org/3/library/sys.html#sys.__stdout__); this is set on start-up: ``` sys.stdout = new_object try: # do things that write to stdout finally: sys.stdout = sys.__stdout__ ``` You need to take into account that `sys.stdout` may have been replaced by something else before your code runs, and restoring it back to `sys.__stdout__` might be the wrong thing to do.
You're confusing two different kinds of writing to file. `sys.stdout` pipes your output to the console/terminal. This can be written to file but it's very roundabout. Writing to file is different. In Python you should look at the [`csv` module](https://docs.python.org/3/library/csv.html) if you're writing lists of values of all the same length (and maybe even if you're not, it's very easy to use). Open your file outside the loop. In the loop, write to file line by line. Close the file outside the loop. This is automatically done for you if you use the following "with" syntax: ``` import csv with open('file.csv', 'w', newline='') as f: writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) for loop: # tokenize tweet writer.writerow(tweet) ``` Alternatively, loop through and save tokenized-tweets to a list-of-lists. Then, outside and after the loop, write the entire thing to a file: ``` import csv tweets = [] for loop: # tokenize tweet tweets.append(tweet) with open('file.csv', 'w', newline='') as f: writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) writer.writerows(tweets) ```
67,293,952
In short, I have to scrape Flipkart and store the data in Mongodb. Firstly, use [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/lp/try2-in) to get yourself a free managed Mongodb server. Test if you are able to connect to it using python's library pymongo. Secondly, install [Scrapy](https://docs.scrapy.org/en/latest/) and use its documentation to get yourself friendly with scraping using the Scrapy Framework. Then, go to the following 2 urls Men's Topwear <https://www.flipkart.com/clothing-and-accessories/topwear/pr?sid=clo%2Cash&otracker=categorytree&p%5B%5D=facets.ideal_for%255B%255D%3DMen> Women's Footwear <https://www.flipkart.com/womens-footwear/pr?sid=osp,iko&otracker=nmenu_sub_Women_0_Footwear> Each page has 40 products and you have to scrape upto 25 pages from each starting Url (approx. 2000 products) and store the data in Mongodb (database: , collection: flipkart). The data should get inserted in Mongodb directly from the Scrapy framework using Scrapy Mongodb Pipelines. Each product you scrape should have the following data: * `name` [store as string] * `brand` [store as string] * `original_price` [store as float] * `sale_price` [store as float] * `image_url` [store as string] * `product_page_url` [store as string] * `product_category` [store as string] [ it can contain 2 values "women footwear" or "men topwear" ] But I am able to scrape only brand, title sale price and product url the original price has two string and its getting mismatch and i am not able to save the data in mongodb is anyone can help me in this. ``` from ..items import FlipkartItem import json import scrapy import re class FlipkartscrapySpider(scrapy.Spider): name = 'flipkartscrapy' def start_requests(self): urls = ['https://www.flipkart.com/clothing-and-accessories/topwear/pr?sid=clo%2Cash&otracker=categorytree&p%5B%5D=facets.ideal_for%255B%255D%3DMen&page={}', 'https://www.flipkart.com/womens-footwear/pr?sid=osp%2Ciko&otracker=nmenu_sub_Women_0_Footwear&page={}'] for url in urls: for i in range(1,25): x = url.format(i) yield scrapy.Request(url=x, callback=self.parse) def parse(self, response): items = FlipkartItem() name = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "IRpwTa", " " ))]').xpath('text()').getall() brand = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_2WkVRV", " " ))]').xpath('text()').getall() original_price = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_3I9_wc", " " ))]').xpath('text()').getall() sale_price = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_30jeq3", " " ))]').xpath('text()').getall() image_url = response.css('._1a8UBa').css('::attr(src)').getall() product_page_url =response.css('._13oc-S > div').css('::attr(href)').getall() items['name'] = name items['brand'] = brand items['original_price'] = original_price items['sale_price'] = sale_price items['image_url'] = image_url items['product_page_url'] = 'https://www.flipkart.com' + str(product_page_url) yield items ``` Original price output coming like this `original_price`: ``` ['₹', '999', '₹', '1,499', '₹', '1,888', '₹', '2,199', '₹', '1,499', '₹', '1,069', '₹', '1,099', '₹', '1,999', '₹', '2,598', '₹', '1,299', '₹', '1,999', '₹', '899', '₹', '1,099', '₹', '1,699', '₹', '1,399', '₹', '999', '₹', '999', '₹', '1,999', '₹', '1,099', '₹', '1,199', '₹', '999', '₹', '999', '₹', '1,999', '₹', '1,287', '₹', '999', '₹', '1,199', '₹', '899', '₹', '999', '₹', '1,849', '₹', '1,499', '₹', '999', '₹', '999', '₹', '899', '₹', '1,999', '₹', 
'1,849', '₹', '3,499', '₹', '2,397', '₹', '899', '₹', '1,999'] ```
2021/04/28
[ "https://Stackoverflow.com/questions/67293952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15781486/" ]
I had the same issue, and here's what I figured out: You need to include `Store` from redux, and use it as your type definition for your own `store` return value. Short answer: ``` import {combineReducers, Store} from 'redux'; [...] const store:Store = configureStore([...]) [...] export default store; ``` Longer answer: As I understand it, what was happening is that `Store` type uses `$CombinedState` as part of its definition. When `configureStore()` returns, it inherits the `State` type. However since `State` is not explicitly used in your code, that now means that your code includes a value that references `$CombinedState`, despite it not existing anywhere in your code either. When you then try to export it out of your module, you're exporting a value with a type that doesn't exist within your module, and so you get an error. You can import `State` from redux (which will in turn explicity cause your code to bring in `$CombinedState`), then use it to explicitly define `store` that gets assigned the return of `configureStore()`. You should then no longer be exporting unknown named types. You can also be more specific with your `Store` type, as it is a generic: ``` const store:Store<RootState> ``` Although I'm not entirely sure if that would be a circular dependency, since your RootState depends on store.
A potential solution would be to extend the RootState type to include $CombinedState as follows: ``` import { $CombinedState } from '@reduxjs/toolkit'; export type RootState = ReturnType<typeof store.getState> & { readonly [$CombinedState]?: undefined; }; ```
50,847,000
I have a csv file which has contents like the following: **stores.csv** ``` Site, Score, Rank roolee.com,100,125225 piperandscoot.com,29.3,222166 calledtosurf.com,23.8,361542 cladandcloth.com,17.9,208670 neeseesdresses.com,9.6,251016 ... ``` Here's my model. **models.py** ``` class SimilarStore(models.Model): store = models.ForeignKey(Store) csv = FileField() domain = models.CharField(max_length=100, blank=True) score = models.IntegerField(blank=True) rank = models.IntegerField(blank=True) ``` So, I want to upload the `stores.csv` file into the `csv` field so that each column's data goes into `domain`, `score`, and `rank`. I found some resources that create a Python file to parse the data and run it through a command like `python parse.py`. However, I need it to be done by uploading the csv file in the Django admin. Can anyone explain how I can do that?
2018/06/13
[ "https://Stackoverflow.com/questions/50847000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9393766/" ]
You can read the csv file through pandas and then convert the columns to sensible dtypes by using the pandas function 'infer\_objects()'. I hope the code below helps you in this regard, ``` import pandas as pd SimilarStore_df = pd.read_csv('./store.csv') # import the csv file as a pandas DataFrame SimilarStore_df.columns = ['Domain', 'Score', 'Rank'] SimilarStore = SimilarStore_df.infer_objects() # infer better dtypes for the columns print(SimilarStore) print(SimilarStore.columns) ``` Output: ``` Domain Score Rank 0 roolee.com 100.0 125225 1 piperandscoot.com 29.3 222166 2 calledtosurf.com 23.8 361542 3 cladandcloth.com 17.9 208670 4 neeseesdresses.com 9.6 251016 Index(['Domain', 'Score', 'Rank'], dtype='object') ```
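That loads the csv, but the rows still have to end up in the `SimilarStore` model from the question. A rough sketch of that remaining step using only the standard library (the `myapp` module name and the `store` argument are assumptions, and it sidesteps the admin-upload part of the question):

```python
import csv

from myapp.models import SimilarStore  # "myapp" is an assumed app name


def import_similar_stores(csv_path, store):
    """Create one SimilarStore row per line of the uploaded csv."""
    with open(csv_path, newline='') as f:
        # header row is "Site, Score, Rank"; skipinitialspace strips the spaces
        reader = csv.DictReader(f, skipinitialspace=True)
        for row in reader:
            SimilarStore.objects.create(
                store=store,
                domain=row['Site'],
                score=float(row['Score']),  # the model declares IntegerField,
                rank=int(row['Rank']),      # so 29.3 may need a FloatField instead
            )
```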
Just find the table name in your database and the corresponding column names for that table, then use the to\_sql command in pandas.
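A hedged sketch of what that looks like -- the sqlite path and the `myapp_similarstore` table name are assumptions (Django derives the table name from the app and model names), and writing this way bypasses Django's model validation:

```python
import pandas as pd
from sqlalchemy import create_engine

# assumed default Django database file
engine = create_engine('sqlite:///db.sqlite3')

df = pd.read_csv('stores.csv', skipinitialspace=True)
df.columns = ['domain', 'score', 'rank']   # match the model's field names
df.to_sql('myapp_similarstore', engine,    # assumed table name
          if_exists='append', index=False)
```

Any required foreign-key columns (such as `store_id`) would still have to be supplied before this insert succeeds.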
6,906,515
Okay, so I'm writing a very simplistic password cracker in python that brute forces a password with alphanumeric characters. Currently this code only supports 1 character passwords and a password file with a md5 hashed password inside. It will eventually include the option to specify your own character limits (how many characters the cracker tries until it fails). Right now I cannot kill this code when I want it to die. I have included a try and except snippit, however it's not working. What did I do wrong? Code: <http://pastebin.com/MkJGmmDU> ``` import linecache, hashlib alphaNumeric = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",1,2,3,4,5,6,7,8,9,0] class main: def checker(): try: while 1: if hashlib.md5(alphaNumeric[num1]) == passwordHash: print "Success! Your password is: " + str(alphaNumeric[num1]) break except KeyboardInterrupt: print "Keyboard Interrupt." global num1, passwordHash, fileToCrack, numOfChars print "What file do you want to crack?" fileToCrack = raw_input("> ") print "How many characters do you want to try?" numOfChars = raw_input("> ") print "Scanning file..." passwordHash = linecache.getline(fileToCrack, 1)[0:32] num1 = 0 checker() main ```
2011/08/02
[ "https://Stackoverflow.com/questions/6906515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856255/" ]
The way to allow a `KeyboardInterrupt` to end your program is to **do nothing**. They work by depending on nothing catching them in an `except` block; when an exception bubbles all the way out of a program (or thread), it terminates. What you have done is to trap the `KeyboardInterrupt`s and handle them by printing a message and then continuing. As for why the program gets stuck, there is nothing that ever causes `num1` to change, so the md5 calculation is the same calculation every time. If you wanted to iterate over the symbols in `alphaNumeric`, then do that: `for symbol in alphaNumeric: # do something with 'symbol'`. Of course, that will still only consider every possible one-character password. You're going to have to try harder than that... :) I think you're also confused about the use of classes. Python **does not** require you to wrap everything inside a class. The `main` at the end of your program does nothing useful; your code runs because it is evaluated when the compiler tries to figure out what a `main` class is. This is an abuse of syntax. What you want to do is put this code in a main **function**, and **call** the function (the same way you call `checker` currently).
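To make that concrete, a minimal sketch (Python 3 syntax; the file handling from the question is left out) of the shape this answer describes -- a plain function, a loop over the symbols, digests compared as strings, and no `except KeyboardInterrupt` anywhere, so Ctrl-C simply ends the program:

```python
import hashlib

ALPHANUMERIC = ("abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")


def checker(password_hash):
    # try every one-character candidate; compare hex digests, not hash objects
    for symbol in ALPHANUMERIC:
        if hashlib.md5(symbol.encode()).hexdigest() == password_hash:
            return symbol
    return None  # nothing matched

# no try/except wrapper here: pressing Ctrl-C raises KeyboardInterrupt,
# nothing catches it, and the interpreter terminates the program for us
```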
This is what worked for me... ``` import sys try: ....code that hangs.... except KeyboardInterrupt: print "interrupt" sys.exit() ```
6,906,515
Okay, so I'm writing a very simplistic password cracker in python that brute forces a password with alphanumeric characters. Currently this code only supports 1 character passwords and a password file with a md5 hashed password inside. It will eventually include the option to specify your own character limits (how many characters the cracker tries until it fails). Right now I cannot kill this code when I want it to die. I have included a try and except snippit, however it's not working. What did I do wrong? Code: <http://pastebin.com/MkJGmmDU> ``` import linecache, hashlib alphaNumeric = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",1,2,3,4,5,6,7,8,9,0] class main: def checker(): try: while 1: if hashlib.md5(alphaNumeric[num1]) == passwordHash: print "Success! Your password is: " + str(alphaNumeric[num1]) break except KeyboardInterrupt: print "Keyboard Interrupt." global num1, passwordHash, fileToCrack, numOfChars print "What file do you want to crack?" fileToCrack = raw_input("> ") print "How many characters do you want to try?" numOfChars = raw_input("> ") print "Scanning file..." passwordHash = linecache.getline(fileToCrack, 1)[0:32] num1 = 0 checker() main ```
2011/08/02
[ "https://Stackoverflow.com/questions/6906515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856255/" ]
The way to allow a `KeyboardInterrupt` to end your program is to **do nothing**. They work by depending on nothing catching them in an `except` block; when an exception bubbles all the way out of a program (or thread), it terminates. What you have done is to trap the `KeyboardInterrupt`s and handle them by printing a message and then continuing. As for why the program gets stuck, there is nothing that ever causes `num1` to change, so the md5 calculation is the same calculation every time. If you wanted to iterate over the symbols in `alphaNumeric`, then do that: `for symbol in alphaNumeric: # do something with 'symbol'`. Of course, that will still only consider every possible one-character password. You're going to have to try harder than that... :) I think you're also confused about the use of classes. Python **does not** require you to wrap everything inside a class. The `main` at the end of your program does nothing useful; your code runs because it is evaluated when the compiler tries to figure out what a `main` class is. This is an abuse of syntax. What you want to do is put this code in a main **function**, and **call** the function (the same way you call `checker` currently).
Well, when you use that `try` and `except` block, the error is raised when that error occurs. In your case, `KeyboardInterrupt` is your error here. But when `KeyboardInterrupt` is activated, nothing happens. This is due to the `except` part not doing anything that stops the program. You could do this after importing `sys`: ``` try: #Your code# except KeyboardInterrupt: print 'Put Text Here' sys.exit() ``` `sys.exit()` is an easy way to safely exit the program. This can be used for making programs with passwords end the program if the password is wrong, or something like that. That should fix the `except` part. Now to the `try` part: If you have `break` at the end of the `try` part, nothing is going to happen. Why? Because `break` only works inside loops; most people tend to use it in `while` loops. Let's make some examples. Here's one: ``` while 1: print 'djfgerj' break ``` The `break` statement will stop and end the loop immediately, unlike its brother `continue`, which continues the loop. That's just extra information. Now if you have `break` in something like this: ``` if liners == 0: break ``` That's going to depend on where that `if` statement is. If it is in a loop, it is going to stop the loop. If not, nothing is going to happen. I am assuming you made an attempt to exit the *function*, which didn't work. It looks like the program should end, so use `sys.exit()` like I showed you above. Also, you should group that last piece of code (in the class) into a separate function. I hope this helps you!
6,906,515
Okay, so I'm writing a very simplistic password cracker in python that brute forces a password with alphanumeric characters. Currently this code only supports 1 character passwords and a password file with a md5 hashed password inside. It will eventually include the option to specify your own character limits (how many characters the cracker tries until it fails). Right now I cannot kill this code when I want it to die. I have included a try and except snippit, however it's not working. What did I do wrong? Code: <http://pastebin.com/MkJGmmDU> ``` import linecache, hashlib alphaNumeric = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",1,2,3,4,5,6,7,8,9,0] class main: def checker(): try: while 1: if hashlib.md5(alphaNumeric[num1]) == passwordHash: print "Success! Your password is: " + str(alphaNumeric[num1]) break except KeyboardInterrupt: print "Keyboard Interrupt." global num1, passwordHash, fileToCrack, numOfChars print "What file do you want to crack?" fileToCrack = raw_input("> ") print "How many characters do you want to try?" numOfChars = raw_input("> ") print "Scanning file..." passwordHash = linecache.getline(fileToCrack, 1)[0:32] num1 = 0 checker() main ```
2011/08/02
[ "https://Stackoverflow.com/questions/6906515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856255/" ]
Besides printing, you need to actually exit your program when capturing `KeyboardInterrupt`; right now you're only printing a message.
This is what worked for me... ``` import sys try: ....code that hangs.... except KeyboardInterrupt: print "interrupt" sys.exit() ```
6,906,515
Okay, so I'm writing a very simplistic password cracker in python that brute forces a password with alphanumeric characters. Currently this code only supports 1 character passwords and a password file with a md5 hashed password inside. It will eventually include the option to specify your own character limits (how many characters the cracker tries until it fails). Right now I cannot kill this code when I want it to die. I have included a try and except snippit, however it's not working. What did I do wrong? Code: <http://pastebin.com/MkJGmmDU> ``` import linecache, hashlib alphaNumeric = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",1,2,3,4,5,6,7,8,9,0] class main: def checker(): try: while 1: if hashlib.md5(alphaNumeric[num1]) == passwordHash: print "Success! Your password is: " + str(alphaNumeric[num1]) break except KeyboardInterrupt: print "Keyboard Interrupt." global num1, passwordHash, fileToCrack, numOfChars print "What file do you want to crack?" fileToCrack = raw_input("> ") print "How many characters do you want to try?" numOfChars = raw_input("> ") print "Scanning file..." passwordHash = linecache.getline(fileToCrack, 1)[0:32] num1 = 0 checker() main ```
2011/08/02
[ "https://Stackoverflow.com/questions/6906515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856255/" ]
Besides printing, you need to actually exit your program when capturing `KeyboardInterrupt`; right now you're only printing a message.
Well, when you use that `try` and `except` block, the error is raised when that error occurs. In your case, `KeyboardInterrupt` is your error here. But when `KeyboardInterrupt` is activated, nothing happens. This is due to the `except` part not doing anything that stops the program. You could do this after importing `sys`: ``` try: #Your code# except KeyboardInterrupt: print 'Put Text Here' sys.exit() ``` `sys.exit()` is an easy way to safely exit the program. This can be used for making programs with passwords end the program if the password is wrong, or something like that. That should fix the `except` part. Now to the `try` part: If you have `break` at the end of the `try` part, nothing is going to happen. Why? Because `break` only works inside loops; most people tend to use it in `while` loops. Let's make some examples. Here's one: ``` while 1: print 'djfgerj' break ``` The `break` statement will stop and end the loop immediately, unlike its brother `continue`, which continues the loop. That's just extra information. Now if you have `break` in something like this: ``` if liners == 0: break ``` That's going to depend on where that `if` statement is. If it is in a loop, it is going to stop the loop. If not, nothing is going to happen. I am assuming you made an attempt to exit the *function*, which didn't work. It looks like the program should end, so use `sys.exit()` like I showed you above. Also, you should group that last piece of code (in the class) into a separate function. I hope this helps you!
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
Your `JSON` object contains several root (or top-level) objects indexed by keys such as `"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1"`. From your question it seems you want to access the `id` field of these elements. For that, the simplest way is probably to use the `json` module - included in the standard library - to load the string and then access the `id` of each element. ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) for key, value in my_json_dict.items(): print("Id is {} for item {}".format(value["id"], key)) ``` However, if you just want the keys of your `JSON` object, all you need is ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) print(["Got item with id '{}'".format(key) for key in my_json_dict.keys()]) ```
The following expression is what you need: ``` $..id ``` Proof: ``` In [12]: vv = json.loads(my_input_string) In [13]: jsonpath_expr = parse('$..id') In [14]: [x.value for x in jsonpath_expr.find(vv)] Out[14]: [u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1', u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2'] ``` However, *please be careful* when using the `..` operator, since it does a deep scan. That means that if there is any other object inside your JSON with an `id` field, the value of that field will also be added to the result of the query.
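If the deep scan is a concern, here is a rough sketch of a narrower query — assuming the `id` fields you want live only on the top-level objects, the wildcard stays one level below the root instead of scanning recursively (`my_input_string` is a placeholder for the JSON from the question):

```python
import json
from jsonpath_rw import parse

my_input_string = "..."  # placeholder: the JSON document from the question
data = json.loads(my_input_string)

# '*.id' matches the id field of each top-level child only,
# so an id buried deeper in the structure is not picked up
expr = parse('*.id')
print([match.value for match in expr.find(data)])
```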
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
The following expression is what you need: ``` $..id ``` Proof: ``` In [12]: vv = json.loads(my_input_string) In [13]: jsonpath_expr = parse('$..id') In [14]: [x.value for x in jsonpath_expr.find(vv)] Out[14]: [u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1', u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2'] ``` However, *please be careful* when using the `..` operator, since it does a deep scan. That means that if there is any other object inside your JSON with an `id` field, the value of that field will also be added to the result of the query.
You can get `id` like this (save your JSON string as a **single-line** file named `test.json`): ``` import simplejson as json with open("test.json") as fp: line = fp.readline() data = json.loads(line) for k,v in data.items(): print v.get("id","no id found") # no id found is the default value in case there is no id defined ```
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
Your `JSON` object contains several root (or top-level) objects indexed by keys such as `"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1"`. From your question it seems you want to access the `id` field of these elements. For that, the simplest way is probably to use the `json` module - included in the standard library - to load the string and then access the `id` of each element. ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) for key, value in my_json_dict.items(): print("Id is {} for item {}".format(value["id"], key)) ``` However, if you just want the keys of your `JSON` object, all you need is ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) print(["Got item with id '{}'".format(key) for key in my_json_dict.keys()]) ```
You don't have a base/root element - just a collection of keys and values. You can iterate through them with something like this: ``` import json import pprint values = json.loads(YOUR_JSON_STRING) for v in values: print v, pprint.pprint(values[v]) ``` That will print out the following two elements (each with its own key) in the resulting JSON object: ``` 192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1{u'@class': u'com.barco.compose.media.transcode.internal.h264.H264Gateway', u'id': u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1', u'owner': {u'@class': u'com.barco.compose.media.transcode.Acma', u'destination': [u'', u''], u'framerateDivider': [1, 1], u'ipvsDescription': u'NMSDesc', u'ipvsHLS': True, u'ipvsProfile': u'high', u'ipvsSDP': [u''], u'ipvsTagName': [u''], u'ipvsTagValue': [u''], u'ipvsTitle': u'DiORStream', u'output': [u'high', u'low'], u'profile': [u'baseline'], u'resolution': [1, 1], u'source': u'dvi1-1-mna-1890322558', u'stream': u'udp://239.1.1.7:5004', u'type': u'video'}, u'type': u'video', u'uri': {u'high': u'rtp://239.1.1.2:5006', u'low': u'rtp://239.1.1.1:5006'}} None 192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2{u'@class': u'com.barco.compose.media.transcode.internal.h264.H264Gateway', u'id': u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2', u'owner': {u'@class': u'com.barco.compose.media.transcode.Acma', u'destination': [u'', u''], u'framerateDivider': [1, 1], u'ipvsDescription': u'nikhilDesc', u'ipvsHLS': True, u'ipvsProfile': u'high', u'ipvsSDP': [u''], u'ipvsTagName': [u''], u'ipvsTagValue': [u''], u'ipvsTitle': u'nikhil', u'output': [u'high', u'low'], u'profile': [u'baseline'], u'resolution': [1, 1], u'source': u'dvi1-1-mna-1890322558', u'stream': u'udp://239.1.1.7:5004', u'type': u'video'}, u'type': u'video', u'uri': {u'high': u'rtp://239.1.1.4:5006', u'low': u'rtp://239.1.1.3:5006'}} ```
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
Your `JSON` object contains several root (or top-level) objects indexed by keys such as `"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1"`. From your question it seems you want to access the `id` field of these elements. For that, the simplest way is probably to use the `json` module - included in the standard library - to load the string and then access the `id` of each element. ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) for key, value in my_json_dict.items(): print("Id is {} for item {}".format(value["id"], key)) ``` However, if you just want the keys of your `JSON` object, all you need is ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) print(["Got item with id '{}'".format(key) for key in my_json_dict.keys()]) ```
You can get `id` like this (save your JSON string as a **single-line** file named `test.json`): ``` import simplejson as json with open("test.json") as fp: line = fp.readline() data = json.loads(line) for k,v in data.items(): print v.get("id","no id found") # no id found is the default value in case there is no id defined ```
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
Your `JSON` object contains several root (or top-level) objects indexed by keys such as `"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1"`. From your question it seems you want to access the `id` field of these elements. For that, the simplest way is probably to use the `json` module - included in the standard library - to load the string and then access the `id` of each element. ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) for key, value in my_json_dict.items(): print("Id is {} for item {}".format(value["id"], key)) ``` However, if you just want the keys of your `JSON` object, all you need is ``` import json my_json_string = "..." my_json_dict = json.loads(my_json_string) print(["Got item with id '{}'".format(key) for key in my_json_dict.keys()]) ```
I found this question while trying to solve a problem of my own. In the meantime, I think you have found a solution, but if anyone is passing by, check out the jsonlines library, which is very useful and could solve this kind of problem: <https://jsonlines.readthedocs.io/en/latest/>
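For anyone curious, a tiny sketch of what jsonlines usage looks like — note this assumes the data has been stored as JSON Lines (one object per line) in a hypothetical `streams.jsonl` file, which is not the exact format posted in the question:

```python
import jsonlines

# each line of streams.jsonl is assumed to hold one stream object
with jsonlines.open('streams.jsonl') as reader:
    for obj in reader:
        print(obj.get('id', 'no id found'))
```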
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
You don't have a base/root element - just a collection of keys and values. You can iterate through them with something like this: ``` import json import pprint values = json.loads(YOUR_JSON_STRING) for v in values: print v, pprint.pprint(values[v]) ``` That will print out the following two elements (each with its own key) in the resulting JSON object: ``` 192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1{u'@class': u'com.barco.compose.media.transcode.internal.h264.H264Gateway', u'id': u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1', u'owner': {u'@class': u'com.barco.compose.media.transcode.Acma', u'destination': [u'', u''], u'framerateDivider': [1, 1], u'ipvsDescription': u'NMSDesc', u'ipvsHLS': True, u'ipvsProfile': u'high', u'ipvsSDP': [u''], u'ipvsTagName': [u''], u'ipvsTagValue': [u''], u'ipvsTitle': u'DiORStream', u'output': [u'high', u'low'], u'profile': [u'baseline'], u'resolution': [1, 1], u'source': u'dvi1-1-mna-1890322558', u'stream': u'udp://239.1.1.7:5004', u'type': u'video'}, u'type': u'video', u'uri': {u'high': u'rtp://239.1.1.2:5006', u'low': u'rtp://239.1.1.1:5006'}} None 192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2{u'@class': u'com.barco.compose.media.transcode.internal.h264.H264Gateway', u'id': u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2', u'owner': {u'@class': u'com.barco.compose.media.transcode.Acma', u'destination': [u'', u''], u'framerateDivider': [1, 1], u'ipvsDescription': u'nikhilDesc', u'ipvsHLS': True, u'ipvsProfile': u'high', u'ipvsSDP': [u''], u'ipvsTagName': [u''], u'ipvsTagValue': [u''], u'ipvsTitle': u'nikhil', u'output': [u'high', u'low'], u'profile': [u'baseline'], u'resolution': [1, 1], u'source': u'dvi1-1-mna-1890322558', u'stream': u'udp://239.1.1.7:5004', u'type': u'video'}, u'type': u'video', u'uri': {u'high': u'rtp://239.1.1.4:5006', u'low': u'rtp://239.1.1.3:5006'}} ```
You can get `id` like this (save your JSON string as a **single-line** file named `test.json`): ``` import simplejson as json with open("test.json") as fp: line = fp.readline() data = json.loads(line) for k,v in data.items(): print v.get("id","no id found") # no id found is the default value in case there is no id defined ```
28,168,732
Here is my JSON string which I need to parse. ``` { "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1", "uri" : { "high" : "rtp://239.1.1.2:5006", "low" : "rtp://239.1.1.1:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "DiORStream", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "NMSDesc", "ipvsHLS" : true }, "type" : "video" }, "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : { "@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway", "id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2", "uri" : { "high" : "rtp://239.1.1.4:5006", "low" : "rtp://239.1.1.3:5006" }, "owner" : { "@class" : "com.barco.compose.media.transcode.Acma", "source" : "dvi1-1-mna-1890322558", "stream" : "udp://239.1.1.7:5004", "type" : "video", "resolution" : [ 1, 1 ], "framerateDivider" : [ 1, 1 ], "profile" : [ "baseline" ], "output" : [ "high", "low" ], "destination" : [ "", "" ], "ipvsProfile" : "high", "ipvsSDP" : [ "" ], "ipvsTitle" : "nikhil", "ipvsTagName" : [ "" ], "ipvsTagValue" : [ "" ], "ipvsDescription" : "nikhilDesc", "ipvsHLS" : true }, "type" : "video" } } ``` Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
2015/01/27
[ "https://Stackoverflow.com/questions/28168732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4498089/" ]
I found this question while trying to solve a problem of my own. In the meantime, I think you have found a solution, but if anyone is passing by, check out the jsonlines library, which is very useful and could solve this kind of problem: <https://jsonlines.readthedocs.io/en/latest/>
You can get `id` like this (save your JSON string as a **single-line** file named `test.json`): ``` import simplejson as json with open("test.json") as fp: line = fp.readline() data = json.loads(line) for k,v in data.items(): print v.get("id","no id found") # no id found is the default value in case there is no id defined ```
12,730,524
On this sample code i want to use the variables on the `function db_properties` at the function `connect_and_query`. To accomplish that I choose the `return`. So, using that strategy the code works perfectly. But, in this example the db.properties files only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a most elegant/cleaner/correct way to do that? ``` import psycopg2 import sys from ConfigParser import SafeConfigParser class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) dbHost = parser.get('database','db_host') dbName = parser.get('database','db_name') dbUser = parser.get('database','db_login') dbPass = parser.get('database','db_pass') return dbHost,dbName,dbUser,dbPass def connect_and_query(self): try: con = None dbHost=self.db_properties()[0] dbName=self.db_properties()[1] dbUser=self.db_properties()[2] dbPass=self.db_properties()[3] con = None qry=("select star from galaxy") con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser, password=dbPass) cur = con.cursor() cur.execute(qry) data = cur.fetchall() for result in data: qryResult = result[0] print "the test result is : " +qryResult except psycopg2.DatabaseError, e: print 'Error %s' % e sys.exit(1) finally: if con: con.close() operation=Main() operation.connect_and_query() ``` Im using python 2.7 Regards
2012/10/04
[ "https://Stackoverflow.com/questions/12730524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
If there are a lot of variables, or if you want to easily change the variables being read, return a dictionary. ``` def db_properties(self, *variables): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) return { variable: parser.get('database', variable) for variable in variables } def connect_and_query(self): try: con = None config = self.db_properties( 'db_host', 'db_name', 'db_login', 'db_pass', ) #or you can use: # variables = ['db_host','db_name','db_login','db_pass','db_whatever','db_whatever2',...] # config = self.db_properties(*variables) #now you can use any variable like: config['db_host'] # ---rest of the function here--- ``` Edit: I refactored the code so you can specify the variables you want to load in the calling function itself.
You certainly don't want to call `db_properties()` 4 times; just call it once and store the result. It's also almost certainly better to return a dict rather than a tuple, since as it is the caller needs to know what the method returns in order, rather than just having access to the values by their names. As the number of values getting passed around grows, this gets even harder to maintain. e.g.: ``` class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) configDict= dict() configDict['dbHost'] = parser.get('database','db_host') configDict['dbName'] = parser.get('database','db_name') configDict['dbUser'] = parser.get('database','db_login') configDict['dbPass'] = parser.get('database','db_pass') return configDict def connect_and_query(self): try: con = None conf = self.db_properties() con = None qry=("select star from galaxy") con = psycopg2.connect(host=conf['dbHost'],database=conf['dbName'], user=conf['dbUser'], password=conf['dbPass']) ```
12,730,524
On this sample code i want to use the variables on the `function db_properties` at the function `connect_and_query`. To accomplish that I choose the `return`. So, using that strategy the code works perfectly. But, in this example the db.properties files only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a most elegant/cleaner/correct way to do that? ``` import psycopg2 import sys from ConfigParser import SafeConfigParser class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) dbHost = parser.get('database','db_host') dbName = parser.get('database','db_name') dbUser = parser.get('database','db_login') dbPass = parser.get('database','db_pass') return dbHost,dbName,dbUser,dbPass def connect_and_query(self): try: con = None dbHost=self.db_properties()[0] dbName=self.db_properties()[1] dbUser=self.db_properties()[2] dbPass=self.db_properties()[3] con = None qry=("select star from galaxy") con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser, password=dbPass) cur = con.cursor() cur.execute(qry) data = cur.fetchall() for result in data: qryResult = result[0] print "the test result is : " +qryResult except psycopg2.DatabaseError, e: print 'Error %s' % e sys.exit(1) finally: if con: con.close() operation=Main() operation.connect_and_query() ``` Im using python 2.7 Regards
2012/10/04
[ "https://Stackoverflow.com/questions/12730524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
You certainly don't want to call `db_properties()` 4 times; just call it once and store the result. It's also almost certainly better to return a dict rather than a tuple, since as it is the caller needs to know what the method returns in order, rather than just having access to the values by their names. As the number of values getting passed around grows, this gets even harder to maintain. e.g.: ``` class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) configDict= dict() configDict['dbHost'] = parser.get('database','db_host') configDict['dbName'] = parser.get('database','db_name') configDict['dbUser'] = parser.get('database','db_login') configDict['dbPass'] = parser.get('database','db_pass') return configDict def connect_and_query(self): try: con = None conf = self.db_properties() con = None qry=("select star from galaxy") con = psycopg2.connect(host=conf['dbHost'],database=conf['dbName'], user=conf['dbUser'], password=conf['dbPass']) ```
NB: untested. You could change your `db_properties` to return a dict: ``` from functools import partial # call as db_properties('db_host', 'db_name'...) def db_properties(self, *args): parser = SafeConfigParser() parser.read('config file') getter = partial(parser.get, 'database') return dict(zip(args, map(getter, args))) ``` But otherwise it's probably best to keep the parser as an attribute of the instance, and provide a convenience method: ``` class whatever(object): def __init__(self, *args, **kwargs): # blah blah blah cfgFile = r'c:\test\db.properties' self._parser = SafeConfigParser() self._parser.read(cfgFile) def db_config(self, key): return self._parser.get('database', key) ``` Then use `con = psycopg2.connect(host=self.db_config('db_host')...)`
12,730,524
On this sample code i want to use the variables on the `function db_properties` at the function `connect_and_query`. To accomplish that I choose the `return`. So, using that strategy the code works perfectly. But, in this example the db.properties files only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a most elegant/cleaner/correct way to do that? ``` import psycopg2 import sys from ConfigParser import SafeConfigParser class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) dbHost = parser.get('database','db_host') dbName = parser.get('database','db_name') dbUser = parser.get('database','db_login') dbPass = parser.get('database','db_pass') return dbHost,dbName,dbUser,dbPass def connect_and_query(self): try: con = None dbHost=self.db_properties()[0] dbName=self.db_properties()[1] dbUser=self.db_properties()[2] dbPass=self.db_properties()[3] con = None qry=("select star from galaxy") con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser, password=dbPass) cur = con.cursor() cur.execute(qry) data = cur.fetchall() for result in data: qryResult = result[0] print "the test result is : " +qryResult except psycopg2.DatabaseError, e: print 'Error %s' % e sys.exit(1) finally: if con: con.close() operation=Main() operation.connect_and_query() ``` Im using python 2.7 Regards
2012/10/04
[ "https://Stackoverflow.com/questions/12730524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
You certainly don't want to call `db_properties()` 4 times; just call it once and store the result. It's also almost certainly better to return a dict rather than a tuple, since as it is the caller needs to know what the method returns in order, rather than just having access to the values by their names. As the number of values getting passed around grows, this gets even harder to maintain. e.g.: ``` class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) configDict= dict() configDict['dbHost'] = parser.get('database','db_host') configDict['dbName'] = parser.get('database','db_name') configDict['dbUser'] = parser.get('database','db_login') configDict['dbPass'] = parser.get('database','db_pass') return configDict def connect_and_query(self): try: con = None conf = self.db_properties() con = None qry=("select star from galaxy") con = psycopg2.connect(host=conf['dbHost'],database=conf['dbName'], user=conf['dbUser'], password=conf['dbPass']) ```
I'd suggest returning a namedtuple: ``` from collections import namedtuple # in db_properties() return namedtuple("dbconfig", "host name user password")( parser.get('database','db_host'), parser.get('database','db_name'), parser.get('database','db_login'), parser.get('database','db_pass'), ) ``` Now you have an object that you can access either by index or by attribute. ``` config = self.db_properties() print config[0] # db_host print config.host # same ```
12,730,524
On this sample code i want to use the variables on the `function db_properties` at the function `connect_and_query`. To accomplish that I choose the `return`. So, using that strategy the code works perfectly. But, in this example the db.properties files only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a most elegant/cleaner/correct way to do that? ``` import psycopg2 import sys from ConfigParser import SafeConfigParser class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) dbHost = parser.get('database','db_host') dbName = parser.get('database','db_name') dbUser = parser.get('database','db_login') dbPass = parser.get('database','db_pass') return dbHost,dbName,dbUser,dbPass def connect_and_query(self): try: con = None dbHost=self.db_properties()[0] dbName=self.db_properties()[1] dbUser=self.db_properties()[2] dbPass=self.db_properties()[3] con = None qry=("select star from galaxy") con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser, password=dbPass) cur = con.cursor() cur.execute(qry) data = cur.fetchall() for result in data: qryResult = result[0] print "the test result is : " +qryResult except psycopg2.DatabaseError, e: print 'Error %s' % e sys.exit(1) finally: if con: con.close() operation=Main() operation.connect_and_query() ``` Im using python 2.7 Regards
2012/10/04
[ "https://Stackoverflow.com/questions/12730524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
If there are a lot of variables, or if you want to easily change the variables being read, return a dictionary. ``` def db_properties(self, *variables): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) return { variable: parser.get('database', variable) for variable in variables } def connect_and_query(self): try: con = None config = self.db_properties( 'db_host', 'db_name', 'db_login', 'db_pass', ) #or you can use: # variables = ['db_host','db_name','db_login','db_pass','db_whatever','db_whatever2',...] # config = self.db_properties(*variables) #now you can use any variable like: config['db_host'] # ---rest of the function here--- ``` Edit: I refactored the code so you can specify the variables you want to load in the calling function itself.
NB: untested. You could change your `db_properties` to return a dict: ``` from functools import partial # call as db_properties('db_host', 'db_name'...) def db_properties(self, *args): parser = SafeConfigParser() parser.read('config file') getter = partial(parser.get, 'database') return dict(zip(args, map(getter, args))) ``` But otherwise it's probably best to keep the parser as an attribute of the instance, and provide a convenience method: ``` class whatever(object): def __init__(self, *args, **kwargs): # blah blah blah cfgFile = r'c:\test\db.properties' self._parser = SafeConfigParser() self._parser.read(cfgFile) def db_config(self, key): return self._parser.get('database', key) ``` Then use `con = psycopg2.connect(host=self.db_config('db_host')...)`
12,730,524
On this sample code i want to use the variables on the `function db_properties` at the function `connect_and_query`. To accomplish that I choose the `return`. So, using that strategy the code works perfectly. But, in this example the db.properties files only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a most elegant/cleaner/correct way to do that? ``` import psycopg2 import sys from ConfigParser import SafeConfigParser class Main: def db_properties(self): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) dbHost = parser.get('database','db_host') dbName = parser.get('database','db_name') dbUser = parser.get('database','db_login') dbPass = parser.get('database','db_pass') return dbHost,dbName,dbUser,dbPass def connect_and_query(self): try: con = None dbHost=self.db_properties()[0] dbName=self.db_properties()[1] dbUser=self.db_properties()[2] dbPass=self.db_properties()[3] con = None qry=("select star from galaxy") con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser, password=dbPass) cur = con.cursor() cur.execute(qry) data = cur.fetchall() for result in data: qryResult = result[0] print "the test result is : " +qryResult except psycopg2.DatabaseError, e: print 'Error %s' % e sys.exit(1) finally: if con: con.close() operation=Main() operation.connect_and_query() ``` Im using python 2.7 Regards
2012/10/04
[ "https://Stackoverflow.com/questions/12730524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
If there are a lot of variables, or if you want to easily change the variables being read, return a dictionary. ``` def db_properties(self, *variables): cfgFile='c:\test\db.properties' parser = SafeConfigParser() parser.read(cfgFile) return { variable: parser.get('database', variable) for variable in variables } def connect_and_query(self): try: con = None config = self.db_properties( 'db_host', 'db_name', 'db_login', 'db_pass', ) #or you can use: # variables = ['db_host','db_name','db_login','db_pass','db_whatever','db_whatever2',...] # config = self.db_properties(*variables) #now you can use any variable like: config['db_host'] # ---rest of the function here--- ``` Edit: I refactored the code so you can specify the variables you want to load in the calling function itself.
I'd suggest returning a namedtuple: ``` from collections import namedtuple # in db_properties() return namedtuple("dbconfig", "host name user password")( parser.get('database','db_host'), parser.get('database','db_name'), parser.get('database','db_login'), parser.get('database','db_pass'), ) ``` Now you have an object that you can access either by index or by attribute. ``` config = self.db_properties() print config[0] # db_host print config.host # same ```
405,282
I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be. So, my question: How can I automatically give a name to an object? I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time...
2009/01/01
[ "https://Stackoverflow.com/questions/405282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Hm, well, you normally just stuff all those instances in a list and then iterate over that list if you want to do something with them. If you want to automatically keep track of each instance created, you can also make the adding to the list implicit in the class's constructor, or create a factory method that keeps track of the created instances.
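For instance, a rough sketch of the "implicit list in the constructor" idea — the class and attribute names here are made up for illustration:

```python
class Animal(object):
    herd = []  # shared registry of every instance created so far

    def __init__(self, species):
        self.species = species
        Animal.herd.append(self)  # register automatically on creation

# create animals without inventing a variable name for each one
for _ in range(5):
    Animal('gazelle')

print(len(Animal.herd))        # 5
print(Animal.herd[0].species)  # gazelle
```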
You could make an 'animal' class with a name attribute. Or you could programmatically define the class like so: ``` from new import classobj my_class=classobj('Foo',(object,),{}) ``` Found this: <http://www.gamedev.net/community/forums/topic.asp?topic_id=445037>
405,282
I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be. So, my question: How can I automatically give a name to an object? I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time...
2009/01/01
[ "https://Stackoverflow.com/questions/405282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Hm, well, you normally just stuff all those instances in a list and then iterate over that list if you want to do something with them. If you want to automatically keep track of each instance created, you can also make the adding to the list implicit in the class's constructor, or create a factory method that keeps track of the created instances.
If you need a way to refer to them individually, it's relatively common to have the class give each instance a unique identifier on initialization: ``` >>> import itertools >>> class Animal(object): ... id_iter = itertools.count(1) ... def __init__(self): ... self.id = self.id_iter.next() ... >>> print(Animal().id) 1 >>> print(Animal().id) 2 >>> print(Animal().id) 3 ```
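As an aside: on Python 3 the iterator has no `.next()` method, so the same pattern would use the `next()` builtin, roughly like this:

```python
import itertools

class Animal(object):
    id_iter = itertools.count(1)

    def __init__(self):
        self.id = next(self.id_iter)  # works on both Python 2.6+ and 3

print(Animal().id)  # 1
print(Animal().id)  # 2
```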
405,282
I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be. So, my question: How can I automatically give a name to an object? I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time...
2009/01/01
[ "https://Stackoverflow.com/questions/405282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Hm, well, you normally just stuff all those instances in a list and then iterate over that list if you want to do something with them. If you want to automatically keep track of each instance created, you can also make the adding to the list implicit in the class's constructor, or create a factory method that keeps track of the created instances.
Any instance could have a name attribute. So it sounds like you may be asking how to dynamically name a *class*, not an *instance*. If that's the case, you can explicitly set the `__name__` attribute of a class, or better yet just create the class with the builtin [type](http://docs.python.org/library/functions.html#type) (with 3 args). ``` class Ungulate(Mammal): hoofed = True ``` would be equivalent to ``` cls = type('Ungulate', (Mammal,), {'hoofed': True}) ```
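A quick usage sketch of the dynamically created class — `Mammal` is just a stand-in base class here so the snippet is self-contained:

```python
class Mammal(object):
    pass  # placeholder base class for the example

cls = type('Ungulate', (Mammal,), {'hoofed': True})

beast = cls()
print(cls.__name__)               # Ungulate
print(beast.hoofed)               # True
print(isinstance(beast, Mammal))  # True
```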
405,282
I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be. So, my question: How can I automatically give a name to an object? I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time...
2009/01/01
[ "https://Stackoverflow.com/questions/405282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Like this? ``` class Animal( object ): pass # lots of details omitted herd= [ Animal() for i in range(10000) ] ``` At this point, herd will have 10,000 distinct instances of the `Animal` class.
You could make an 'animal' class with a name attribute. Or you could programmatically define the class like so: ``` from new import classobj my_class=classobj('Foo',(object,),{}) ``` Found this: <http://www.gamedev.net/community/forums/topic.asp?topic_id=445037>