Dataset columns (type, min/max value or length):

| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string | 54 chars | 37.8k chars |
| date | string | 10 chars | 10 chars |
| metadata | list | 3 items | 3 items |
| response_j | string | 29 chars | 22k chars |
| response_k | string | 26 chars | 13.4k chars |
| \_\_index\_level\_0\_\_ | int64 | 0 | 17.8k |
24,557,707
I have an Echoprint local webserver (uses Tokyo Tyrant, Python, Solr) set up on a Linux virtual machine. I can access it through the browser or curl inside the virtual machine at http://localhost:8080, and from the host machine using the virtual machine's IP, also on port 8080. However, when I try to access it from my Android device on the same Wi-Fi, I get a connection refused error.
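A hedged guess, since the server configuration isn't shown: this symptom often means the service is bound to `127.0.0.1` (reachable only locally) rather than `0.0.0.0`, which listens on every interface so other devices on the LAN can connect. A minimal Python sketch of the difference (the handler and port choice are illustrative, not part of Echoprint):

```python
# A server bound to 127.0.0.1 accepts connections only from the same
# machine; binding to 0.0.0.0 listens on all interfaces. Port 0 asks
# the OS for any free port.
import http.server
import threading
import urllib.request

def serve(bind_addr):
    httpd = http.server.HTTPServer(
        (bind_addr, 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

server = serve("0.0.0.0")          # all interfaces, not just loopback
port = server.server_address[1]
status = urllib.request.urlopen("http://127.0.0.1:%d/" % port).getcode()
server.shutdown()
```

If the server already binds to all interfaces, the next things to check are the VM's network mode (NAT vs. bridged) and any firewall or Wi-Fi client isolation between devices.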
2014/07/03
[ "https://Stackoverflow.com/questions/24557707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2996499/" ]
Took a while to figure it out, but here's the working code.

```
using System;
using System.Runtime.InteropServices;
using System.Text;
using System.IO;
using System.Threading;

namespace Foreground {
    class GetForegroundWindowTest {
        // Foreground DLL imports
        [DllImport("user32.dll", CharSet=CharSet.Auto, ExactSpelling=true)]
        public static extern IntPtr GetForegroundWindow();

        [DllImport("user32.dll", CharSet=CharSet.Unicode, SetLastError=true)]
        public static extern int GetWindowText(IntPtr hWnd, StringBuilder lpString, int nMaxCount);

        [DllImport("kernel32.dll")]
        public static extern bool FreeConsole();

        // Console-hiding DLL imports
        [DllImport("kernel32.dll")]
        static extern IntPtr GetConsoleWindow();

        [DllImport("user32.dll")]
        static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

        const int SW_HIDE = 0;

        public static void Main(string[] args) {
            while (true) {
                IntPtr fg = GetForegroundWindow(); // use fg for some purpose
                var bufferSize = 1000;
                var sb = new StringBuilder(bufferSize);
                GetWindowText(fg, sb, bufferSize);
                using (StreamWriter sw = File.AppendText("C:\\Office Viewer\\OV_Log.txt")) {
                    sw.WriteLine(DateTime.Now.ToString("yyyy-MM-dd_HH:mm:ss,") + sb.ToString());
                }
                var handle = GetConsoleWindow();
                Console.WriteLine(handle);
                ShowWindow(handle, SW_HIDE);
                Thread.Sleep(5000);
            }
        }
    }
}
```
You can also use

```
private static extern int ShowWindow(int hwnd, int nCmdShow);
```

to hide a window. This method takes the integer handle of the window (instead of a pointer). Using **[Spy++](https://msdn.microsoft.com/en-us/library/dd460729.aspx)** (in Visual Studio tools) you can get the **Class Name** and **Window Name** of the window you want to hide. Then you can do as follows:

```
[DllImport("user32.dll")]
public static extern int FindWindow(string lpClassName, string lpWindowName);

[DllImport("user32.dll")]
private static extern int ShowWindow(int hwnd, int nCmdShow);

const int SW_HIDE = 0;

public void hideScannerDialog()
{
    // Retrieve the handle of the window
    // (the class name & window name were found using Spy++)
    int iHandle = FindWindow("ClassName", "WindowName");
    if (iHandle > 0)
    {
        // Hide the window using the API
        ShowWindow(iHandle, SW_HIDE);
    }
}
```
8,216
66,102,225
I'm using the `jwilder/nginx-proxy` and `jrcs/letsencrypt-nginx-proxy-companion` images to create the SSL certificates automatically. When the server is updated and I run `docker-compose down` and `docker-compose up -d`, the following error appears:

```
letsencrypt_1 | [Mon Feb 8 11:48:47 UTC 2021] Please check log file for more details: /dev/null
letsencrypt_1 | Creating/renewal example.com certificates... (example.com www.example.com)
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Using CA: https://acme-v02.api.letsencrypt.org/directory
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Creating domain key
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] The domain key is here: /etc/acme.sh/email@gmail.com/example.com/example.com.key
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Multi domain='DNS:example.com,DNS:www.example.com'
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] Getting domain auth token for each domain
letsencrypt_1 | [Mon Feb 8 11:48:49 UTC 2021] Create new order error. Le_OrderFinalize not found. {
letsencrypt_1 |   "type": "urn:ietf:params:acme:error:rateLimited",
letsencrypt_1 |   "detail": "Error creating new order :: too many certificates already issued for exact set of domains: example.com,www.example.com: see https://letsencrypt.org/docs/rate-limits/",
letsencrypt_1 |   "status": 429
```

I understand that Let's Encrypt only allows a limited number of certificates to be created per week, and every time I do a `docker-compose down` and `docker-compose up -d` I use one of those issuances to generate a certificate. Now I have reached the limit and can't use the service.

1. **How do I avoid generating certificates when it is not necessary?**
2. **Is there a way to reset the counter for this week to keep using the site?**

My `docker-compose.yml`:

```
version: "3"

services:
  db:
    image: postgres:12
    restart: unless-stopped
    env_file: ./.env
    volumes:
      - postgres_data:/var/lib/postgresql/data

  web:
    build:
      context: .
    restart: unless-stopped
    env_file: ./.env
    command: python manage.py runserver 0.0.0.0:80
    volumes:
      - static:/code/static/
      - .:/code
    #ports:
    #  - "8000:8000"
    depends_on:
      - db

  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

  nginx:
    image: nginx:1.19
    restart: always
    expose:
      - "80"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/code/static
      - ./../ecoplatonica:/usr/share/nginx/html:ro
    env_file: ./.env
    depends_on:
      - web
      - nginx-proxy
      - letsencrypt

volumes:
  .:
  postgres_data:
  static:
  certs:
  html:
  vhostd:
```
2021/02/08
[ "https://Stackoverflow.com/questions/66102225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10279746/" ]
I had this problem and finally got it figured out. You need to add a volume to the `nginx-proxy:` and `letsencrypt:` services' `volumes:` sections, something like this:

```
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
```

and then at the end of the `docker-compose.yml` file I added:

```
volumes:
  .:
  postgres_data:
  static:
  certs:
  html:
  vhostd:
  acme:
```

Now I have persistent certificates.
You need to mount the `acme:/etc/acme.sh` volume for `nginx-proxy`, because otherwise that folder is recreated every time you do `docker-compose up`/`down`. Also add `acme:` to the top-level `volumes:` section. This entry from your log file shows why:

```
letsencrypt_1 | [Mon Feb 8 11:48:48 UTC 2021] The domain key is here: /etc/acme.sh/email@gmail.com/example.com/example.com.key
```

Also, take a look at this [doc](https://github.com/nginx-proxy/acme-companion/blob/main/docs/Container-configuration.md).
8,217
26,061,610
I am using Python 2.7 and pip 1.5.6. I want extra libraries from URLs, such as git repos, to be installed when `setup.py` runs. I was putting the extras in the `install_requires` parameter in `setup.py`; that is, my library requires these extra libraries and they must also be installed:

```
...
install_requires=[
    "Django",
    ....
],
...
```

But URLs like git repos are not valid strings in `install_requires` in `setup.py`. Assume that I want to install a library from GitHub. I have searched around this issue and found that I can put such libraries in `dependency_links` in `setup.py`, but that still doesn't work. Here is my dependency-links definition:

```
dependency_links=[
    "https://github.com/.../tarball/master/#egg=1.0.0",
    "https://github.com/.../tarball/master#egg=0.9.3",
],
```

The links are valid; I can download them from an internet browser with these URLs. Still, these extra libraries are not installed by my setup. I also tried the `--process-dependency-links` parameter to force pip, but the result is the same. I get no error when pipping, but after installation I see no library from `dependency_links` in the `pip freeze` output. How can I make them be downloaded with my `setup.py` installation?
Edited:
=======

Here is my complete `setup.py`:

```
from setuptools import setup

try:
    long_description = open('README.md').read()
except IOError:
    long_description = ''

setup(
    name='esef-sso',
    version='1.0.0.0',
    description='',
    url='https://github.com/egemsoft/esef-sso.git',
    keywords=["django", "egemsoft", "sso", "esefsso"],
    install_requires=[
        "Django",
        "webservices",
        "requests",
        "esef-auth==1.0.0.0",
        "django-simple-sso==0.9.3"
    ],
    dependency_links=[
        "https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0",
        "https://github.com/egemsoft/django-simple-sso/tarball/master#egg=0.9.3",
    ],
    packages=[
        'esef_sso_client',
        'esef_sso_client.models',
        'esef_sso_server',
        'esef_sso_server.models',
    ],
    include_package_data=True,
    zip_safe=False,
    platforms=['any'],
)
```

Edited 2:
=========

Here is the pip log:

```
Downloading/unpacking esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
  Getting page https://pypi.python.org/simple/esef-auth/
  Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
  Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
  Getting page https://pypi.python.org/simple/
  URLs to search for versions for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0):
  * https://pypi.python.org/simple/esef-auth/1.0.0.0
  * https://pypi.python.org/simple/esef-auth/
  Getting page https://pypi.python.org/simple/esef-auth/1.0.0.0
  Could not fetch URL https://pypi.python.org/simple/esef-auth/1.0.0.0: 404 Client Error: Not Found
  Will skip URL https://pypi.python.org/simple/esef-auth/1.0.0.0 when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
  Getting page https://pypi.python.org/simple/esef-auth/
  Could not fetch URL https://pypi.python.org/simple/esef-auth/: 404 Client Error: Not Found
  Will skip URL https://pypi.python.org/simple/esef-auth/ when looking for download links for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
  Could not find any downloads that satisfy the requirement esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Cleaning up...
  Removing temporary dir /Users/ahmetdal/.virtualenvs/esef-sso-example/build...
No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
Exception information:
Traceback (most recent call last):
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/req.py", line 1177, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/Users/ahmetdal/.virtualenvs/esef-sso-example/lib/python2.7/site-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for esef-auth==1.0.0.0 (from esef-sso==1.0.0.0)
```

It seems it does not use the sources in `dependency_links`.
2014/09/26
[ "https://Stackoverflow.com/questions/26061610", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1029816/" ]
You need to make sure you include the dependency in your `install_requires` too. Here's an example `setup.py`:

```
#!/usr/bin/env python
from setuptools import setup

setup(
    name='foo',
    version='0.0.1',
    install_requires=[
        'balog==0.0.7'
    ],
    dependency_links=[
        'https://github.com/balanced/balog/tarball/master#egg=balog-0.0.7'
    ]
)
```

Here's the issue with your example `setup.py`: you're missing the egg name in the dependency links you set up. You have

`https://github.com/egemsoft/esef-auth/tarball/master/#egg=1.0.0.0`

but you need

`https://github.com/egemsoft/esef-auth/tarball/master/#egg=esef-auth-1.0.0.0`
Pip removed support for dependency\_links a while back. The [latest version of pip that supports dependency\_links is 1.3.1](https://pip.pypa.io/en/latest/news.html); to install it:

```
pip install pip==1.3.1
```

Your dependency links should work at that point. Please note that dependency\_links were always a last resort for pip, i.e. if a package with the same name exists on PyPI, it will be chosen over yours.

Note: <https://github.com/pypa/pip/pull/1955> seems to start allowing dependency\_links again; pip kept them, but you might need some command-line switches to use them with a newer version of pip.

**EDIT**: As of pip 7, they rethought dependency links and have re-enabled them. Even though they haven't removed the deprecation notice, from the discussions they seem to be here to stay. With pip>=7, here is how you can install things:

```
pip install -e . --process-dependency-links --allow-all-external
```

Or add the following to a pip.conf, e.g. `/etc/pip.conf`:

```
[install]
process-dependency-links = yes
allow-all-external = yes
trusted-host = bitbucket.org github.com
```

**EDIT**

A trick I have learnt is to bump the version number up to something really high to make sure that pip doesn't prefer the non-dependency-link version (if that is something you want). From the example above, make the dependency link look like:

```
"https://github.com/egemsoft/django-simple-sso/tarball/master#egg=999.0.0",
```

Also make sure the version either looks like the example or is a date version; any other versioning will make pip think it's a dev version and it won't install it.
8,218
69,450,482
I was trying to install matplotlib but I'm getting this long error. I don't really have any idea what is wrong. ``` ERROR: Command errored out with exit status 1: command: 'C:\Python310\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"'; __file__='"'"'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\pip-install-5dbg4g23\\matplotlib_2ff15b65402b457db67b768d26133471\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Bilguun\AppData\Local\Temp\pip-pip-egg-info-hmkaun62' cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\ Complete output (282 lines): WARNING: The wheel package is not available. ERROR: Command errored out with exit status 1: command: 'C:\Python310\python.exe' 'C:\Python310\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\Bilguun\AppData\Local\Temp\tmp2vupi6yj' cwd: C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051 Complete output (233 lines): setup.py:63: RuntimeWarning: NumPy 1.21.2 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\tools\cythonize.py:69: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. 
Use setuptools or check PEP 632 for potential alternatives from distutils.version import LooseVersion Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Found executable C:\MinGW\bin\gfortran.exe Using built-in specs. 
COLLECT_GCC=C:\MinGW\bin\gfortran.exe COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe Target: mingw32 Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls Thread model: win32 gcc version 6.3.0 (MinGW.org GCC-6.3.0-1) NOT AVAILABLE accelerate_info: NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. 
if self._calc_info(blas): blas_info: libraries blas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:2026: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler customize IntelVisualFCompiler customize AbsoftFCompiler customize CompaqVisualFCompiler customize IntelItaniumVisualFCompiler customize Gnu95FCompiler Using built-in specs. 
COLLECT_GCC=C:\MinGW\bin\gfortran.exe COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe Target: mingw32 Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls Thread model: win32 gcc version 6.3.0 (MinGW.org GCC-6.3.0-1) NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] get_default_fcompiler: matching types: '['gnu', 'intelv, 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler customize IntelVisualFCompiler customize AbsoftFCompiler customize CompaqVisualFCompiler customize IntelItaniumVisualFCompiler customize Gnu95FCompiler Using built-in specs. 
COLLECT_GCC=C:\MinGW\bin\gfortran.exe COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe Target: mingw32 Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls Thread model: win32 gcc version 6.3.0 (MinGW.org GCC-6.3.0-1) NOT AVAILABLE flame_info: libraries flame not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Python310\lib libraries tatlas,tatlas not found in C:\Python310\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in C:\Python310\libs libraries tatlas,tatlas not found in C:\Python310\libs <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in C:\Python310\lib libraries satlas,satlas not found in C:\Python310\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in C:\Python310\libs libraries satlas,satlas not found in C:\Python310\libs <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Python310\lib libraries ptf77blas,ptcblas,atlas not found in C:\Python310\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas 
not found in C:\Python310\libs libraries ptf77blas,ptcblas,atlas not found in C:\Python310\libs <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\Python310\lib libraries f77blas,cblas,atlas not found in C:\Python310\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Python310\libs libraries f77blas,cblas,atlas not found in C:\Python310\libs <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in ['C:\\Python310\\lib', 'C:\\', 'C:\\Python310\\libs'] NOT AVAILABLE C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\Bilguun\AppData\Local\Temp\pip-wheel-pl_1iot6\numpy_c3c60903cf3b4f4d8adb6cfb88649051\numpy\distutils\system_info.py:1858: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] Warning: attempted relative import with no known parent package C:\Python310\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running bdist_wheel running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.10 creating build\src.win-amd64-3.10\numpy creating build\src.win-amd64-3.10\numpy\distutils building library "npymath" sources error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/ ---------------------------------------- ERROR: Failed building wheel for numpy ERROR: Failed to build one or more wheels Traceback (most recent call last): File "C:\Python310\lib\site-packages\setuptools\installer.py", line 75, in fetch_build_egg subprocess.check_call(cmd) File "C:\Python310\lib\subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Bilguun\AppData\Local\Temp\pip-install-5dbg4g23\matplotlib_2ff15b65402b457db67b768d26133471\setup.py", line 258, in <module> setup( # Finally, pass this all along to distutils to do the heavy lifting. 
File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 152, in setup _install_setup_requires(attrs) File "C:\Python310\lib\site-packages\setuptools\__init__.py", line 147, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "C:\Python310\lib\site-packages\setuptools\dist.py", line 806, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 766, in resolve dist = best[req.key] = env.best_match( File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1051, in best_match return self.obtain(req, installer) File "C:\Python310\lib\site-packages\pkg_resources\__init__.py", line 1063, in obtain return installer(requirement) File "C:\Python310\lib\site-packages\setuptools\dist.py", line 877, in fetch_build_egg return fetch_build_egg(self, req) File "C:\Python310\lib\site-packages\setuptools\installer.py", line 77, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['C:\\Python310\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\Bilguun\\AppData\\Local\\Temp\\tmpq3kp_gfg', '--quiet', 'numpy>=1.16']' returned non-zero exit status 1. Edit setup.cfg to change the build options; suppress output with --quiet. BUILDING MATPLOTLIB matplotlib: yes [3.4.1] python: yes [3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]] platform: yes [win32] tests: no [skipping due to configuration] macosx: no [Mac OS-X only] ```
2021/10/05
[ "https://Stackoverflow.com/questions/69450482", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17079850/" ]
> *"What causes the segmentation fault..."*

There are several places with potential for a segmentation fault. One that stands out is this:

```
char filename[4];
...
sprintf(filename, "%03i.jpg", 0);
```

In this example, `filename` has enough space to contain 3 characters plus the nul terminator. It needs to be declared with at least 8 to contain the result of `"%03i.jpg", 0` (which, given enough space, will populate `filename` with `000.jpg`). If you are not working on a small embedded microprocessor, there is no reason not to create a `path` variable with more than enough space. E.g.:

```
char filename[PATH_MAX]; // if PATH_MAX is not defined, use 260
```

Note that writing to areas of memory your process does not own invokes undefined behavior, which can come in the form of a segmentation fault or, worse, seem to work without a problem. For example, if your code happens to get past the point of writing a deformed value into the `filename` variable, and that variable is later used to open a file:

```
img[0] = fopen(filename, "w");
```

it is unknown what the result will be. Because your code does not check the result of this call, more potential for problems exists.

***Edit*** to address the size of the file...

```
int SIZE = sizeof(*raw);
```

does not provide the size of the file. It will return the size of a pointer, i.e. either 4 or 8 bytes depending on whether the application is built as 32- or 64-bit. Consider using something like [this approach](https://stackoverflow.com/a/8247/645128) to get the actual file size, resulting in a call such as:

```
unsigned long SIZE = fsize(argv[1]);
```
As ryker stated, there are several possible points of failure here. Another is `int SIZE = sizeof(raw);`, which sets `SIZE` to the size of a pointer (4/8 bytes).
8,223
23,985,903
I was wondering if there are any BDD-style 'describe-it' unit-testing frameworks for Python that are maintained and production ready. I have found [describe](https://pypi.python.org/pypi/describe/0.1.2), but it doesn't seem to be maintained and has no documentation. I've also found [sure](http://falcao.it/sure), which reached 1.0, but it seems to just add syntactic sugar for writing assertions.

What I'm really looking for is something similar to RSpec and Jasmine that enables me to set up test suites using the describe-it syntax, which allows testing multiple cases of a function. This is in contrast to a classical assertion structure that tests each function once and uses multiple assertions to cover multiple cases, which breaks the isolation of a unit test. If there's a way to achieve something similar with assertion-style testing, I'd appreciate any advice on how to do it.

Below are simple examples of both styles:

**foo.py**

```
class Foo():
    def bar(self, x):
        return x + 1
```

**BDD-Style/Describe-It**

**test\_foo.py**

```
describe Foo:
    describe self.bar:
        before_each:
            f = Foo()

        it 'returns 1 more than its argument value':
            expect f.bar(3) == 4

        it 'raises an error if no argument is passed in':
            expect f.bar() raiseError
```

**Unittest/assertion-style**

**test\_foo.py**

```
import unittest

class TestFoo(unittest.TestCase):
    def test_bar(self):
        f = Foo()
        self.assertEqual(f.bar(3), 4)
        self.assertRaises(TypeError, f.bar)
```
2014/06/02
[ "https://Stackoverflow.com/questions/23985903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2483075/" ]
I've been looking for this myself and came across [mamba](https://github.com/nestorsalceda/mamba). In combination with the fluent assertion library [expects](https://github.com/jaimegildesagredo/expects) it allows you to write BDD-style unit tests in Python that look like this: ``` from mamba import describe, context, it from expects import * with describe("FrequentFlyer"): with context("when the frequent flyer account is first created"): with it("should initially have Bronze status"): frequentFlyer = FrequentFlyer() expect(frequentFlyer.status()).to(equal("BRONZE")) ``` Running these tests with documentation formatting gives you a Jasmine like test report: ``` > pipenv run mamba --format=documentation frequent_flyer_test.py FrequentFlyer when the frequent flyer account is first created ✓ it should initially have Bronze status 1 example ran in 0.0345 seconds ```
If you are expecting something exactly like rspec/capybara in python, then I am afraid you are in for a disappointment. The problem is ruby provides you much more freedom than python does (with much more support for open classes and extensive metaprogramming). I have to say there is a fundamental difference between philosophy of python and ruby. Still there are some good testing frameworks like Cucumber (<https://github.com/cucumber/cucumber/wiki/Python>) and lettuce (<http://lettuce.it/>) in case you are looking for pure python solution.
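If staying with assertion-style `unittest` is acceptable, Python 3.4+ `subTest` gets part of the way to the per-case isolation the describe-it style gives: each case inside the loop is reported as a separate failure instead of aborting the whole test method. A minimal sketch (the `Foo` class is the one from the question; the rest is illustrative):

```python
import io
import unittest

class Foo:
    def bar(self, x):
        return x + 1

class FooTest(unittest.TestCase):
    def test_bar_returns_one_more_than_its_argument(self):
        f = Foo()
        for value, expected in [(3, 4), (0, 1), (-1, 0)]:
            # each subTest is reported independently, so one failing
            # case does not hide the others
            with self.subTest(value=value):
                self.assertEqual(f.bar(value), expected)

    def test_bar_raises_if_no_argument_is_passed(self):
        with self.assertRaises(TypeError):
            Foo().bar()

# run the suite programmatically, discarding the report text
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FooTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

This is not RSpec-style nesting, but it does keep one logical behaviour per test method while still exercising multiple cases.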
8,225
49,425,827
I want to import some tables from a postgres database into Elastic search and also hold the tables in sync with the data in elastic search. I have looked at a course on udemy, and also talked with a colleague who has a lot of experience with this issue to see what the best way to do it is. I am surprised to hear from both of them, it seems like the best way to do it, is to write code in python, java or some other language that handles this import and sync it which brings me to my question. Is this actually the best way to handle this situation? It seems like there would be a library, plugin, or something that would handle the situation of importing data into elastic search and holding it in sync with an external database. What is the best way to handle this situation?
2018/03/22
[ "https://Stackoverflow.com/questions/49425827", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4415079/" ]
It depends on your use case. A common practice is to handle this on the application layer. Basically what you do is to replicate the actions of one db to the other. So for example if you save one entry in postgres you do the same in elasticsearch. If you do this however you'll have to have a queuing system in place. Either the queue is integrated on your application layer, e.g. if the save in elasticsearch fails then you can replay the operation. Moreover on your queuing system you'll implement a throttling mechanism in order to not overwhelm elasticsearch. Another approach would be to send events to another app (e.g. logstash etc), so the throttling and persistence will be handled by that system and not your application. Another approach would be this <https://www.elastic.co/blog/logstash-jdbc-input-plugin>. You use another system that "polls" your database and sends the changes to elasticsearch. In this case logstash is ideal since it's part of the ELK stack and it has a great integration. Check this too <https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html> Another approach is to use the [NOTIFY](https://www.postgresql.org/docs/9.0/static/sql-notify.html) mechanism of postgres to send events to some queue that will handle saving the changes in elasticsearch.
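The application-layer dual-write with a replay queue described above can be sketched in a few lines. Everything here is illustrative: `SyncWriter` and `FlakyIndex` are hypothetical stand-ins for a Postgres DAO and an Elasticsearch client, and a real system would persist the queue rather than keep it in memory:

```python
from collections import deque

class SyncWriter:
    """Write to the primary store first, then replicate to the search
    index; failed index writes are queued and replayed later."""

    def __init__(self, primary, index):
        self.primary = primary        # stand-in for the Postgres side
        self.index = index            # stand-in for the Elasticsearch client
        self.retry_queue = deque()    # replay buffer for failed index writes

    def save(self, doc_id, doc):
        self.primary[doc_id] = doc    # source of truth first
        try:
            self.index.index(doc_id, doc)
        except Exception:
            self.retry_queue.append((doc_id, doc))  # replay later

    def replay(self):
        """Drain the retry queue, e.g. from a periodic job."""
        pending, self.retry_queue = self.retry_queue, deque()
        for doc_id, doc in pending:
            try:
                self.index.index(doc_id, doc)
            except Exception:
                self.retry_queue.append((doc_id, doc))

class FlakyIndex:
    """Test double: the first write fails, later writes succeed."""
    def __init__(self):
        self.docs = {}
        self._fail_once = True

    def index(self, doc_id, doc):
        if self._fail_once:
            self._fail_once = False
            raise RuntimeError("index unavailable")
        self.docs[doc_id] = doc

primary = {}
writer = SyncWriter(primary, FlakyIndex())
writer.save(1, {"name": "a"})   # index write fails and is queued
writer.save(2, {"name": "b"})   # index write succeeds
writer.replay()                  # replays the queued write
```

Throttling would slot into `replay()` (e.g. batch size limits), which is the part logstash or Kafka Connect would otherwise handle for you.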
As with anything in life, best is subjective. Your colleague likes to write and maintain code to keep this in sync; there's nothing wrong with that. I would say the best way would be to use some data pipeline. There's a plethora of choices (really overwhelming); you can explore the various solutions which support Postgres and ElasticSearch. Here are options I'm familiar with. Note that these are tools/platforms for your solution, not the solution itself: YOU have to configure, customize and enhance them to fit your definition of **in sync** * [LogStash](https://www.elastic.co/products/logstash) * [Apache Nifi](https://nifi.apache.org/) * [Kafka Connect](https://www.confluent.io/product/connectors/)
8,226
3,254,096
My Python application is constructed as such that some functionality is available as plugins. The plugin architecture currently is very simple: I have a plugins folder/package which contains some python modules. I load the relevant plugin as follows: ``` plugin_name = blablabla try: module = __import__(plugin_name, fromlist='do_something') except ImportError: #some error handling ... ``` and then execute: ``` try: loans = module.do_something(id_t, pin_t) except xxx: # error handling ``` I compile the application to a Windows binary using py2exe. **This works fine, except for that the fact that all plugins are (and have to be) included in the binary.** This is not very practical, since for each new plugin, I have to recompile and release a new version of my application. It would be better if a new plugin (i.e. python file) could be copied to some application plugin folder, and that the Python code in the file code be interpreted on-the-fly by my application. What is the best approach to do so? (I've though of reading each line of the selected plugin file, and applying an [`exec` statement](http://docs.python.org/reference/simple_stmts.html#the-exec-statement) to it. But there might be better ways...)
2010/07/15
[ "https://Stackoverflow.com/questions/3254096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/50899/" ]
[PyInstaller](http://www.pyinstaller.org) lets you import external files as well. If you run it over your application, it will not package those files within the executable. You will then have to make sure that paths are correct (that is, your application can find the modules on the disk in the correct directory), and everything should work.
I suggest you use pkg\_resources entry\_points features (from setuptools/distribute) to implement plugin discovery and instantiation: first, it's a standard way to do that; second, it does not suffer the problem you mention AFAIK. All you have to do to extend the application is to package some plugins into an egg that declare some entry points (an egg may declare many plugins), and when you install that egg into your python distribution, all the plugins it declares can automatically be discovered by your application. You may also package your application and the "factory" plugins into the same egg, it's quite convenient.
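For the on-the-fly case in the question — a plain `.py` file dropped into a plugins folder next to the frozen executable — Python 3's `importlib` machinery is a cleaner alternative to `exec`. A sketch (the `plugins` folder and the `do_something` contract come from the question; the helper name is illustrative):

```python
import importlib.util
import os

def load_plugin(plugin_dir, plugin_name):
    """Load <plugin_dir>/<plugin_name>.py as a module object, without
    the file having to be bundled into the frozen executable."""
    path = os.path.join(plugin_dir, plugin_name + ".py")
    spec = importlib.util.spec_from_file_location(plugin_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # actually executes the plugin file
    return module

# usage, mirroring the question:
#   module = load_plugin("plugins", plugin_name)
#   loans = module.do_something(id_t, pin_t)
```

Unlike line-by-line `exec`, this gives you a real module object, so `module.do_something(...)` and the original error handling keep working unchanged.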
8,229
23,599,970
I would like to ask your help. I have started learning python, and there are a task that I can not figure out how to complete. So here it is. We have a input.txt file containing the next 4 rows: ``` f(x, 3*y) * 54 = 64 / (7 * x) + f(2*x, y-6) x + f(21*y, x - 32/y) + 4 = f(21 ,y) 86 - f(7 + x*10, y+ 232) = f(12*x-4, 2*y-61)*32 + f(2, x) 65 - 3* y = f(2*y/33 , x + 5) ``` The task is to change the "f" function and its 2 parameters into dividing. There can be any number of spaces between the two parameters. For example f(2, 5) is the same as f(2 , 5) and should be (2 / 5) with exactly one space before and after the divide mark after the running of the code. Also, if one of the parameters are a multiplification or a divide, the parameter must go into bracket. For example: f(3, 5\*7) should become (3 / (5\*7)). And there could be any number of function in one row. So the output should look like this: ``` (x / (3*y)) * 54 = 64 / (7 * x) + ((2*x) / (y-6)) x + ((21*y) / (x - 32/y)) + 4 = (21 / y) 86 - ((7 + x*10) / (y+ 232)) = ((12*x-4) / (2*y-61))*32 + (2 / x) 65 - 3* y = ((2*y/33) / (x + 5)) ``` I would be very happy if anyone could help me. Thank you in advance, David
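One way to implement the transformation described above with `re.sub` — assuming, as in the samples, that the arguments of `f` never contain commas or nested parentheses, and that a parameter is bracketed unless it is a single identifier or number:

```python
import re

def rewrite_line(line):
    def wrap(arg):
        arg = arg.strip()
        # bracket anything that is not a bare identifier or number
        return arg if re.fullmatch(r"\w+", arg) else "(" + arg + ")"
    return re.sub(
        r"f\(\s*([^,()]+?)\s*,\s*([^,()]+?)\s*\)",
        lambda m: "(%s / %s)" % (wrap(m.group(1)), wrap(m.group(2))),
        line,
    )
```

Applied to each line of `input.txt` (e.g. `rewrite_line("65 - 3* y = f(2*y/33 , x + 5)")`), this reproduces the expected output shown above, including the "any number of spaces" cases, because the `\s*` parts of the pattern absorb the whitespace around the comma and parentheses.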
2014/05/12
[ "https://Stackoverflow.com/questions/23599970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3626828/" ]
The Firemonkey canvas on Windows is probably not using the GPU. If you are using XE6 you can > > set the global variable FMX.Types.GlobalUseGPUCanvas to true in the initialization section. > > > [Documentation](http://docwiki.embarcadero.com/Libraries/en/FMX.Types.GlobalUseGPUCanvas) Otherwise, in XE5 stick a TViewPort3D on your Form. Stick a TLayer3D in the TViewPort3D and change it's to Projection property to pjScreen. Stick your TPaintBox on the TLayer3D. Another alternative could be there is an [OpenGL canvas unit](https://github.com/FMXExpress/box2d-firemonkey/blob/master/OpenGL%20Canvas/UOpenGLCanvas.pas) You could also parallel process your loop but they will only make your test faster and maybe not your real world game ([Parallel loop in Delphi](https://stackoverflow.com/questions/4390149/how-to-realize-parallel-loop-in-delphi))
When you draw a circle in canvas (i.e. GPUCanvas), you in fact draw around 50 small triangles; this is how GPUCanvas works. It's even worse with, for example, a Rectangle with round rect. I also found that Canvas.BeginScene and Canvas.EndScene are very slow operations. You can try to set Form.Quality to HighPerformance to avoid antialiasing, but I didn't see it really change the speed.
8,236
25,279,746
I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account. I can do this by creating exe from my python script and then use that while creating windows service. But this option is not good for me because in future if I change anything in my python script then I have to regenerate exe every-time. If anybody have any better idea about how to do this then please let me know. Bishnu
2014/08/13
[ "https://Stackoverflow.com/questions/25279746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1811739/" ]
1. Create a service that runs permanently. 2. Arrange for the service to have an IPC communications channel. 3. From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service. 4. The service receives the message and performs the action. That is, executes the python code that the sender requests. This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service. If you don't want to run in a service then you can use `CreateProcessAsUser` or similar APIs.
You could also use Windows Task Scheduler, it can run a script under SYSTEM account and its interface is easy (if you do not test too often :-) )
8,237
49,551,704
I'm new to python and selenium and wondering how I could take a group of text from a web page and input it into an array. Currently, what I have now is a method that, instead of using an array, uses a string and un-neatly displays it. ``` # returns a list of names in the order it is displayed def gather_names(self): fullListNames = "" hover_names = self.browser.find_elements_by_xpath("//div[contains(@class, 'recent-names')]") #xpath to the names that will need to be hovered over for names in hover_names: self.hover_over(names) #hover_over is a method which takes an xpath and will then hover over each of those elements self.wait_for_element("//div[contains(@class, 'recent-names-info')]", 'names were not found') #Checking to see if it is displayed on the page; otherwise, a 'not found' command will print to console time.sleep(3) #giving it time to find each element, otherwise it will go too fast and skip over one listName = names.find_element_by_xpath("//div[contains(@class, 'recent-names-info')]").text #converts to text fullListNames += listName #currently adding every element to a string return fullListNames ``` The output of this looks like ``` name_on_page1name_on_page2name_on_page3 ``` without any spaces in between the names (which I would like to change if I cannot find a way to incorporate this into an array). When I did try making fullListNames an array, I had issues with it grabbing each character of the string and the output looking something like ``` [u'n', u'a', u'm', u'e', u'_', u'o', u'n'].... ``` Preferably, I need a format of ``` [name1, name2, name3] ``` Can anyone point in the right way to handle this?
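Independent of Selenium itself, the fix for the output format is to accumulate into a list instead of using `+=` on a string. A sketch with a stand-in for the WebElement objects (the real code would keep the same `find_element...` and `hover_over` calls and just swap the accumulator):

```python
class FakeElement:
    """Stand-in for a Selenium WebElement exposing a .text attribute."""
    def __init__(self, text):
        self.text = text

def gather_names(elements):
    names = []                      # a list, not a string
    for element in elements:
        names.append(element.text)  # one entry per hovered element
    return names

elements = [FakeElement("name_on_page%d" % i) for i in (1, 2, 3)]
print(gather_names(elements))  # ['name_on_page1', 'name_on_page2', 'name_on_page3']
```

The earlier `[u'n', u'a', ...]` output came from calling `list()` on a string, which splits it into characters; appending each `.text` value keeps whole names.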
2018/03/29
[ "https://Stackoverflow.com/questions/49551704", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9519161/" ]
When you use `pyinstaller` to compile your script to an `executable` on `Windows 10` and want to use it on `Windows 7`, it won't work. But if you compile it with `pyinstaller` on `Windows 7`, you can use the executable on `Windows 7, 8, and 10`. Also take note of the `32-bit and 64-bit` versions of the operating systems: if you use `Windows 7 32-bit` to compile your executable and want to use it on a `Windows 7 64-bit` operating system, it won't work, and vice versa. So when you compile on a `32-bit` version, the executable will work only on `32-bit` versions of the operating system, not on `64-bit` versions, and vice versa.
Maybe you could try using the cx\_Freeze package to build your app on Windows (note: if your PC is 64-bit the app is going to be built for that architecture, and 32-bit if it is x86/32-bit). Run cmd and type this: ``` pip install cx_Freeze ``` Then make a file called setup.py located in the same directory and add this code: ``` import cx_Freeze import sys import os os.environ['TCL_LIBRARY'] = "C:\\Program Files\\Python27\\tcl\\tcl8.6" # point at the folder where tcl\tcl8.6 is os.environ['TK_LIBRARY'] = "C:\\Program Files\\Python27\\tcl\\tk8.6" # point at the folder where tcl\tk8.6 is base = None if sys.platform == 'win32': base = "Win32GUI" executables = [cx_Freeze.Executable("name_of_your_app.py", base=base, icon="icon_of_your_app.ico")] cx_Freeze.setup( name = "Vtext", options = {"build_exe": {"packages":["tkinter"], "include_files":["icon_of_your_app.ico", "maybe_some_img_that_your_app_is_using.gif", "another_img.gif"]}}, version = "1.0", description = "name_of_your_app", executables = executables ) ``` Later open a cmd and change to the directory where your app is, for example: ``` C:\Users\Myname> cd C:\Users\Myname\MyTkinterApp ``` Then type this: ``` python setup.py build ``` And if everything is OK, your app is going to be built. Watch this video for more info: <https://www.youtube.com/watch?v=HosXxXE24hA&t=0s&index=29&list=PLQVvvaa0QuDclKx-QpC9wntnURXVJqLyk>
8,240
66,132,304
i am trying to post on facebook wall using selenium in python. I am able to login but after login it cant find class name of status box which i copied from browser here is my code- ``` from selenium import webdriver from selenium.webdriver.common.keys import Keys import time user_name = "email" password = "password" msg = "hi i am new here" driver = webdriver.Firefox() driver.get("https://www.facebook.com") element = driver.find_element_by_id("email") element.send_keys(user_name) element = driver.find_element_by_id("pass") element.send_keys(password) element.send_keys(Keys.RETURN) time.sleep(5) post_box = driver.find_element_by_class_name("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7") post_box.click() time.sleep(5) post_box.send_keys(msg) ``` the snapshot of code i copied from browser is attached as image [here](https://i.stack.imgur.com/LmZeX.png) here is error i recived- ``` Traceback (most recent call last): File "C:/Users/rosha/Desktop/facebook bot.py", line 17, in <module> post_box = driver.find_element_by_class_name("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7") File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 564, in find_element_by_class_name return self.find_element(by=By.CLASS_NAME, value=name) File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 976, in find_element return self.execute(Command.FIND_ELEMENT, { File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute self.error_handler.check_response(response) File "C:\ProgramData\Anaconda3\envs\facebook bot.py\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7 ```
2021/02/10
[ "https://Stackoverflow.com/questions/66132304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11942305/" ]
Try to find the element by XPath, for example: **driver.find\_element(By.XPATH, '//button[text()="Some text"]')** To find the XPath from the browser, just right-click on something in the webpage and press Inspect. After that, right-click the element in the inspector: a menu will appear. Navigate to Copy, then another menu will appear; press Copy full XPath. Check this: <https://selenium-python.readthedocs.io/locating-elements.html>
The problem is that `driver.find_element_by_class_name()` can be used for one class, and not multiple classes as you have: `a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7` which are multiple classes separated by spaces. Refer to the solution [suggested here](https://stackoverflow.com/a/44760303/12106481), it suggests using `find_elements_by_xpath` or `find_element_by_css_selector`.
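A small helper for turning the space-separated `class` attribute into a compound CSS selector usable with `find_element_by_css_selector` (pure string handling, so it can be tried outside Selenium too; the helper name is illustrative):

```python
def classes_to_css(class_attr):
    """'a b c' -> '.a.b.c': a compound CSS selector that requires
    all of the classes to be present on the same element."""
    return "".join("." + c for c in class_attr.split())

selector = classes_to_css("a8c37x1j ni8dbmo4 stjgntxs l9j0dhe7")
# then, in the script from the question:
#   post_box = driver.find_element_by_css_selector(selector)
```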
8,241
53,384,795
I have this data structure: ``` [array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1]]), array([[0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0]]), array([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], etc.... ``` I want to flatten this into a list of lists something like: ``` [[0 1 0 1 1 1 0 5 1 0 2 1] [1 6 1 0 0 1 1 1 2 0 2 0] [2 0 5 0 5 2 2 0 6 3 2 2] [1 0 1 1 1 1 0 2 0 0 0 1]] ``` How do we do this in python?
2018/11/20
[ "https://Stackoverflow.com/questions/53384795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I believe you're looking for `vstack`: ``` >>> np.vstack(l) array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) ``` Note that this is equivalent to: ``` >>> np.concatenate(x, axis=0) array([[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) ``` To convert to list, use `tolist`: ``` >>> np.vstack(l).tolist() [[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] # or >>> np.concatenate(x, axis=0).tolist() [[0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1], [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0], [1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] ```
Use `flatten`: ``` print([i.flatten() for i in l]) ``` Or: ``` print(list(map(lambda x: x.flatten(),l))) ``` Both output: ``` [array([0, 1, 0, 1, 1, 1, 0, 5, 1, 0, 2, 1]), array([0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 1, 0, 1, 3, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0]), array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])] ```
8,242
62,262,007
``` import base64 s = "05052020" ``` python2.7 ``` base64.b64encode(s) ``` output is string `'MDUwNTIwMjA='` python 3.7 ``` base64.b64encode(b"05052020") ``` output is bytes ``` b'MDUwNTIwMjA=' ``` I want to replace = with "a" ``` s = str(base64.b64encode(b"05052020"))[2:-1] s = s.replace("=", "a") ``` I realise it is dirty way so how can I do it better? EDIT: expected result: Python code 3 output string with replaced padding
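A sketch of the decode-based route in Python 3, which avoids slicing the `str(...)` repr entirely (the `'a'` replacement for padding is the question's own convention, not anything standard):

```python
import base64

def encode_padded(text):
    """base64-encode a str and swap the '=' padding for 'a'."""
    encoded = base64.b64encode(text.encode("ascii")).decode("ascii")
    return encoded.replace("=", "a")

print(encode_padded("05052020"))  # MDUwNTIwMjAa
```

`b64encode` always returns `bytes`; `.decode("ascii")` is safe here because base64 output only contains ASCII characters.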
2020/06/08
[ "https://Stackoverflow.com/questions/62262007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4450090/" ]
If you tried both language servers and VS Code made you reload then you have tried the options currently available to you from the Python extension. We are actively working on making it better, though, and hope to having something to say about it shortly. But if you can't wait you can try something like <https://marketplace.visualstudio.com/items?itemName=ms-pyright.pyright> as an alternative language server.
It might be a problem related to Pylance. By default, Pylance only looks for modules in the root directory. Making some tweaks in the settings made sure everything I import in VSCode works as if it's imported in PyCharm. Please see: <https://stackoverflow.com/a/67099842/6381389>
8,243
69,675,173
Following is the content of `foo.py` ```py import sys print(sys.executable) ``` When I execute this, I can get the full path of the the Python interpreter that called this script. ```sh $ /mingw64/bin/python3.9.exe foo.py /mingw64/bin/python3.9.exe ``` How to do this in nim (`nimscript`)?
2021/10/22
[ "https://Stackoverflow.com/questions/69675173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1805129/" ]
The question mentions [NimScript](https://nim-lang.github.io/Nim/nims.html), which has other uses in the Nim ecosystem, but can also be used to write executable scripts instead of using, e.g., Bash or Python. You can use the [`selfExe`](https://nim-lang.github.io/Nim/nimscript.html#selfExe) proc to get the path to the Nim executable which is running a NimScript script: ``` #!/usr/bin/env -S nim --hints:off mode = ScriptMode.Silent echo selfExe() ``` After saving the above as `test.nims` and using `chmod +x` to make the file executable, the script can be invoked to show the path to the current Nim executable: ```none $ ./test.nims /home/.choosenim/toolchains/nim-1.4.8/bin/nim ```
Nim is compiled, so I assume you want to get the path of the application's own binary? If so, you can do that with: ``` import std/os echo getAppFilename() ```
8,244
47,405,748
I am reading python official documentation word by word. In the 3.3. Special method names [3.3.1. Basic customization](https://docs.python.org/3/reference/datamodel.html#basic-customization) It does specify 16 special methods under `object` basic customization, I collect them as following: ``` In [47]: bc =['__new__', '__init__', '__del__', '__repr__', '__str__', '__bytes__', '__format__', '__eq__', '__le__', '__lt__', '__eq__', '__ne__', '__ge__', '__gt__', '__hash__', '__bool__'] In [48]: len(bc) Out[48]: 16 ``` The problem is that there's three of them are not `object`'s valid attributes ``` In [50]: for attr in bc: ...: if hasattr(object,attr): ...: pass ...: else: ...: print(attr) ...: __del__ __bytes__ __bool__ ``` Object is a base for all classes. [2. Built-in Functions — Python 3.6.3 documentation](https://docs.python.org/3/library/functions.html?highlight=object#object) It has no recursive base classes. Where are methods of `__del__`, `__bytes__`,`__bool__` defined for class 'object'?
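These three are optional hooks: `object` itself does not define them, and the interpreter falls back to a default when the slot is absent (every object is truthy unless `__bool__` or `__len__` says otherwise, `__del__` is simply never called, and `bytes(obj)` raises `TypeError`). A class supplies them only when it opts in — a quick illustration:

```python
# object itself has no __bool__; instances default to truthy
assert not hasattr(object, "__bool__")
assert bool(object()) is True

class Empty:
    def __bool__(self):          # opt in to the truthiness hook
        return False

assert bool(Empty()) is False

class Blob:
    def __bytes__(self):         # opt in to bytes() support
        return b"\x00\x01"

assert bytes(Blob()) == b"\x00\x01"
```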
2017/11/21
[ "https://Stackoverflow.com/questions/47405748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7301792/" ]
I hope this will help you and it works fine on my project. ``` public static final int MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE = 123; public static final int MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE = 124; public static final int MY_PERMISSIONS_REQUEST_CAMERA = 124; @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public static boolean checkPermissionread(final Context context) { int currentAPIVersion = Build.VERSION.SDK_INT; if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) { if (ContextCompat.checkSelfPermission(context, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) { if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.READ_EXTERNAL_STORAGE)) { AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context); alertBuilder.setCancelable(true); alertBuilder.setTitle("Permission necessary"); alertBuilder.setMessage("External storage permission is necessary"); alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() { @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public void onClick(DialogInterface dialog, int which) { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE); } }); AlertDialog alert = alertBuilder.create(); alert.show(); } else { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_READ_EXTERNAL_STORAGE); } return false; } else { return true; } } else { return true; } } @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public static boolean checkPermissionwrite(final Context context) { int currentAPIVersion = Build.VERSION.SDK_INT; if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) { if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) { if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.CAMERA)) { AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context); alertBuilder.setCancelable(true); alertBuilder.setTitle("Permission necessary"); alertBuilder.setMessage("External storage permission is necessary"); alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() { @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public void onClick(DialogInterface dialog, int which) { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA); } }); AlertDialog alert = alertBuilder.create(); alert.show(); } else { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.CAMERA}, MY_PERMISSIONS_REQUEST_CAMERA); } return false; } else { return true; } } else { return true; } } @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public static boolean checkPermissioncamera(final Context context) { int currentAPIVersion = Build.VERSION.SDK_INT; if (currentAPIVersion >= android.os.Build.VERSION_CODES.M) { if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) { if (ActivityCompat.shouldShowRequestPermissionRationale((Activity) context, Manifest.permission.WRITE_EXTERNAL_STORAGE)) { AlertDialog.Builder alertBuilder = new AlertDialog.Builder(context); alertBuilder.setCancelable(true); alertBuilder.setTitle("Permission necessary"); alertBuilder.setMessage("External storage permission is necessary"); alertBuilder.setPositiveButton(android.R.string.yes, new DialogInterface.OnClickListener() { @TargetApi(Build.VERSION_CODES.JELLY_BEAN) public void onClick(DialogInterface dialog, int which) { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE); } }); AlertDialog alert = alertBuilder.create(); alert.show(); } else { ActivityCompat.requestPermissions((Activity) context, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, MY_PERMISSIONS_REQUEST_WRITE_EXTERNAL_STORAGE); } return false; } else { return true; } } else { return true; } } protected ProgressDialog mProgressDialog; protected void showProgressDialog(String title, String message) { /* if(mProgressDialog!=null) { mProgressDialog.dismiss(); }*/ mProgressDialog = ProgressDialog.show(this, title, message); } ```
Try adding permission to MANIFEST file. ``` <uses-permission android:name="android.permission.CAMERA" /> ``` and in your checkPermissionsR() get this permission ``` ContextCompat.checkSelfPermission(this, Manifest.permission.Camera) ```
8,246
59,783,094
I am running `py.test` 4.3.1 with `python` 3.7.6 on a Mac (Mojave) and I want to get the list of markers for the 'session', once at the begin of the run. In `conftest.py` I have tried using the following function: ``` @pytest.fixture(scope="session", autouse=True) def collab_setup(request): print([marker.name for marker in request.function.pytestmark]) ``` which, however, results in an error ``` E AttributeError: function not available in session-scoped context ``` when I call a dummy test like ``` py.test -s -m "mark1 and mark2" tests/tests_dummy.py ``` It is important to have the list of markers only once for my testing session, as in the end I want to setup something for all the tests in the testsuite. That is why I must not call this function more than once per test session. Is this possible to achieve?
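A session-scoped fixture cannot see per-test markers, but the `-m` expression itself is stored on the config object (`request.config.option.markexpr` in pytest), which is available once per session. The parsing helper below is illustrative and only handles simple `and`/`or`/`not` expressions such as the one in the question:

```python
import re

def marker_names(markexpr):
    """Pull the marker names out of a '-m' expression like
    'mark1 and mark2', ignoring the boolean keywords."""
    return [w for w in re.findall(r"\w+", markexpr)
            if w not in ("and", "or", "not")]

# in conftest.py one would write (sketch):
#
# @pytest.fixture(scope="session", autouse=True)
# def collab_setup(request):
#     expr = request.config.option.markexpr  # e.g. "mark1 and mark2"
#     print(marker_names(expr))              # ['mark1', 'mark2']
```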
2020/01/17
[ "https://Stackoverflow.com/questions/59783094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1581090/" ]
If the `locationName` gets as input `[1,5]` then the code should look like this: ``` filterData(locationName: number[]) { return ELEMENT_DATA.filter(object => { return locationName.includes(object.position); }); } ```
You can use [Array.prototype.filter()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) ``` ELEMENT_DATA.filter(function (object) { return locationName.indexOf(object.position) !== -1; // -1 means not present }); ``` or with underscore JS , using the same predicate: ``` _.filter(ELEMENT_DATA, function (object) { return locationName.indexOf(object.position) !== -1; // -1 means not present } ``` If you have access to ES6 collections or a polyfill for [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set/has). Here **locationName** should be a type of Set ``` ELEMENT_DATA.filter(object=> locationName.has(object.position)) ```
8,255
14,490,845
Python 2.6 on Redhat 6.3 I have a device that saves 32 bit floating point value across 2 memory registers, split into most significant word and least significant word. I need to convert this to a float. I have been using the following code found on SO and it is similar to code I have seen elsewhere ``` #!/usr/bin/env python import sys from ctypes import * first = sys.argv[1] second = sys.argv[2] reading_1 = str(hex(int(first)).lstrip("0x")) reading_2 = str(hex(int(second)).lstrip("0x")) sample = reading_1 + reading_2 def convert(s): i = int(s, 16) # convert from hex to a Python int cp = pointer(c_int(i)) # make this into a c integer fp = cast(cp, POINTER(c_float)) # cast the int pointer to a float pointer return fp.contents.value # dereference the pointer, get the float print convert(sample) ``` an example of the register values would be ; register-1;16282 register-2;60597 this produces the resulting float of 1.21034872532 A perfectly cromulent number, however sometimes the memory values are something like; register-1;16282 register-2;1147 which, using this function results in a float of; 1.46726675314e-36 which is a fantastically small number and not a number that seems to be correct. This device should be producing readings around the 1.2, 1.3 range. What I am trying to work out is if the device is throwing bogus values or whether the values I am getting are correct but the function I am using is not properly able to convert them. Also is there a better way to do this, like with numpy or something of that nature? I will hold my hand up and say that I have just copied this code from examples on line and I have very little understanding of how it works, however it seemed to work in the test cases that I had available to me at the time. Thank you.
2013/01/23
[ "https://Stackoverflow.com/questions/14490845", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1635823/" ]
If you have the raw bytes (e.g. read from memory, from file, over the network, ...) you can use `struct` for this: ``` >>> import struct >>> struct.unpack('>f', '\x3f\x9a\xec\xb5')[0] 1.2103487253189087 ``` Here, `\x3f\x9a\xec\xb5` are your input registers, 16282 (hex 0x3f9a) and 60597 (hex 0xecb5) expressed as bytes in a string. The `>` is the byte order mark. So depending how you get the register values, you may be able to use this method (e.g. by converting your input integers to byte strings). You can use `struct` for this, too; this is your second example: ``` >>> raw = struct.pack('>HH', 16282, 1147) # from two unsigned shorts >>> struct.unpack('>f', raw)[0] # to one float 1.2032617330551147 ```
The way you're converting the two `int`s makes implicit assumptions about [endianness](http://en.wikipedia.org/wiki/Endianness) that I believe are wrong. So, let's back up a step. You know that the first argument is the most significant word, and the second is the least significant word. So, rather than try to figure out how to combine them into a hex string in the appropriate way, let's just do this: ``` import struct import sys first = sys.argv[1] second = sys.argv[2] sample = int(first) << 16 | int(second) ``` Now we can just convert like this: ``` def convert(i): s = struct.pack('=i', i) return struct.unpack('=f', s)[0] ``` And if I try it on your inputs: ``` $ python floatify.py 16282 60597 1.21034872532 $ python floatify.py 16282 1147 1.20326173306 ```
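Since the question asks whether numpy could handle this: a possible sketch (my own helper, assuming the same big-endian most/least-significant word order as in the answers above) packs the two 16-bit registers into one 32-bit word and reinterprets the same bytes as an IEEE-754 single with `.view()`:

```python
import numpy as np

def regs_to_float(msw, lsw):
    # combine most/least significant 16-bit words into one 32-bit word,
    # then reinterpret those 4 bytes as a big-endian IEEE-754 float
    word = np.uint32((int(msw) << 16) | int(lsw))
    return np.array([word], dtype='>u4').view('>f4')[0]
```

This avoids any detour through hex strings, and on whole arrays of register pairs the same `.view()` trick converts everything vectorized.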
8,256
58,770,519
How to do this ``` c++ -> Python -> c++ ^ | | | ----------------- ``` 1. C++ app is hosting python. 2. Python creates a class, which is actually a wrapping to c/c++ object 3. How to get access from hosting c++ to c/c++ pointer of this object created by python? **Example with code:** Imagine I have a C++ class wrapped for python (e.g. using boost.python) ``` // foo.cpp #include <boost/python.hpp> struct Cat { Cat (int fur=4): _fur{fur} { } int get_fur() const { return _fur; } private: int _fur; }; BOOST_PYTHON_MODULE(foo) { using namespace boost::python; class_<Cat>("Cat", init<int>()) .def("get_fur", &Cat::get_fur, "Density of cat's fur"); } ``` Now **I'm hosting a python in C++**. A python script creates **Cat** => A c++ Cat instance is created underneath. How to get a pointer to c++ instance of Cat from hosting c++ (from C++ to C++)? ``` #include <Python.h> int main(int argc, char *argv[]) { Py_SetProgramName(argv[0]); /* optional but recommended */ Py_Initialize(); PyRun_SimpleString("from cat import Cat \n" "cat = Cat(10) \n"); // WHAT to do here to get pointer to C++ instance of Cat ... ??? ... std::cout << "Cat's fur: " << cat->get_fur() << std::endl Py_Finalize(); return 0; } ``` **Real application** The real problem is this: we have a c++ framework which has pretty complex initialization and configuration phase where performance is not critical; and then processing phase, where performance is everything. There is a python wrapping for the framework. Defining things in python is very convenient but running from python is still slower than pure c++ code. It is tempting for many reasons to do this configuration/initialization phase in python, get pointers to underneath C++ objects and then run in "pure c++ mode". It would be easy if we could write everything from scratch, but we have already pretty much defined (30 years old) c++ framework.
2019/11/08
[ "https://Stackoverflow.com/questions/58770519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/548894/" ]
``` #include <boost/python.hpp> #include <Python.h> #include <iostream> using namespace boost::python; int main(int argc, char *argv[]) { Py_SetProgramName(argv[0]); Py_Initialize(); object module = import("__main__"); object space = module.attr("__dict__"); exec("from cat import Cat \n" "cat = Cat(10) \n", space); Cat& cat = extract<Cat&>(space["cat"]); std::cout<<"Cat's fur: "<< cat.get_fur() <<"\n"; //or: Cat& cat2 = extract<Cat&>(space.attr("cat")); std::cout<<"Cat's fur: "<< cat2.get_fur() <<"\n"; //or (eval takes an expression, not an assignment): object result = eval("Cat(10)", space); Cat& cat3 = extract<Cat&>(result); std::cout<<"Cat's fur: "<< cat3.get_fur() <<"\n"; Py_Finalize(); return 0; } ```
In case the wrapper is open source, use the wrapper's python object struct from C++. Cast the PyObject \* to that struct which should have a PyObject as its first member iirc, and simply access the pointer to the C++ instance. Make sure that the instance is not deleted while you're using it by keeping the wrapper instance around in Python. When it is released, it will probably delete the wrapped C++ instance you now still are holding a pointer to. And keep in mind that this approach is risky and fragile, as it depends on implementation details of the wrapper that might change in future versions. Starting point: [cpython/object.h](https://github.com/python/cpython/blob/master/Include/object.h)
8,257
6,124,701
I feel like this is simple, but I just don't know enough about python to do it correctly. I have two files: 1. File with lines listing an id number and whether that id is used. Format is 'id, isUsed'. 2. File with rules containing one rule for each id. So what I want to do is to parse through the file with id-used pairs and then based on that information, I will find the corresponding rule in the second file and then comment or un-comment the rule based on if that rule is used. Is there an easy way to search through the second file for the rule I am looking for instead of searching it line by line every time? Also, do I have to re-write the file every time I change the file. Here is what I have so far I don't really know what the best way to implement modifyRulesFile(): ``` def editRulesFile(pairFile, ruleFile): pairFd = open(pairFile, 'r') ruleFd = open(ruleFile, 'rw') for line in pairFd.readLine(): id,isUsed = line.split(',') modifyRulesFile(ruleFd, id, isUsed) def modifyRulesFile(fd, id, isUsed): for line in fd.readLine(): # Find line with id in it and add a comment or remove comment based on isUsed ```
2011/05/25
[ "https://Stackoverflow.com/questions/6124701", "https://Stackoverflow.com", "https://Stackoverflow.com/users/240522/" ]
I suggest you read the rules file into a dictionary (id -> rule). Then, as you read the config file, write out the corresponding rule (including a comment if you need to). Some pseudocode: ``` rules = {} for id, rule in read_rules_file(): rules[id] = rule for id, isUsed in read_pairs_file(): if isUsed: write_rule(id, rules[id]) else: write_commented_rule(id, rules[id]) ``` This way, you will pass through each file only once. If the rules file gets very long, you might run out of memory, but, well, that normally takes a long time to happen! You can use generators to avoid keeping all the pairs in memory at once: ``` def read_pairs_file(): pairFd = open(pairFile, 'r') for line in pairFd.readlines(): id, isUsed = line.split(',') yield (id, isUsed) pairFd.close() ```
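A runnable end-to-end sketch of this dictionary approach, with in-memory `StringIO` objects standing in for the real files and a hypothetical `#`-prefix standing in for "commented out" (the actual rule syntax isn't given in the question):

```python
import io

# stand-ins for the rules file, the id/isUsed pairs file, and the output file
rules_src = io.StringIO("1,allow tcp\n2,deny udp\n")
pairs_src = io.StringIO("1,True\n2,False\n")
out = io.StringIO()

# read the rules file once into a dict: id -> rule
rules = dict(line.rstrip("\n").split(",", 1) for line in rules_src)

# single pass over the pairs file, writing each rule commented or not
for line in pairs_src:
    rid, is_used = line.rstrip("\n").split(",")
    if is_used == "True":
        out.write(rules[rid] + "\n")
    else:
        out.write("# " + rules[rid] + "\n")
```

Each file is read exactly once, and the output can be written to a temporary file and moved over the original.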
I don't know why I didn't think of this before, but there is another way to do this. First, you read which rules should be used (or not used) into memory; I stored it in a dictionary. ``` def readRulesIntoMemory(fileName): rules = {} # Open csv file with rule id, isUsed pairs fd = open(fileName, 'r') if fd: for line in fd.readlines(): id,isUsed = line.split(',') rules[id] = isUsed return rules ``` Then while reading the list of current rules in the other file, write your changes to a temporary file. ``` def createTemporaryRulesFile(temporaryFileName, rulesFileName, rules): # Open current rules file for reading. rulesFd = open(rulesFileName, 'r') if not rulesFd: return False # Open temporary file for writing tempFd = open(temporaryFileName, 'w') if not tempFd: return False # Iterate through each current rule. for line in rulesFd.readlines(): id = getIdFromLine(line) isCommented = True # Default to commenting out rule # If rule's id was in csv file from earlier, save whether we comment # the line or not. if id in rules: isCommented = rules[id] if isCommented: writeCommentedLine(tempFd, line) else: writeUncommentedLine(tempFd, line) return True ``` Now we can copy the new temp file over the original if we want to.
8,258
63,749,945
I'm a beginner at python and I made this random date generator which should generate year-month-date output. And 4,6,9,11 months have 30 days, all others 31. But I'm having a problem where february in leap year still generates date=30 despite if and elif having the condition where M must be 2. ``` import random import calendar for i in range(500): Y = rand.randint(0, 170) M = rand.randint(1, 12) if calendar.isleap(G+1850) == True and M == 2: D = rand.randint(1, 28) elif calendar.isleap(G+1850) == False and M == 2: D=rand.randint(1, 29) if M == 4 or M == 6 or M == 9 or M == 11: D=rand.randint(1, 30) else: D=rand.randint(1, 31) ```
2020/09/05
[ "https://Stackoverflow.com/questions/63749945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14224115/" ]
Several problems mentioned in comments: * you are using `rand` instead of `random` * you are using `G` when presumably it should be `Y` Note also that the February ranges are swapped in the original: a leap-year February has 29 days, not 28. See the refactored code, rewriting the `if` statements to first test `M` then test `Y`. ``` import random import calendar for i in range(500): Y = random.randint(0, 170) M = random.randint(1, 12) if M == 2: if calendar.isleap(Y + 1850): Dmax = 29 else: Dmax = 28 elif M in [4, 6, 9, 11]: Dmax = 30 else: Dmax = 31 D = random.randint(1, Dmax) print(Y, M, D) ``` --- A more pythonic way would be to use `timedelta` to create a random date from the origin. ``` import random from datetime import datetime, timedelta def random_date(start: datetime, end: datetime): days = (end - start).days return start + timedelta(days=random.randint(0, days)) start = datetime(1850, 1, 1) end = datetime(2020, 12, 31) for i in range(500): print(random_date(start, end)) ```
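For what it's worth, the standard library can also supply the month length directly: `calendar.monthrange` already accounts for leap years, which removes the hand-written branching altogether. A sketch (my own helper, reusing the question's 1850 + 0..170 year range):

```python
import calendar
import random

def random_date():
    y = 1850 + random.randint(0, 170)
    m = random.randint(1, 12)
    # monthrange returns (weekday of the 1st, number of days in the month)
    d = random.randint(1, calendar.monthrange(y, m)[1])
    return y, m, d
```

This can never emit February 30th, leap year or not, because the day is drawn from the month's actual length.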
The final else-block is executed if `M == 2` and overwrites `D`. A simple solution is to reorder the two if parts (also swapping the February ranges, since a leap-year February has 29 days, not 28): ``` import random as rand import calendar for i in range(500): G = rand.randint(0, 170) M = rand.randint(1, 12) if M == 4 or M == 6 or M == 9 or M == 11: D=rand.randint(1, 30) else: D=rand.randint(1, 31) if calendar.isleap(G+1850) == True and M == 2: D = rand.randint(1, 29) elif calendar.isleap(G+1850) == False and M == 2: D=rand.randint(1, 28) ```
8,259
6,132,423
I was trying to install SCRAPY and play with it. The tutorial says to run this: ``` scrapy startproject tutorial ``` Can you please break this down to help me understand it. I have various releases of Python on my Windows 7 machine for various conflicting projects, so when I installed Scrapy with their .exe, it installed it in c:\Python26\_32bit directory, which is okay. But I don't have any one version of Python in my path. So I tried: ``` \python26_32bit\python.exe scrapy startproject tutorial ``` and I get the error: ``` \python26_32bit\python.exe: can't open file 'scrapy': [Errno 2] No such file or directory. ``` I do see scrapy installed here: c:\Python26\_32bit\Lib\site-packages\scrapy I cannot find any file called scrapy.py, so what exactly is "scrapy" in Python terminology, a lib, a site-package, a program, ?? and how do I change the sample above to run? I'm a little more used to Python in Google App Engine environment, so running on my local machine is often more challenging and foreign to me.
2011/05/26
[ "https://Stackoverflow.com/questions/6132423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/160245/" ]
On Windows, `scrapy` is a batch file that executes a Python script also called "scrapy", so you need to add that script's directory to your PATH environment variable. If that still does not work, create a "scrapy.py" file with the content ``` from scrapy.cmdline import execute execute() ``` and run `\python26_32bit\python.exe scrapy.py startproject tutorial`
Try ``` C:\Python26_32bit\Scripts\Scrapy startproject tutorial ``` or add `C:\Python26_32bit\Scripts` to your path
8,260
70,058,771
Im trying to use python to determine the continued fractions of pi by following the stern brocot tree. Its simple, if my estimation of pi is too high, take a left, if my estimation of pi is too low, take a right. Im using `mpmath` to get arbitrary precision floating numbers, as python doesn't support that, but no matter what i set the decimal precision to using 'mp.dps', the continued fraction generation seems to stop once it hits `245850922/78256779`. In theory, it should only exit execution when it is equal to the current estimation for pi. So I tried increasing the decimal precision of `mp.dps`, however it still halts execution there. have i reached a maximum amount of precision with `mp.dps` or is my approach inefficient? how can i make the continued fraction generation not cease at `245850922/78256779`??? ``` import mpmath as mp mp.dps = 1000 def eval_stern_seq(seq): a,b,c,d,m,n=0,1,1,0,1,1 for i in seq: if i=='L': c,d=m,n else: a,b=m,n m,n=a+c,b+d return m,n seq = '' while True: stern_frac = eval_stern_seq(seq) print(f"\n\ncurrent fraction: {stern_frac[0]}/{stern_frac[1]}") print("current value: " + mp.nstr(mp.fdiv(stern_frac[0],stern_frac[1]),n=mp.dps)) print("pi (reference): " + mp.nstr(mp.pi,n=mp.dps)) if mp.fdiv(stern_frac[0],stern_frac[1]) > mp.pi: seq+='L' elif mp.fdiv(stern_frac[0],stern_frac[1]) < mp.pi: seq+='R' else: break input("\n\t(press enter to continue generation...)") ```
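The mediant walk itself can be checked independently of mpmath with exact arithmetic: drive the same left/right logic with `fractions.Fraction` and compare against a fixed rational target instead of a float. A sketch (my own helper, not from the question, using 355/113 as a stand-in target, since pi itself is irrational and an exact walk toward it never terminates):

```python
from fractions import Fraction

def stern_brocot(target, max_steps=10_000):
    # left bound a/b = 0/1, right bound c/d = 1/0, as in eval_stern_seq
    a, b, c, d = 0, 1, 1, 0
    m, n = 1, 1
    for _ in range(max_steps):
        m, n = a + c, b + d          # mediant of the two bounds
        if Fraction(m, n) < target:
            a, b = m, n              # estimate too low: go right
        elif Fraction(m, n) > target:
            c, d = m, n              # estimate too high: go left
        else:
            break                    # hit the target exactly
    return m, n
```

Because the comparisons are exact, any stall must come from the floating-point comparison in the original loop, not from the tree walk.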
2021/11/21
[ "https://Stackoverflow.com/questions/70058771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16155472/" ]
Never fails that as soon as I post a question I find the answer. For anyone else looking for something similar: The first bracket matches all slashes `[/]` The parenthesis capture the group, in this case the group of numbers `([0-9])` The `[0-9]` searches the range of numbers between 0 and 9 `{8}` is the quantifier, it's there so we look for a group of 8 numbers That said, My expression I needed was `[/]([0-9]){8}[/]` which will select all groups of 8 numbers between slashes like: [https://cdn.mydomain.com/wp-content/uploads/2020/05\*\*/20125258/\*\*image1.jpg](https://cdn.mydomain.com/wp-content/uploads/2020/05**/20125258/**image1.jpg) [https://cdn.mydomain.com/wp-content/uploads/2021/10\*\*/13440323/\*\*image-ex2.jpg](https://cdn.mydomain.com/wp-content/uploads/2021/10**/13440323/**image-ex2.jpg) [https://cdn.mydomain.com/wp-content/uploads/2012/01\*\*/92383422/\*\*my-image3.jpg](https://cdn.mydomain.com/wp-content/uploads/2012/01**/92383422/**my-image3.jpg) For what it's worth, this site helped me a lot with writing this and testing it <https://regexr.com/>
Use ```php /\d{8}/ ``` See [regex proof](https://regex101.com/r/sPLT7b/1). **EXPLANATION** ```php -------------------------------------------------------------------------------- / '/' -------------------------------------------------------------------------------- \d{8} digits (0-9) (8 times) -------------------------------------------------------------------------------- / '/' ``` If you would like to exclude slashes: ``` (?<=/)\d{8}(?=/) ``` See [another regex proof](https://regex101.com/r/sPLT7b/2). **EXPLANATION** ```php -------------------------------------------------------------------------------- (?<= look behind to see if there is: -------------------------------------------------------------------------------- / '/' -------------------------------------------------------------------------------- ) end of look-behind -------------------------------------------------------------------------------- \d{8} digits (0-9) (8 times) -------------------------------------------------------------------------------- (?= look ahead to see if there is: -------------------------------------------------------------------------------- / '/' -------------------------------------------------------------------------------- ) end of look-ahead ```
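For anyone needing the same match from Python rather than a regex tester, a quick sketch with the standard `re` module, applying the lookaround pattern above to one of the example URLs:

```python
import re

url = "https://cdn.mydomain.com/wp-content/uploads/2020/05/20125258/image1.jpg"
# exactly 8 digits enclosed by slashes; the slashes stay out of the match
match = re.search(r"(?<=/)\d{8}(?=/)", url)
```

`re.findall` with the same pattern returns every such segment when a string contains more than one.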
8,263
71,857,414
**The error :** from asyncio.windows\_events import NULL File "/app/.heroku/python/lib/python3.10/asyncio/windows\_events.py", line 6, in raise ImportError('win32 only') ImportError: win32 only **please how can i fix this ?**
2022/04/13
[ "https://Stackoverflow.com/questions/71857414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18793327/" ]
I had the same error while trying to deploy: ``` File "/tmp/build_4a1c8563/base/models.py", line 1, in <module> from asyncio.windows_events import NULL File "/app/.heroku/python/lib/python3.9/asyncio/windows_events.py", line 6, in <module> raise ImportError('win32 only') ``` I deleted the "from asyncio.windows\_events import NULL" in models.py and the issue was solved. Wasn't even using that module...
I had the same error while deploying: your IDE imported the `from asyncio.windows_events import NULL` line automatically while you were typing NULL. Just delete this line: ``` from asyncio.windows_events import NULL ```
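If the stray line was auto-inserted into several files, it can be filtered out mechanically; a small sketch (a hypothetical helper operating on source text, not part of either answer):

```python
BAD = "from asyncio.windows_events import NULL"

def strip_stray_import(source: str) -> str:
    # drop any line that is exactly the auto-added Windows-only import
    return "\n".join(
        line for line in source.splitlines() if line.strip() != BAD
    )
```

Run it over each module's text before deploying to a Linux dyno, where `asyncio.windows_events` raises `ImportError('win32 only')` on import.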
8,264
42,164,772
I can't get summaries to work with the Estimator API of Tensorflow. The Estimator class is very useful for many reasons: I have already implemented my own classes which are really similar but I am trying to switch to this one. Here is the code sample: ``` import tensorflow as tf import tensorflow.contrib.layers as layers import tensorflow.contrib.learn as learn import numpy as np # To reproduce the error: docker run --rm -w /algo -v $(pwd):/algo tensorflow/tensorflow bash -c "python sample.py" def model_fn(x, y, mode): logits = layers.fully_connected(x, 12, scope="dense-1") logits = layers.fully_connected(logits, 56, scope="dense-2") logits = layers.fully_connected(logits, 4, scope="dense-3") loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y), name="xentropy") return {"predictions":logits}, loss, tf.train.AdamOptimizer(0.001).minimize(loss) def input_fun(): """ To be completed for a 4 classes classification problem """ feature = tf.constant(np.random.rand(100,10)) labels = tf.constant(np.random.random_integers(0,3, size=(100,))) return feature, labels estimator = learn.Estimator(model_fn=model_fn, ) trainingConfig = tf.contrib.learn.RunConfig(save_checkpoints_secs=60) estimator = learn.Estimator(model_fn=model_fn, model_dir="./tmp", config=trainingConfig) # Works estimator.fit(input_fn=input_fun, steps=2) # The following code does not work # Can't initialize saver # saver = tf.train.Saver(max_to_keep=10) # Error: No variables to save # The following fails because I am missing a saver... :( hooks=[ tf.train.LoggingTensorHook(["xentropy"], every_n_iter=100), tf.train.CheckpointSaverHook("./tmp", save_steps=1000, checkpoint_basename='model.ckpt'), tf.train.StepCounterHook(every_n_steps=100, output_dir="./tmp"), tf.train.SummarySaverHook(save_steps=100, output_dir="./tmp"), ] estimator.fit(input_fn=input_fun, steps=2, monitors=hooks) ``` As you can see, I can create an Estimator and use it but I can't manage to add hooks to the fitting process. The logging hook works just fine but the others require both **tensors** and a **saver** which I can't provide. The tensors are defined in the model function, thus I can't pass them to the **SummaryHook** and the **Saver** can't be initialized because there is no tensor to save... Is there a solution to my problem? (I am guessing yes but there is a lack of documentation of this part in the tensorflow documentation) * How can I initialize my **saver**? Or should I use other objects such as *Scaffold*? * How can I pass **summaries** to the **SummaryHook** since they are defined in my model function? Thanks in advance. *PS: I have seen the DNNClassifier API but I want to use the estimator API for Convolutional Nets and others. I need to create summaries for any estimator.*
2017/02/10
[ "https://Stackoverflow.com/questions/42164772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5184894/" ]
The intended use case is that you let the Estimator save summaries for you. There are options in [RunConfig](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/run_config.py#L182) for configuring summary writing. RunConfigs get passed when [constructing the Estimator](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/estimators/estimator.py#L347).
Just have `tf.summary.scalar("loss", loss)` in the `model_fn`, and run the code without `summary_hook`. The loss is recorded and shown in the tensorboard. --- See also: * [Tensorflow - Using tf.summary with 1.2 Estimator API](https://stackoverflow.com/questions/45086109/tensorflow-using-tf-summary-with-1-2-estimator-api)
8,265
24,021,831
I'm a Transifex user, I need to retrieve my dashboard page with the list of all the projects of my organization. that is, the page I see when I login: <https://www.transifex.com/organization/(my_organization_name)/dashboard> I can access Transifex API with this code: ``` import urllib.request as url usr = 'myusername' pwd = 'mypassword' def GetUrl(Tx_url): auth_handler = url.HTTPBasicAuthHandler() auth_handler.add_password(realm='Transifex API', uri=Tx_url, user=usr, passwd=pwd) opener = url.build_opener(auth_handler) url.install_opener(opener) f = url.urlopen(Tx_url) return f.read().decode("utf-8") ``` everything is ok, but there's no API call to get all the projects of my organization. the only way is to get that page html, and parse it, but if I use this code, I get the login page. This works ok with google.com, but I get an error with www.transifex.com or www.transifex.com/organization/(my\_organization\_name)/dashboard **[Python, HTTPS GET with basic authentication](https://stackoverflow.com/questions/6999565/python-https-get-with-basic-authentication?rq=1)** I'm new at Python, I need some code with Python 3 and only standard library. Thanks for any help.
2014/06/03
[ "https://Stackoverflow.com/questions/24021831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3704129/" ]
The call to > > /projects/ > > > returns your projects along with all the public projects that you have access to (as you said). You can search for the ones that you need by modifying the call to something like: > > <https://www.transifex.com/api/2/projects/?start=1&end=6> > > > Doing so, the number of projects returned will be restricted. For now, if you don't have many projects, it may be more convenient to use this call: > > /project/project\_slug > > > and fetch each one separately.
Transifex comes with an API, and you can use it to fetch all the projects you have. I think that what you need is [this](http://docs.transifex.com/developer/api/projects) GET request on projects. It returns a list of (slug, name, description, source\_language\_code) for all projects that you have access to, in JSON format. Since you are familiar with python, you could use the [requests](http://www.django-rest-framework.org/api-guide/requests) library to perform the same actions in a much easier and more readable way. You will just need to do something like this: ``` import requests import json AUTH = ('yourusername', 'yourpassword') url = 'https://www.transifex.com/api/2/projects' headers = {'Content-type': 'application/json'} response = requests.get(url, headers=headers, auth=AUTH) ``` I hope I've helped.
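Once the response is back, extracting the project slugs is a one-liner; a sketch assuming the documented list-of-objects JSON shape (the sample payload below stands in for `response.json()`, and the field names beyond `slug`/`name` are illustrative):

```python
import json

# stand-in for response.json() from GET /api/2/projects
payload = json.loads(
    '[{"slug": "proj-a", "name": "Project A"},'
    ' {"slug": "proj-b", "name": "Project B"}]'
)
slugs = [project["slug"] for project in payload]
```

Each slug can then be fed into the per-project `/project/<slug>` call mentioned in the other answer.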
8,266
60,202,828
I have been learning about the Trie structure through python. What is a little bit different about this trie compared to other tries is the fact that we are trying to implement a counter into every node of the trie in order to do an autocomplete (that is the final hope for the project). So far, I decided that having a recursive function to put the letter into a list of dictionaries would be a good idea. Final Product (Trie): ``` Trie = {"value":"*start" "count":1 "children":["value":"t" "count":1 "children":["value":"e" "count":1 "children":[...] ``` I know that a recursion would be very useful as it is just adding letters to the list, however, I can't figure out how to construct the basic function and how to tell the computer to refer to the last part of the dictionary without writing out ``` Trie["children"]["children"]["children"]["children"] ``` a bunch of times. Can you guys please give me some ideas as to how to construct the function? --Thanks
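A recursion avoids the chained `Trie["children"]...` lookups by passing the current node down each call; a minimal sketch (note it swaps the question's list-of-children for a dict keyed by letter, which is an assumption on my part but makes each child lookup O(1)):

```python
def insert(node, word):
    # every node on the path counts the words passing through it
    node["count"] += 1
    if word:
        child = node["children"].setdefault(
            word[0], {"value": word[0], "count": 0, "children": {}}
        )
        insert(child, word[1:])

root = {"value": "*start", "count": 0, "children": {}}
for w in ("tea", "ten"):
    insert(root, w)
```

For autocomplete, the `count` on any node is then exactly the number of inserted words sharing that prefix.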
2020/02/13
[ "https://Stackoverflow.com/questions/60202828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11659038/" ]
Try this ``` mapply(function(x,y){paste(intersect(x,y),collapse=", ")}, strsplit(as.character(df$text),"\\, | "), strsplit(as.character(df$word),"\\, | ")) [1] "red, green" "red" "blue" ```
``` library(tidyverse) df %>% mutate(newcol = stringr::str_extract_all(text,gsub(", +","|",word))) country text word newcol 1 CA paint red green green, red, blue red, green 2 IN painting red red red 3 US painting blue red, blue blue ``` In this case, `newcol` is a list. To make it a string, we can do: ``` df%>% mutate(newcol = text %>% str_extract_all(gsub(", +", "|", word)) %>% invoke(toString, .)) ``` with data.table, you could do: ``` df[,id := .I][,newcol := do.call(toString,str_extract_all(text,gsub(', +',"|",word))), by = id][, id := NULL][] country text word newcol 1: CA paint red green green, red, blue red, green 2: IN painting red red red 3: US painting blue red, blue blue ```
8,267
12,794,357
I have 2 python scripts inside my c:\Python32 1)Tima\_guess.py which looks like this: ``` #Author:Roshan Mehta #Date :9th October 2012 import random,time,sys ghost =''' 0000 0 000---- 00000-----0-0 ----0000---0 ''' guess_taken = 0 print('Hello! What is your name?') name = input() light_switch = random.randint(1,12) print("Well, " + name + ", There are 12 switches and one of them turns on the Light.\nYou just need to guess which one is it in 4 guesses.Purely a luck,but i will help you to choose") print("Choose a switch,they are marked with numbers from 1-12.\nEnter the switch no.") while guess_taken < 4: try: guess = input() guess = int(guess) except: print("invalid literal,Plese enter an integer next time.") x = input() sys.exit(1) guess_taken = guess_taken + 1 guess_remain = 4 - guess_taken time.sleep(1) if guess < light_switch: print("The Light's switch is on right of your current choice.You have {} more chances to turn on the light.".format(guess_remain)) if guess > light_switch: print("The Light's switch is on left of your current choice.You have {} more chances to turn on the light.".format(guess_remain)) if guess == light_switch: print("Good,you are quiet lucky,You have turned on the light in {} chances.".format(guess_taken)) sys.exit(1) if guess != light_switch: print("Naah,You don't seems to be lucky enough,The switch was {}.".format(light_switch)) for i in range(3): time.sleep(2) print(ghost) print("The Devil in the room has just killed you....Ha ha ha") input() ``` 2)setup.py which look like this: ``` from cx_Freeze import setup, Executable setup( name = "Console game", version = "0.1", description = "Nothing!", executables = [Executable("Tima_guess.py")]) ``` When i run python setup.py build it creates an executable in build directory inside c:\Python32\build but when i run Tima_guess.exe it just shows a black screen and goes off instantly not even able to see the message it is throwing. Please help me to get a standalone executable of my Tima\_guess.py game. Regards. As according to Thomas suggestion when i explicitly go and run in the cmd by Tima\_guess.exe.I get the following error but still not able to make out what is wrong. ``` c:\Python32\build\exe.win32-3.2>Tima_guess.exe Traceback (most recent call last): File "c:\Python32\lib\site-packages\cx_Freeze\initscripts\Console3.py", line 2 7, in <module> exec(code, m.__dict__) File "Tima_guess.py", line 4, in <module> File "c:\Python32\lib\random.py", line 39, in <module> from warnings import warn as _warn File "C:\Python\32-bit\3.2\lib\warnings.py", line 6, in <module> File "C:\Python\32-bit\3.2\lib\linecache.py", line 10, in <module> File "C:\Python\32-bit\3.2\lib\tokenize.py", line 28, in <module> ImportError: No module named re c:\Python32\build\exe.win32-3.2> ```
2012/10/09
[ "https://Stackoverflow.com/questions/12794357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1716525/" ]
After building, add re.pyc to the library.zip file. To get re.pyc, all you need to do is run re.py successfully, then open `__pycache__` folder, then you will see a file like re.cpython-32.pyc, rename it to re.pyc and voila!
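The manual step above can be scripted; a sketch using `zipfile` in append mode (an in-memory zip stands in for cx_Freeze's `library.zip` here, and the bytecode payloads are placeholders, not real compiled modules):

```python
import io
import zipfile

# stand-in for build/exe.win32-3.2/library.zip
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("os.pyc", b"existing member")

# append the missing module, as the answer does by hand
with zipfile.ZipFile(buf, "a") as zf:
    zf.writestr("re.pyc", b"compiled re module bytes")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
```

With a real build you would open the on-disk `library.zip` in `"a"` mode and `zf.write("re.pyc")` the renamed file from `__pycache__`.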
**setup.py** ``` from cx_Freeze import setup, Executable build_exe_options = {"includes": ["re"]} setup( name = "Console game", version = "0.1", description = "Nothing!", options = {"build_exe": build_exe_options}, executables = [Executable("Tima_guess.py")]) ```
8,269
55,577,991
I am trying to install `fiona=1.6` but I get the following error ``` conda install fiona=1.6 WARNING: The conda.compat module is deprecated and will be removed in a future release. Collecting package metadata: done Solving environment: - The environment is inconsistent, please check the package plan carefully The following packages are causing the inconsistency: - conda-forge/noarch::flask-cors==3.0.7=py_0 - conda-forge/osx-64::blaze==0.11.3=py36_0 - conda-forge/noarch::flask==1.0.2=py_2 failed PackagesNotFoundError: The following packages are not available from current channels: - fiona=1.6 -> gdal==1.11.4 Current channels: - https://conda.anaconda.org/conda-forge/osx-64 - https://conda.anaconda.org/conda-forge/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org ``` If I try to install `gdal==1.11.4`, I get the following ``` conda install -c conda-forge gdal=1.11.4 WARNING: The conda.compat module is deprecated and will be removed in a future release. Collecting package metadata: done Solving environment: | The environment is inconsistent, please check the package plan carefully The following packages are causing the inconsistency: - conda-forge/noarch::flask-cors==3.0.7=py_0 - conda-forge/osx-64::blaze==0.11.3=py36_0 - conda-forge/noarch::flask==1.0.2=py_2 failed PackagesNotFoundError: The following packages are not available from current channels: - gdal=1.11.4 Current channels: - https://conda.anaconda.org/conda-forge/osx-64 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/osx-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/free/osx-64 - https://repo.anaconda.com/pkgs/free/noarch - https://repo.anaconda.com/pkgs/r/osx-64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. ``` This is the result of `conda info` ``` conda info active environment : base active env location : /anaconda3 shell level : 1 user config file : /Users/massaro/.condarc populated config files : /Users/massaro/.condarc conda version : 4.6.11 conda-build version : 3.17.8 python version : 3.6.8.final.0 base environment : /anaconda3 (writable) channel URLs : https://conda.anaconda.org/conda-forge/osx-64 https://conda.anaconda.org/conda-forge/noarch package cache : /anaconda3/pkgs /Users/massaro/.conda/pkgs envs directories : /anaconda3/envs /Users/massaro/.conda/envs platform : osx-64 user-agent : conda/4.6.11 requests/2.21.0 CPython/3.6.8 Darwin/17.5.0 OSX/10.13.4 UID:GID : 502:20 netrc file : None ```
2019/04/08
[ "https://Stackoverflow.com/questions/55577991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3590067/" ]
Python Versions =============== The [Conda Forge channel only has gdal v1.11.4 for Python 2.7, 3.4, and 3.5](https://anaconda.org/conda-forge/gdal/files?version=1.11.4). You either need to use a newer version of Fiona (current is 1.8) or make a new env that includes one of those older Python versions. For example, ``` conda create -n fiona_1_6 fiona=1.6 python=3.5 ``` Channel `defaults` is Required ============================== Another issue you face is that you have removed the `defaults` channel from your configuration (as per your `conda info`). It is impossible to install `fiona=1.6` with only the `conda-forge` channel. My recommendation would be to have both `conda-forge` and `defaults` in your configuration, but just set `conda-forge` to have higher priority (if that's what you want). You can do this like so, ``` conda config --append channels defaults ``` If you really don't want to include `defaults`, but just want a temporary workaround, then you can simply run the first command with a `--channels | -c` flag ``` conda create -n fiona_1_6 -c conda-forge -c defaults fiona=1.6 python=3.5 ``` This will still give `conda-forge` precedence, but allow missing dependencies to be sourced from `defaults`. Environment File ================ If you have more than just Fiona that you require, it may be cleaner to put together a requirements file, like so ### fiona\_1\_6.yaml ``` name: fiona_1_6 channels: - conda-forge - defaults dependencies: - python=3.5 - fiona=1.6 - osmnx ``` Then create the new environment with this: ``` conda env create -f fiona_1_6.yaml ```
Doing what the error message told me to, > > To search for alternate channels that may provide the conda package you're > looking for, navigate to <https://anaconda.org> > > > and typing in `gdal` in the search box led me to <https://anaconda.org/conda-forge/gdal> which has this installation instruction: > > `conda install -c conda-forge gdal=1.11.4` > > > Try that to install the `gdal` dependency, maybe?
8,270
23,871,680
I downloaded the git repo from the official link, ``` git clone git:// ``` and I ran `./configure && make && make install` where the `make install` returns with error: ``` LINK(target) /usr/local/bin/node/out/Release/node: Finished touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_header.stamp touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_provider.stamp touch /usr/local/bin/node/out/Release/obj.target/node_dtrace_ustack.stamp touch /usr/local/bin/node/out/Release/obj.target/node_etw.stamp touch /usr/local/bin/node/out/Release/obj.target/node_mdb.stamp touch /usr/local/bin/node/out/Release/obj.target/node_perfctr.stamp touch /usr/local/bin/node/out/Release/obj.target/specialize_node_d.stamp make[1]: Leaving directory `/usr/local/bin/node/out' ln -fs out/Release/node node #make install make -C out BUILDTYPE=Release V=1 make[1]: Entering directory `/usr/local/bin/node/out' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/usr/local/bin/node/out' ln -fs out/Release/node node /usr/bin/python tools/install.py install '' '/usr/local' installing /usr/local/bin/node Traceback (most recent call last): File "tools/install.py", line 202, in <module> run(sys.argv[:]) File "tools/install.py", line 197, in run if cmd == 'install': return files(install) File "tools/install.py", line 130, in files action(['out/Release/node'], 'bin/node') File "tools/install.py", line 79, in install def install(paths, dst): map(lambda path: try_copy(path, dst), paths) File "tools/install.py", line 79, in <lambda> def install(paths, dst): map(lambda path: try_copy(path, dst), paths) File "tools/install.py", line 70, in try_copy try_unlink(target_path) # prevent ETXTBSY errors File "tools/install.py", line 33, in try_unlink os.unlink(path) OSError: [Errno 21] Is a directory: '/usr/local/bin/node' make: *** [install] Error 1 ``` I'm really not familiar with this, what is the issue? 
I ran the commands with root, when I googled for the error, I only found permission problem topics but that not the case here.
2014/05/26
[ "https://Stackoverflow.com/questions/23871680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948292/" ]
You can use an injection interceptor. > > For EJB 3 Session Beans and Message-Driven Beans, Spring provides a > convenient interceptor that resolves Spring 2.5's @Autowired > annotation in the EJB component class: > org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor. > This interceptor can be applied through an `@Interceptors` > annotation in the EJB component class, or through an > interceptor-binding XML element in the EJB deployment descriptor. > > > Code example: ``` @Stateless @Interceptors(SpringBeanAutowiringInterceptor.class) public class Foo { @Autowired private Boo boo; } ``` For more info, see the reference [18.3.2. EJB 3 injection interceptor](http://docs.spring.io/spring/docs/2.5.x/reference/ejb.html) If you need to access the EJB from Spring, you can define the bean like the example below in your spring-context.xml configuration ``` <jee:local-slsb id="myComponent" jndi-name="ejb/fooBean" business-interface="com.Foo"/> ``` You can find more about it in the section *18.2.2. Accessing local SLSBs* of the reference above.
I understand your question as: you have a problem injecting a request-scoped bean into another bean using Spring. So try this: ``` <bean id="boo" class="Boo" scope="request"> <aop:scoped-proxy/> </bean> <bean id="foo" class="Foo"> <property name="boo" ref="boo" /> </bean> ```
8,271
13,549,699
I wish to mock a class with the following requirements: * The class has public read/write properties, defined in its `__init__()` method * The class has a public attribute which is auto-incremented on object creation * I wish to use `autospec=True`, so the class's API will be strictly checked on calls A simplified class sample: ``` class MyClass(): id = 0 def __init__(self, x=0.0, y=1.0): self.x = x self.y = y self.id = MyClass.id MyClass.id += 1 def calc_x_times_y(self): return self.x*self.y def calc_x_div_y(self, raise_if_y_not_zero=True): try: return self.x/self.y except ZeroDivisionError: if raise_if_y_not_zero: raise ZeroDivisionError else: return float('nan') ``` I need the mock object to behave as the original object, as far as properties are concerned: * It should auto-increment the id assigned to each newly-created mock object * It should allow access to its `x,y` properties But the mock method calls should be intercepted by the mock, and have their call signature validated What's the best way to go about this? **EDIT** I've already tried several approaches, including subclassing the `Mock` class, using `attach_mock()`, and `mock_add_spec()`, but always ran into some dead end. I'm using the standard [mock](http://www.voidspace.org.uk/python/mock/index.html#mock-mocking-and-testing-library) library.
2012/11/25
[ "https://Stackoverflow.com/questions/13549699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/499721/" ]
Since no answers are coming in, I'll post what worked for me (not necessarily the best approach, but here goes): I've created a mock factory which creates a `Mock()` object, sets its `id` property using the syntax described [here](http://www.voidspace.org.uk/python/mock/mock.html#mock.PropertyMock), and returns the object: ``` class MyClassMockFactory(): _id = 0 def get_mock_object(self, *args, **kwargs): mock = Mock(MyClass, autospec=True) self._attach_mock_property(mock, 'x', kwargs.get('x', 0.0)) self._attach_mock_property(mock, 'y', kwargs.get('y', 1.0)) self._attach_mock_property(mock, 'id', MyClassMockFactory._id) MyClassMockFactory._id += 1 return mock def _attach_mock_property(self, mock_object, name, value): p = PropertyMock(return_value=value) setattr(type(mock_object), name, p) ``` Now, I can patch the `MyClass()` constructor for my tests: ``` class TestMyClass(TestCase): mock_factory = MyClassMockFactory() @patch('MyClass', side_effect=mock_factory.get_mock_object) def test_my_class(self, *args): obj0 = MyClass() obj1 = MyClass(1.0, 2.2) obj0.calc_x_times_y() # Assertions obj0.calc_x_times_y.assert_called_once_with() self.assertEqual(obj0.id, 0) self.assertEqual(obj1.id, 1) ```
Sorry to dig up an old post, but something that would allow you to do precisely what you would like to achieve is to patch `calc_x_times_y` and `calc_x_div_y` and set `autospec=True` there, as opposed to mocking the creation of the entire class. Something like: ``` @patch('MyClass.calc_x_times_y', autospec=True) @patch('MyClass.calc_x_div_y', autospec=True) def test_foo(patched_div, patched_times): my_class = MyClass() #using real class to define attributes # ...rest of test ```
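For reference, a small self-contained sketch of the method-level patching idea above, using `unittest.mock` from the standard library; the class here is a trimmed-down stand-in for the `MyClass` in the question:

```python
from unittest import mock

class MyClass(object):
    id = 0
    def __init__(self, x=0.0, y=1.0):
        self.x = x
        self.y = y
        self.id = MyClass.id
        MyClass.id += 1
    def calc_x_times_y(self):
        return self.x * self.y

# Patch only the method: attributes (x, y, id) keep their real behaviour,
# while calls to calc_x_times_y go to an autospec'd mock that enforces
# the original signature.
with mock.patch.object(MyClass, 'calc_x_times_y', autospec=True) as m:
    obj = MyClass(2.0, 3.0)        # real __init__ runs, id auto-increments
    obj.calc_x_times_y()
    m.assert_called_once_with(obj)  # autospec passes self through
```

After the `with` block exits, the real method is restored, so the same objects behave normally again.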
8,272
58,016,261
So, I am trying to create linear functions in Python, such as `y = x`, without using `numpy.linspace()`. In my understanding, numpy.linspace() gives you an array, which is discontinuous. I am trying to find the intersection of `y = x` and a function that is unsolvable analytically (such as the one in the picture). Here is my code; I don't know how to define x. Is there a way to express y as a simple continuous function? ``` import random as rd import numpy as np a = int(input('choose a :')) eps = abs(float(input('choose epsilon :'))) b = 0 c = 10 x = ?????? y1 = x y2 = a*(1 - np.exp(x)) z = abs(y2 - y1) while z > eps : d = rd.uniform(b,c) c = d print(c) print(y1 , y2 ) ``` ![Here is a picture to describe what I am trying to do](https://i.stack.imgur.com/ZryMN.png)
2019/09/19
[ "https://Stackoverflow.com/questions/58016261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12091717/" ]
Since your functions are differentiable, you could use the [Newton-Raphson method](https://en.wikipedia.org/wiki/Newton%27s_method) implemented by `scipy.optimize`: ``` >>> scipy.optimize.newton(lambda x: 1.5*(1-math.exp(-x))-x, 10) 0.8742174657987283 ``` Computing the error is very straightforward: ``` >>> def f(x): return 1.5*(1-math.exp(-x)) ... >>> x = scipy.optimize.newton(lambda x: f(x)-x, 10) >>> error = f(x) - x >>> x, error (0.8742174657987283, -4.218847493575595e-15) ``` I've somewhat arbitrarily chosen x0=10 as the starting point. Some care needs to be taken here to make sure the method doesn't converge to x=0, which in your example is also a root.
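If you'd rather not depend on scipy, the same idea fits in a few lines of plain Python. The function and derivative below are just the a = 1.5 example from this answer, with the same starting point x0 = 10:

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Bare-bones Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# intersection of y = x and y = 1.5 * (1 - e^-x)
f = lambda x: 1.5 * (1 - math.exp(-x)) - x
df = lambda x: 1.5 * math.exp(-x) - 1

root = newton(f, df, 10)
```

The same caveat applies: depending on the starting point, the iteration can converge to the trivial root at x = 0 instead.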
“Not solvable analytically” means there is no closed-form solution. In other words, you can't write down a single answer on paper, like a number or equation, circle it and say “that's my answer.” For some math problems it's impossible to do so. Instead, for these kinds of problems, we can approximate the solution by running simulations and getting values or a graph of what the solution is.
8,273
15,040,884
I want to list all the keys stored in the memcached server. I googled for this and found some Python/PHP scripts that can list them. I tested them, but they all failed and none gave me the full set of keys. I can see thousands of keys using the telnet command ``` stats items ``` I used a Perl script that uses telnet to list keys, but that failed too. I mean, that script is listing keys, but not all of them. Do I need to reconfigure telnet? Is there any other way?
2013/02/23
[ "https://Stackoverflow.com/questions/15040884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1308498/" ]
memcache does not provide an api to exhaustively list all keys. "stats items" is as good as it gets to list the first 1M of keys. More info here: <http://www.darkcoding.net/software/memcached-list-all-keys/> Not sure if that helps you but redis (which could be considered a superset of memcache) provides a more comprehensive API for key listing and searching. You might want to give it a try.
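To illustrate what `stats items` actually gives you back, here is a hedged sketch of parsing its raw output (the sample text is made up; real output would come from telnet or a client's stats call). The slab ids it yields are what you would then feed to `stats cachedump <slab> <limit>`, which is the partial-listing trick the linked article describes:

```python
import re

# Hypothetical excerpt of `stats items` output from a memcached server
SAMPLE = """\
STAT items:1:number 3
STAT items:1:age 139
STAT items:2:number 10
STAT items:2:age 52
END"""

def item_counts(stats_text):
    """Map slab id -> number of items, parsed from raw `stats items` output."""
    counts = {}
    for line in stats_text.splitlines():
        m = re.match(r"STAT items:(\d+):number (\d+)", line.strip())
        if m:
            counts[int(m.group(1))] = int(m.group(2))
    return counts

slabs = item_counts(SAMPLE)
```

Even with this, the listing is best-effort only: memcached makes no guarantee that every key shows up.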
If you use python-memcached and would like to export all the items in a memcached server, I summarized two methods for this problem in this question: [Export all keys and values from memcached with python-memcache](https://stackoverflow.com/questions/5730276/export-all-keys-and-values-from-memcached-with-python-memcache)
8,275
3,224,924
Is there anything in python that lets me dump out a random object in such a way as to see its underlying data representation? I am coming from Perl where Data::Dumper does a reasonable job of letting me see how a data structure is laid out. Is there anything that does the same thing in python? Thanks!
2010/07/11
[ "https://Stackoverflow.com/questions/3224924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365530/" ]
Well `Dumper` in Perl gives you a representation of an object that can be `eval`'d by the interpreter to give you the original object. An object's `repr` in Python tries to do that, and sometimes it's possible. A `dict`'s `repr` or a `str`'s `repr` do this, and some classes like `datetime` and `timedelta` also do this. So `repr` is pretty much the equivalent of `Dumper`, but it's not pretty and doesn't show you the internals of an object. For that you can use `dir` and roll your own printer. Here's my shot at a printer that would not result in `eval`-able Python code and thus should be used to generate a string of the object instead: ``` def dump(obj): out = {} for attr in dir(obj): out[attr] = getattr(obj, attr) from pprint import pformat return pformat(out) class myclass(object): foo = 'foo' def __init__(self): self.bar = 'bar' def __str__(self): return dump(self) c = myclass() print c ``` In the above example, I've overridden the object's default `__str__` implementation. `__str__` is what gets called when you try to represent the object as a string, or format it using a string formatting function. BTW, `repr(obj)` is what you see when you evaluate `obj` at the interactive prompt; it invokes the `__repr__` method on that object (`print obj` goes through `__str__`). See [the Python documentation of `__repr__`](http://docs.python.org/reference/datamodel.html#object.__repr__) for more information on how to control the formatting of objects.
``` # this would print the object's __repr__ print "%r" % c # this would print the object's __str__ print "%s" % c ``` --- The output from the above code was ``` {'__class__': <class '__main__.myclass'>, '__delattr__': <method-wrapper '__delattr__' of myclass object at 0xb76deb0c>, '__dict__': {'bar': 'bar'}, '__doc__': None, '__format__': <built-in method __format__ of myclass object at 0xb76deb0c>, '__getattribute__': <method-wrapper '__getattribute__' of myclass object at 0xb76deb0c>, '__hash__': <method-wrapper '__hash__' of myclass object at 0xb76deb0c>, '__init__': <bound method myclass.__init__ of <__main__.myclass object at 0xb76deb0c>>, '__module__': '__main__', '__new__': <built-in method __new__ of type object at 0x82358a0>, '__reduce__': <built-in method __reduce__ of myclass object at 0xb76deb0c>, '__reduce_ex__': <built-in method __reduce_ex__ of myclass object at 0xb76deb0c>, '__repr__': <method-wrapper '__repr__' of myclass object at 0xb76deb0c>, '__setattr__': <method-wrapper '__setattr__' of myclass object at 0xb76deb0c>, '__sizeof__': <built-in method __sizeof__ of myclass object at 0xb76deb0c>, '__str__': <bound method myclass.__str__ of <__main__.myclass object at 0xb76deb0c>>, '__subclasshook__': <built-in method __subclasshook__ of type object at 0x896ad34>, '__weakref__': None, 'bar': 'bar', 'foo': 'foo'} ```
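A lighter-weight variant of the same idea: for plain data-holding objects, `vars()` (the instance `__dict__`) plus `pprint` is often the closest quick analogue to `Data::Dumper`, without the noise of every inherited attribute and method that `dir()` brings in:

```python
from pprint import pformat

class MyClass(object):
    foo = 'foo'              # class attribute: NOT in vars(instance)
    def __init__(self):
        self.bar = 'bar'     # instance attribute: shows up in vars()

c = MyClass()
dumped = pformat(vars(c))    # vars(c) is just c.__dict__
```

The trade-off is visible above: `vars()` only shows per-instance state, so class-level attributes like `foo` are omitted.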
After much searching about for this exactly myself, I came across this Dumper equivalent which I typically import now. <https://salmon-protocol.googlecode.com/svn-history/r24/trunk/salmon-playground/dumper.py>
8,276
9,595,009
What is the difference between [`warnings.warn()`](https://docs.python.org/library/warnings.html#warnings.warn) and [`logging.warn()`](https://docs.python.org/library/logging.html#logging.Logger.warning) in terms of what they do and how they should be used?
2012/03/07
[ "https://Stackoverflow.com/questions/9595009", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84952/" ]
I agree with the other answer -- `logging` is for logging and `warning` is for warning -- but I'd like to add more detail. Here is a tutorial-style HOWTO taking you through the steps in using the `logging` module. <https://docs.python.org/3/howto/logging.html> It directly answers your question: > > warnings.warn() in library code if the issue is avoidable and the > client application should be modified to eliminate the warning > > > logging.warning() if there is nothing the client application can do > about the situation, but the event should still be noted > > >
Besides the [canonical explanation in official documentation](https://docs.python.org/2/howto/logging.html#when-to-use-logging) > > warnings.warn() in library code if the issue is avoidable and the client application should be modified to eliminate the warning > > > logging.warning() if there is nothing the client application can do about the situation, but the event should still be noted > > > It is also worth noting that, by default `warnings.warn("same message")` will show up only once. That is a major noticeable difference. Quoted from [official doc](https://docs.python.org/2/library/warnings.html) > > Repetitions of a particular warning for the same source location are typically suppressed. > > > ``` >>> import warnings >>> warnings.warn("foo") __main__:1: UserWarning: foo >>> warnings.warn("foo") >>> warnings.warn("foo") >>> >>> import logging >>> logging.warn("bar") WARNING:root:bar >>> logging.warn("bar") WARNING:root:bar >>> logging.warn("bar") WARNING:root:bar >>> >>> >>> warnings.warn("fur") __main__:1: UserWarning: fur >>> warnings.warn("fur") >>> warnings.warn("fur") >>> ```
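The once-per-location suppression described above can be verified programmatically. A small sketch using `catch_warnings(record=True)`; the count depends on the filter action, which is exactly the point:

```python
import warnings

def count_shown(action, times=5):
    """Emit the same warning `times` times from one source line and
    count how many actually get through the given filter action."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter(action)
        for _ in range(times):
            warnings.warn("same message")   # same source location each time
    return len(caught)
```

With `"always"` every emission is delivered; with `"default"` repeats from the same source location are suppressed, mirroring the interactive session shown in the answer.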
8,277
2,248,699
Is there something like twisted (python) or eventmachine (ruby) in .net land? Do I even need this abstraction? I am listening to a single IO device that will be sending me events for three or four analog sensors attached to it. What are the risks of simply using a looped `UdpClient`? I can't miss any events, but will the ip stack handle the queuing of messages for me? Does all of this depend on how much work the thread tries to do once I receive a message? What I'm looking for in an abstraction is to remove the complication of threading and synchronization from the problem.
2010/02/11
[ "https://Stackoverflow.com/questions/2248699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804/" ]
I think you are making it too complicated. Just have 1 UDP socket open, and set an async callback on it. For every incoming packet, put it in a queue and set the callback again. That's it. Make sure that when queuing and dequeueing you set a lock on the queue. It's as simple as that, and performance will be great. R
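The callback-plus-queue pattern this answer describes is language-agnostic; sketched here in Python (with a plain function standing in for the socket's async receive callback), where `queue.Queue` does the locking for you:

```python
import queue
import threading

packets = queue.Queue()      # thread-safe: put/get are internally locked

def on_datagram(data):
    """What the async receive callback should do: enqueue and return fast."""
    packets.put(data)

def worker(results):
    """Consumer thread: drain the queue and do the real per-packet work."""
    while True:
        item = packets.get()
        if item is None:                 # sentinel -> shut down
            break
        results.append(item.upper())     # stand-in for real processing

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for datagram in ["sensor1:42", "sensor2:17", "sensor3:99"]:
    on_datagram(datagram)                # simulate three incoming packets
packets.put(None)
t.join()
```

The key property is that the receive path never blocks on processing: how much work the consumer does per packet no longer affects whether packets are accepted, only how far the queue backs up.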
I would recommend [ICE](http://www.zeroc.com) it's a communication engine that will abstract threading and communication to you (documentation is kind of exhaustive).
8,279
57,087,455
I need to compare data in two tables. These tables are similar in schema but will have different data values. I want to export these data to CSV or a similar format and then check for differences. I would like to perform this check with a Python script. I have already figured out how to export the data to CSV format. But my problem is that, since the two tables are not in sync, the primary keys may be different for the same row. Also, the row order may be different in the two tables. A plain CSV compare will not help me in this respect. Example database tables in CSV format are below id,name,designation,department Table employee in db1 --------------------- 1,Ann,Manager,Sales 2,Brian,Executive,Marketing 4,Melissa,Director,Engineering 5,George,Manager,Plant Table employee in db2 --------------------- 1,Ann,Manager,Sales 2,George,Manager,Plant 3,Brian,Executive,Marketing Here Melissa is a missing record in the second DB. But George and Brian, even though they have different ids, are considered the same record. I've found that there is commercial software for this task, but what I need is a script that can be used in a process flow to identify the differences in the tables.
2019/07/18
[ "https://Stackoverflow.com/questions/57087455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4817150/" ]
I finally managed to get it working by adding `contentContainerStyle={{borderRadius: 6, overflow: 'hidden'}}` to the FlatList.
Recreated the structure, and for me it's working fine with border radius Snack link: <https://snack.expo.io/@msbot01/disrespectful-chocolate> ``` <View style={styles.container}> <ImageBackground source={{uri: 'https://artofislamicpattern.com/wp-content/uploads/2012/10/3.jpg'}} style={{width: '100%', height: '100%',opacity:0.8, alignItems:'center', justifyContent:'center'}}> <FlatList data={[{key: 'a', value: 'Australia'}, {key: 'b', value:'Canada'}]} extraData={this.state} keyExtractor={this._keyExtractor} renderItem={this._renderItem} style={{backgroundColor:'white', width:'90%', borderRadius:10, margin:10, marginBottom:10, paddingTop:10, paddingBottom:10, paddingLeft:10, position:'absolute', zIndex: 1}} /> </ImageBackground> </View> ``` [![snapshot](https://i.stack.imgur.com/LJWW8.png)](https://i.stack.imgur.com/LJWW8.png)
8,282
30,950,941
I have my Django app set up on Elastic Beanstalk and recently made a change to the DB that I would like to have applied to the live DB now. I understand that I need to set this up as a container command, and after checking the DB I can see that the migration was run, but I can't figure out how to have more control over the migration. For example, I only want a migration to run when necessary, but from my understanding, the container will run the migration on every deploy, assuming the command is still listed in the config file. Also, on occasion, I will be given options during a migration such as: ``` Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no' ``` How do I set up the container command to respond to this with a `yes` during the deployment phase? This is my current config file ``` container_commands: 01_migrate: command: 'source /opt/python/run/venv/bin/activate && python app/manage.py makemigrations' command: 'source /opt/python/run/venv/bin/activate && python app/manage.py migrate' ``` Is there a way to set these 2 commands to only run when necessary and to respond to the yes/no options I receive during a migration?
2015/06/20
[ "https://Stackoverflow.com/questions/30950941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2989731/" ]
Make sure that the same settings are used when migrating and running! Thus I would recommend you change this kind of code in ***django.config*** ```yaml container_commands: 01_migrate: command: "source /opt/python/run/venv/bin/activate && python manage.py migrate" leader_only: true ``` to: ```yaml container_commands: 01_migrate: command: "django-admin migrate" leader_only: true option_settings: aws:elasticbeanstalk:application:environment: DJANGO_SETTINGS_MODULE: fund.productionSettings ``` as recommended [here](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html). This will help you avoid issues with **wrong settings** being used. [More](https://docs.djangoproject.com/en/2.1/ref/django-admin/) on ***manage.py*** vs. ***django-admin.py***.
In reference to Oscar Chen answer, you can set environmental variables using eb cli with ``` eb setenv key1=value1 key2=valu2 ...etc ```
8,284
57,901,995
I have a Dockerfile which looks like this: ``` FROM python:3.7-slim-stretch ENV PIP pip RUN \ $PIP install --upgrade pip && \ $PIP install scikit-learn && \ $PIP install scikit-image && \ $PIP install rasterio && \ $PIP install geopandas && \ $PIP install matplotlib COPY sentools sentools COPY data data COPY vegetation.py . ``` Now, in my project I have two Python files, vegetation and forest. I have kept each of them in a separate folder. How can I create separate Docker images for both Python files and execute the containers for them separately?
2019/09/12
[ "https://Stackoverflow.com/questions/57901995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11310755/" ]
If the base code is the same and the container is only supposed to run a different Python script, then I suggest using a single Docker image, so you will not have to manage two images. Set `vegetation.py` as the default: when the container starts without the ENV being passed it will run `vegetation.py`, and if the ENV `FILE_TO_RUN` is overridden at run time, the specified file will be run instead. ``` FROM python:3.7-alpine3.9 ENV FILE_TO_RUN="/vegetation.py" COPY vegetation.py /vegetation.py CMD ["sh", "-c", "python $FILE_TO_RUN"] ``` Now, if you want to run `forest.py`, you can just pass the file path via the ENV. ``` docker run -it -e FILE_TO_RUN="/forest.py" --rm my_image ``` or ``` docker run -it -e FILE_TO_RUN="/anyfile_to_run.py" --rm my_image ``` **Updated:** You can also manage this with ARG + ENV in your Docker image. ``` FROM python:3.7-alpine3.9 ARG APP="default_script.py" ENV APP=$APP COPY $APP /$APP CMD ["sh", "-c", "python /$APP"] ``` Now build with ARGs ``` docker build --build-arg APP="vegetation.py" -t app_vegetation . ``` or ``` docker build --build-arg APP="forest.py" -t app_forest . ``` Now it is good to run ``` docker run --rm -it app_forest ``` Or copy both files: ``` FROM python:3.7-alpine3.9 # assign some default script name to args ARG APP="default_script.py" ENV APP=$APP COPY vegetation.py /vegetation.py COPY forest.py /forest.py CMD ["sh", "-c", "python /$APP"] ```
If you insist on creating separate images, you can always use the [ARG](https://docs.docker.com/engine/reference/builder/#arg) command. ``` FROM python:3.7-slim-stretch ARG file_to_copy ENV PIP pip RUN \ $PIP install --upgrade pip && \ $PIP install scikit-learn && \ $PIP install scikit-image && \ $PIP install rasterio && \ $PIP install geopandas && \ $PIP install matplotlib COPY sentools sentools COPY data data COPY $file_to_copy . ``` And then build the image like this: ``` docker build --build-arg file_to_copy=vegetation.py . ``` or like this ``` docker build --build-arg file_to_copy=forest.py . ```
8,294
18,238,558
I am new to the Python language. My problem is that I have two Python scripts: an automation script A and a main script B. Script A internally calls script B. Script B exits via sys.exit(1) whenever an exception is caught. Now, whenever script B exits, it results in script A exiting as well. Is there any way to keep script A from exiting and continue the rest of its execution, even if script B exits? Thanks in advance.
2013/08/14
[ "https://Stackoverflow.com/questions/18238558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2286286/" ]
You should encapsulate the code in a try except block. That will catch your exception, and continue executing script A.
`sys.exit()` actually raises a `SystemExit` exception which is caught and handled by the Python interpreter. All you have to do is put the call into to "script B" into a try/except block that catches `SystemExit` before it bubbles all the way up. For example: ``` try: script_b.do_stuff() except SystemExit as e: print('Script B exited with return code {0}'.format(e.code)) ```
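To make that concrete, here is a runnable sketch with both "scripts" as functions in one file; in the real setup, `script_b` would be the imported module's entry point:

```python
import sys

def script_b():
    # somewhere deep inside script B, an exception is caught and it bails out
    sys.exit(1)

def script_a():
    try:
        script_b()
    except SystemExit as e:
        # SystemExit is an ordinary exception; catching it keeps script A alive
        return "script B exited with code {0}".format(e.code)
    return "script B finished normally"

outcome = script_a()
```

Note this only applies when script A imports and calls script B in the same process. If A launches B as a subprocess, B's `sys.exit()` ends only the child, and A would instead inspect the child's return code.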
8,297
63,867,203
I wrote some code in Python to see how many times one number can be divided by another number until it gets below a value of one. ``` counter_var = 1 quotient = num1/num2 if quotient<1: print('1 time') else: while quotient >= 1: quotient = num1/num2 counter_var = counter_var + 1 print(counter_var) ``` It does not end the process, but neither does it give any output.
2020/09/13
[ "https://Stackoverflow.com/questions/63867203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14108602/" ]
You are not changing the value of quotient inside the while loop; it remains constant. Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
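Putting that fix into a complete, runnable version of the counting loop (semantics assumed from the question: count how many divisions by num2 it takes for the running quotient to drop below 1; this assumes num2 > 1, otherwise the loop never terminates):

```python
def division_count(num1, num2):
    """Count divisions by num2 until the running quotient drops below 1."""
    count = 0
    quotient = num1
    while quotient >= 1:
        quotient /= num2     # update the quotient on every pass
        count += 1
    return count
```

For example, starting from 8 and dividing by 2 gives 4, 2, 1, 0.5, i.e. four divisions before the value falls below 1.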
Well, to start, you're missing assignments to num1 and num2. And in the case of num1 > num2, you will enter an endless while loop, and hence you will never get to the `print(counter_var)` line.
8,298
3,300,716
I'm attempting to use MySQL after only having worked with SQLite in the past. I've installed `XAMPP` on Linux (Ubuntu) and have `mysql` up and running fine (it seems that way from phpMyAdmin, at least). However, I'm having trouble getting MySQLdb (the Python lib) working (installed using apt). To be exact: ```py >>> import MySQLdb >>> db = MySQLdb.connect(host="localhost",db="opfine") Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init_ ... super(Connection, self).__init__(*args, **kwargs2) ``` > > OperationalError: (2002, "Can't connect to local MySQL server through > socket '/var > /run/mysqld/mysqld.sock' (2)") > > > I'm guessing > > Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock > > > means it's expecting some sort of local installation (i.e. not within `XAMPP`), but I can't figure out how to go about modifying this to get it to work with the `XAMPP` flavor of `mysql`. Help is much appreciated!
2010/07/21
[ "https://Stackoverflow.com/questions/3300716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264875/" ]
For the record (and thanks to a pointer from Ignacio), I found that the below works (terrible that I didn't think of this before): ``` db=MySQLdb.connect( user="root" ,passwd="" ,db="my_db" ,unix_socket="/opt/lampp/var/mysql/mysql.sock") ```
It means that you didn't start the MySQL server, or it's configured to not use a domain socket.
8,303
53,798,252
I'm fairly new to Python and attempting to add lines 1-10 of a CSV into a JSON file; however, I only seem to be getting the 10th line of the CSV. I can't seem to figure out what is incorrect about my argument. Any help appreciated! ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csv.DictReader(csvFile), 0,10): data = csvRow print(data) #Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ```
2018/12/15
[ "https://Stackoverflow.com/questions/53798252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796111/" ]
At `data = csvRow`, the `data` variable keeps getting overwritten, so at the end only the last line you read will be inside `data`. Try something like this: ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csvReader, 0, 10): #email = csvRow["email"] data[len(data)] = csvRow print(data) # Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ``` (Didn't test this, but the idea is to add `csvRow` as new elements of the dict `data`)
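The idea above, demonstrated end-to-end on an in-memory CSV so it is easy to verify. File paths are swapped for `io.StringIO`, the rows are accumulated in a list rather than overwritten, and the final dump serializes the whole collection (the sample data is made up):

```python
import csv
import io
import itertools
import json

CSV_TEXT = "id,email\n1,a@example.com\n2,b@example.com\n3,c@example.com\n"

def first_rows_as_json(csv_text, limit=10):
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = list(itertools.islice(reader, limit))  # keep ALL rows, not just the last
    return json.dumps(rows, indent=4)

as_json = first_rows_as_json(CSV_TEXT, 10)
```

With a real file, only the `io.StringIO(...)` part changes back to an open file handle.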
Assuming that the input CSV is ``` 1,2,3,4,5 a,b,c,d,e ``` We have the following code: ``` import json import csv inpf = open("test.csv", "r") csv_reader = csv.reader(inpf) # here you slice the columns with [2:4] for example lines = [row[2:4] for row in csv_reader] inpf.close() lines_json = json.dumps(lines) outpf = open("out.json", "w") outpf.write(lines_json) outpf.close() ``` which creates ``` [ [ "3", "4" ], [ "c", "d" ] ] ```
8,306
43,732,642
I need the status of a task, e.g. whether it is running, up_for_retry, or failed, from within the same DAG. So I tried to get it using the code below, though I got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on Airflow completes. The question is: **how do I get the state of a task, i.e. whether it is running, failed, or succeeded?**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
Okay, I think I know what you're doing and I don't really agree with it, but I'll start with an answer. A straightforward, but hackish, way would be to query the task\_instance table. I'm in postgres, but the structure should be the same. Start by grabbing the task\_ids and state of the task you're interested in with a db call. ``` SELECT task_id, state FROM task_instance WHERE dag_id = '<dag_id_attrib>' AND execution_date = '<execution_date_attrib>' AND task_id = '<task_to_check>' ``` That should give you the state (and name, for reference) of the task you're trying to monitor. State is stored as a simple lowercase string.
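To show the shape of that query without a live Airflow metadata DB, here is a sketch against an in-memory SQLite table mimicking the relevant `task_instance` columns (column names taken from the answer; the real schema has many more fields, and the dag/task names below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_instance
                (dag_id TEXT, task_id TEXT, execution_date TEXT, state TEXT)""")
conn.executemany(
    "INSERT INTO task_instance VALUES (?, ?, ?, ?)",
    [("my_dag", "test_sleep", "2017-05-01", "running"),
     ("my_dag", "other_task", "2017-05-01", "success")])

# The query from the answer, parameterised
row = conn.execute(
    """SELECT task_id, state FROM task_instance
       WHERE dag_id = ? AND execution_date = ? AND task_id = ?""",
    ("my_dag", "2017-05-01", "test_sleep")).fetchone()
```

Against the real metadata database you would of course use the actual connection (e.g. psycopg2 for Postgres), but the WHERE clause is the same.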
You can use the command-line interface for this: ``` airflow task_state [-h] [-sd SUBDIR] dag_id task_id execution_date ``` For more on this you can refer to the official Airflow documentation: <http://airflow.incubator.apache.org/cli.html>
8,308
12,125,362
In a [previous question](https://stackoverflow.com/questions/12124275/splitting-a-string-by-capital-letters-python), it was suggested that, in order to divide a string and store it, I should use a list, like so: ``` [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] ['Mg', u'S', u'O', u'4'] ``` What I'd like to ask this time around is how would I be able to use that to store the different strings created into variables so I can look them up in the CSV file I have, if it's at all possible. Where it says 'MgSO4' would be coming from a variable called 'formula', which is produced from a raw_input, like so: ``` formula = raw_input("Enter formula: ") ``` Full program code can be found [here](http://pastebin.com/3G761hb0), and I've included the more relevant part below. Thanks in advance for any help! ``` formula = raw_input("Enter formula: ") [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] weight_sum = sum(float(formul_data.get(elem.lower())) for elem in elements) print "Total weight =", weight_sum ```
2012/08/25
[ "https://Stackoverflow.com/questions/12125362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1423819/" ]
If your goal is to be able to add up the molecular weights of the atoms comprising a molecule, I suggest doing your regular expressions a bit differently. Instead of having the numbers mixed in with the element symbols in your split list, attach them to the preceding element instead (and attach a 1 if there was no number). Here's how I'd do that: ``` import re # a partial table of rounded atomic weights, replace with something better! weights = { "H" : 1, "Na": 23, "Mg": 24, "C" : 12, "N" : 14, "O" : 16, "F" : 19, "Al": 27, "S" : 32, "Cl": 35, } def molecularWeight(formula): matches = re.findall(r"([A-Z][a-z]?)([0-9]*)", formula) return sum(weights[symbol] * (int(count) if count else 1) for (symbol, count) in matches) ``` To make this fit with the code you've shown, replace `weights[symbol]` with something like `formul_data.get(symbol.lower(), 0)` (or whatever is necessary to get appropriate atomic weights by symbol in your code). This should handle any empiric formula, and many structural ones, as long as there are no parentheses. To solve fully parenthesized formulas you'll need to make a better parser, as simple regular expressions won't work.
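The same approach condensed into a runnable, self-contained snippet (atomic weights rounded to integers purely for illustration — use precise values in real code):

```python
import re

# Rounded integer atomic weights, for illustration only.
weights = {"H": 1, "C": 12, "O": 16, "Mg": 24, "S": 32}

def molecular_weight(formula):
    # Each match is (element symbol, optional count); an empty count means 1.
    matches = re.findall(r"([A-Z][a-z]?)([0-9]*)", formula)
    return sum(weights[symbol] * (int(count) if count else 1)
               for symbol, count in matches)

print(molecular_weight("MgSO4"))  # 24 + 32 + 4*16 = 120
print(molecular_weight("H2O"))    # 2*1 + 16 = 18
```

Note the pattern uses `[a-z]?` (at most one lowercase letter), which covers all real element symbols, unlike the `[a-z]*` in the question's split.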
After running ``` >>> import re >>> elements = [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] ``` you can access the split parts using indices ``` >>> print elements[0] 'Mg' >>> print elements[-1] # print the last element '4' ```
8,313
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. 
``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I got it working with the same main.py on Fedora 20, and it also works if I install pygame, but pygame is too heavy, so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
Actually `sprintf` didn't work for me, so if you don't mind a common dependency: ``` #reproducible example -- this happens with zip codes sometimes X <- data.frame(A = c('10002','8540','BIRD'), stringsAsFactors=FALSE) # X$A <- sprintf('%05s',X$A) didn't work for me # Note in ?sprintf: 0: For numbers, pad to the field width with leading zeros. # For characters, this zero-pads on some platforms and is ignored on others. library('stringr') X$A <- str_pad(X$A, width=5, side='left', pad='0') X # A #1 10002 #2 08540 #3 0BIRD ``` or, if you prefer a base solution, the following is equivalent: ``` X$A <- vapply(X$A, function(a) paste(c(rep("0", max(0, 5 - nchar(a))), a), collapse = ""), character(1)) ``` (note this is applied element by element, so it works on strings of any length below 5, not just 4; a vectorized `ifelse`/`rep` version fails here because `rep` and `paste(collapse=)` don't operate per element)
Try something like this (assuming data frame name and column name are right): ``` element_of_X$a <- with(element_of_X, ifelse(nchar(a) == 4, paste('0', a, sep = ''), a)) ```
8,315
46,050,045
I would like to run a bigquery query from python only if it is below a certain cost estimation. Is there a way to programmatically check the estimated cost of a query before executing it, just like the Web UI (see attached image)? [![enter image description here](https://i.stack.imgur.com/UWdbL.png)](https://i.stack.imgur.com/UWdbL.png)
2017/09/05
[ "https://Stackoverflow.com/questions/46050045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1134753/" ]
Yes, you can use the `dryRun` flag. This will return `totalBytesProcessed` i.e. the amount of data that will be processed if the query is executed. <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.dryRun> [![enter image description here](https://i.stack.imgur.com/Rd2lO.png)](https://i.stack.imgur.com/Rd2lO.png)
> > I would like to run a bigquery query from python only if it is below a certain cost estimation > > > First, please note - BigQuery UI in fact uses DryRun which only estimates `Total Bytes Processed` leaving another important factor `Billing Tier` unknown. Use of DryRun of course useful and can help in certain scenarios! Meantime, I can propose using below two attributes [`configuration.query.maximumBillingTier`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBillingTier) and [`configuration.query.maximumBytesBilled`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBytesBilled) They respectively limit the billing tier and bytes billed for the job Queries that have resource usage beyond max tier or have bytes billed beyond max bytes will fail (`without incurring a charge`)
8,324
59,077,162
I am using Python 3.8 and Pip 3.8 I cannot seem to install certain modules using pip. For example, when attempting to install the keras module: ``` (venv) C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer>pip install keras Collecting keras Using cached https://files.pythonhosted.org/packages/ad/fd/6bfe87920d7f4fd475acd28500a42482b6b84479832bdc0fe9e589a60ceb/Keras-2.3.1-py2.py3-none-any.whl Collecting h5py (from keras) Using cached https://files.pythonhosted.org/packages/5f/97/a58afbcf40e8abecededd9512978b4e4915374e5b80049af082f49cebe9a/h5py-2.10.0.tar.gz Collecting keras-preprocessing>=1.0.5 (from keras) Using cached https://files.pythonhosted.org/packages/28/6a/8c1f62c37212d9fc441a7e26736df51ce6f0e38455816445471f10da4f0a/Keras_Preprocessing-1.1.0-py2.py3-none-any.whl Collecting numpy>=1.9.1 (from keras) Using cached https://files.pythonhosted.org/packages/ff/59/d3f6d46aa1fd220d020bdd61e76ca51f6548c6ad6d24ddb614f4037cf49d/numpy-1.17.4.zip Collecting six>=1.9.0 (from keras) Using cached https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl Collecting pyyaml (from keras) Using cached https://files.pythonhosted.org/packages/29/16/e4d675da1275a3aabd5e2a35e868273ba3f4859993acb55e77792f806315/PyYAML-5.1.2-cp38-cp38m-win32.whl Collecting scipy>=0.14 (from keras) Using cached https://files.pythonhosted.org/packages/a7/5c/495190b8c7cc71977c3d3fafe788d99d43eeb4740ac56856095df6a23fbd/scipy-1.3.3.tar.gz Installing build dependencies ... 
error Complete output from command "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\Scripts\python.exe" "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\lib\site-packages\pip-19.0.3-py3.8.egg\pip" install --ignore-installed --no-user --prefix "C:\U sers\Spencer Pruitt\AppData\Local\Temp\pip-build-env-qcebzlj8\overlay" --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- wheel setuptools Cython>=0.29.13 "numpy==1.13.3; python_version=='3.5' and platform_system!='AIX'" "n umpy==1.13.3; python_version=='3.6' and platform_system!='AIX'" "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'" "numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'" "numpy ==1.16.0; python_version=='3.6' and platform_system=='AIX'" "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'": Ignoring numpy: markers 'python_version == "3.5" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.6" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.5" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.6" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version >= "3.8" and platform_system == "AIX"' don't match your environment Collecting wheel Using cached https://files.pythonhosted.org/packages/00/83/b4a77d044e78ad1a45610eb88f745be2fd2c6d658f9798a15e384b7d57c9/wheel-0.33.6-py2.py3-none-any.whl Collecting setuptools Using cached 
https://files.pythonhosted.org/packages/9e/d5/444a443d890f09fc1ca1a2c3c9fc7e84cb148177b05ac94fe5084e3d9abb/setuptools-42.0.1-py2.py3-none-any.whl Collecting Cython>=0.29.13 Using cached https://files.pythonhosted.org/packages/9c/9b/706dac7338c2860cd063a28cdbf5e9670995eaea408abbf2e88ba070d90d/Cython-0.29.14.tar.gz Collecting numpy==1.17.3 Using cached https://files.pythonhosted.org/packages/b6/d6/be8f975f5322336f62371c9abeb936d592c98c047ad63035f1b38ae08efe/numpy-1.17.3.zip Installing collected packages: wheel, setuptools, Cython, numpy Running setup.py install for Cython: started Running setup.py install for Cython: finished with status 'done' Running setup.py install for numpy: started Running setup.py install for numpy: still running... Running setup.py install for numpy: finished with status 'done' Could not install packages due to an EnvironmentError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"C:' ---------------------------------------- Command ""C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\Scripts\python.exe" "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\lib\site-packages\pip-19.0.3-py3.8.egg\pip" install --ignore-installed --no-user --prefix "C:\Users\Spencer Pruitt\Ap pData\Local\Temp\pip-build-env-qcebzlj8\overlay" --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- wheel setuptools Cython>=0.29.13 "numpy==1.13.3; python_version=='3.5' and platform_system!='AIX'" "numpy==1.13.3; python_v ersion=='3.6' and platform_system!='AIX'" "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'" "numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'" "numpy==1.16.0; python_versi on=='3.6' and platform_system=='AIX'" "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'"" failed with error code 1 
in None ``` A similar problem occurs when I attempted to install numpy. ``` (venv) C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer>pip install numpy Collecting numpy Using cached https://files.pythonhosted.org/packages/ff/59/d3f6d46aa1fd220d020bdd61e76ca51f6548c6ad6d24ddb614f4037cf49d/numpy-1.17.4.zip Installing collected packages: numpy Running setup.py install for numpy ... done Could not install packages due to an EnvironmentError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"C:' ``` **Expected behavior** The installation process would complete satisfactorily and the command prompt would read "[module name] is ready to use!" or something to that effect. **Misc.** I am new to programming and am not familiar with the terminology being thrown around here. I am entirely self taught and this is my first time using Python. I would sincerely appreciate someone giving me a run-down of how to install these modules properly or at least get pip to work. I've been looking at other issues, but the answers do not make sense to me or the problems do not appear to be the same as mine. Thank you!
2019/11/27
[ "https://Stackoverflow.com/questions/59077162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12447974/" ]
Some packages may not yet support your Python version. Check which Python version the module/package requires — for this numpy installation, use a Python version below 3.7 — and upgrade or downgrade Python accordingly so it works with the packages.
See [this thread](https://github.com/numpy/numpy/issues/11451) re: spaces in path causing issues with installing numpy. Any possibility of moving your virtual environment/project to something like "C:\Temp\MNIST\_Analyzer"? [This thread](https://stackoverflow.com/questions/15472430/using-virtualenv-with-spaces-in-a-path) for getting around spaces in the path is for Mac, but may be relevant.
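A quick way to spot this situation up front — the `'"C:'` fragment in the error is exactly what a quoted path looks like after being split on its spaces — is a trivial check like:

```python
def pip_build_path_is_risky(path):
    """Spaces in the interpreter/venv path are known to break pip's
    build-isolation subcommands on some versions (see the numpy issue above):
    the quoted path gets split, leaving fragments like '"C:'."""
    return " " in path

print(pip_build_path_is_risky(r"C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer"))  # True
print(pip_build_path_is_risky(r"C:\Temp\MNIST_Analyzer"))  # False
```

If the check is `True`, recreate the virtualenv under a space-free path before installing build-heavy packages.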
8,325
28,664,632
This is my project set up: ``` my_project ./my_project ./__init__.py ./foo ./__init__.py ./bar.py ./tests ./__init__.py ./test_bar.py ``` Inside `test_bar.py` I have the following import statement: `from foo import bar` However when I run `python /my_project/tests/test_bar.py` I get this error: `ImportError: No module named foo`. Any ideas on how to fix this?
2015/02/22
[ "https://Stackoverflow.com/questions/28664632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680879/" ]
``` import sys sys.path.append('/path/to/my_project/') ``` Now you can import ``` from foo import bar ```
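To see the whole thing work end to end, here is a self-contained demo that builds the `my_project/foo/bar.py` layout in a temporary directory (standing in for the real project root) and then imports it exactly as above:

```python
import os
import sys
import tempfile

# Build my_project/foo/bar.py in a temp dir (stand-in for the real project root).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "foo"))
open(os.path.join(root, "foo", "__init__.py"), "w").close()
with open(os.path.join(root, "foo", "bar.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path.append(root)  # the line suggested above

from foo import bar
print(bar.ANSWER)  # 42
```

The key point: `import foo` only works if the directory *containing* `foo` (not `foo` itself) is on `sys.path`.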
You can use relative imports: ``` from ..foo import bar ``` <https://docs.python.org/2/whatsnew/2.5.html#pep-328-absolute-and-relative-imports> but I also think that using absolute imports, by [installing](https://docs.python.org/2/distutils/setupscript.html) your project in a venv, is the better way.
8,326
62,126,379
Very sorry in advance for the long paste. The code is straight from the text. It may be due to class `Scene`, that seems to have the instruction to: subclass it and implement enter(). But I don't know what that means. ```py from sys import exit from random import randint from textwrap import dedent class Scene(object): def enter(self): print("This scene is not yet configured.") print("Subclass it and implement enter()") exit(1) # Skeleton code: a base class for Scene that will have # the common things that all scenes do. class Engine(object): def __init__(self, scene_map): self.scene_map = scene_map def play(self): current_scene = self.scene_map.opening_scene() last_scene = self.scene_map.next_scene('finished') while current_scene != last_scene: next_scene_name = current_scene.enter() current_scene = self.scene_map.next_scene(next_scene_name) # be sure to print out the last scene current_scene.enter() class Death(Scene): quips = [ "You died. You kinda suck at this.", "Your Mom would be proud...if she were smarter.", "Such a luser.", "I have a small puppy that's better at this.", "You're worse than your Dad's jokes." ] def enter(self): print(Death.quips[randint(0, len(self.quips)-1)]) class CentralCorridor(Scene): def enter(self): print(dedent(""" The Gothons of Planet Percal #25 have invaded your ship and destroyed your entire crew. You are the last surviving member member and your last mission is to get the neutron destruct bomb from the Weapons Armory, put it in the bridge, and blow the ship up after getting into an escape pod. You're running down the central corridor to the Weapons Armory when a Gothon jumps out, red scaly skin, dark grimy teeth, and evil clown costume flowing around his hate filled body. He's blocking the door to the Armory and about to pull a weapon to blast you. """)) action = input("> ") if action == "shoot!": print(dedent(""" Quick on the draw you yank out your blaster and fire it at the Gothon. 
His clown costume is flowing and moving around his body, which throws off your aim. Your laster hits his costume but misses him entirely. This completely ruins his brand new costume his mother bought him, which makes him fly into an insane rage and blast you repeatedly in the face until you are dead. Then he eats you. """)) return 'death' elif action == "dodge!": print(dedent(""" Like a world class boxer you dodge, weave, slip and slide right as the Gothon's blaster cranks a laser past your head. In the middle of your artful dodge your foot slips and you bang your head on the metal wall and pass out. You wake up shortly after only to die as the Gothon stomps on your head and eats you. """)) return 'death' elif action == "tell a joke": print(dedent(""" Lucky for you they made you learn Gothon insults in the academy. You tell the one Gothon joke you know: Lbhe zbgure vf fb sng, jura fur fvgf nebhaq gur ubhfr, fur fvgf nebhaq gur ubhfr. The Gothon stops, tries not to laugh, then busts out laughing and can't move. While he's laughing you run up and shoot him square in the head putting him down, then jump through the Weapon Armory door. """)) return 'laster_weapon_armory' else: print("DOES NOT COMPUTE!") return 'central_corridor' class LaserWeaponArmory(Scene): def enter(self): print(dedent(""" You do a dive roll into the Weapon Armory, crouch and scan the room for more GOthon that might be hiding. It's dead quiet, too quiet. You stand up and run to the far side of the room and find the neutron bomb in its container. There's a keypad lock on the box and you need the code to get the bomb out. If you get the code wrong 10 times then the lock closes forever and you can't get the bomb. The code is 3 digits. 
""")) code = f"{randint(1,9)}{randint(1,9)}{randint(1,9)}" guess = input("[keypad]> ") guesses = 0 while guess != code and guesses <10: print("BZZZZEDDD!") guesses += 1 guess = input("[keypad]> ") if guess == code: print(dedent(""" The container clicks open and the seal breaks, letting gas out. You grab the neutron bomb and run as fast as you can to the bridge where you must place it in the right spot. """)) return 'the_bridge' else: print(dedent(""" The lock buzzes one last time and then you hear a sickening melting sound as the mechanism is fused together. You decide to sit there, and finally the Gothons blow up the ship from their ship and you die. """)) return 'death' class Finished(Scene): def enter(self): print("You won! Good job.") return 'finished' class Map(object): scenes = { 'central_corridor': CentralCorridor(), 'laster_weapon_armory': LaserWeaponArmory(), 'the_bridge': TheBridge(), 'escape_pod': EscapePod(), 'death': Death(), 'finished': Finished() } def __init__(self, start_scene): self.start_scene = start_scene def next_scene(self, scene_name): val = Map.scenes.get(scene_name) return val def opening_scene(self): return self.next_scene(self.start_scene) # Finally the code that runs the game by making a Map, handing that map to # an Engine before calling play to make game work. a_map = Map('central_corridor') a_game = Engine(a_map) a_game.play() ``` I'm trying to understand what's going on here, but I don't see it. I've also deleted two of the larger classes because they don't seem to be at issue here. EDIT: The return I'm putting in is: ``` (base) ➜ Python python ex43.py ``` Then, ``` dodge! ``` This returns: ``` Like a world class boxer you dodge, weave, slip and slide right as the Gothon's blaster cranks a laser past your head. In the middle of your artful dodge your foot slips and you bang your head on the metal wall and pass out. You wake up shortly after only to die as the Gothon stomps on your head and eats you. 
You're worse than your Dad's jokes. Traceback (most recent call last): File "ex43.py", line 283, in <module> a_game.play() File "ex43.py", line 45, in play next_scene_name = current_scene.enter() AttributeError: 'NoneType' object has no attribute 'enter' ```
2020/06/01
[ "https://Stackoverflow.com/questions/62126379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9808986/" ]
The `enter` method for the `Death` Scene doesn't return anything. In Python, all functions without an explicit return statement return `None`, which explains the error you're getting. ``` class Death(Scene): quips = [ "You died. You kinda suck at this.", "Your Mom would be proud...if she were smarter.", "Such a luser.", "I have a small puppy that's better at this.", "You're worse than your Dad's jokes." ] def enter(self): print(Death.quips[randint(0, len(self.quips)-1)]) # NEED TO ADD RETURN LINE HERE <--- ```
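You can reproduce the whole failure chain in isolation: `enter()` falls off the end and implicitly returns `None`, `Map.scenes.get(None)` then returns `None`, and the engine ends up calling `.enter()` on that:

```python
class Death:
    def enter(self):
        print("You died.")
        # no return statement, so Python implicitly returns None

scenes = {'death': Death(), 'finished': Death()}

next_scene_name = Death().enter()            # None
current_scene = scenes.get(next_scene_name)  # dict.get(None) -> None (no such key)
print(next_scene_name, current_scene)        # None None
# current_scene.enter() would now raise:
# AttributeError: 'NoneType' object has no attribute 'enter'
```

So the fix is to make every `enter()` return the name of a scene that exists in `Map.scenes`.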
As the error says, ``` next_scene_name = current_scene.enter() AttributeError: 'NoneType' object has no attribute 'enter' ``` That means that your `current_scene` variable is equal to `None` when you call `current_scene.enter()` in the `play` method. You need to make sure the variable `current_scene` is properly instantiated so that it's not `None`. One way to debug this is to add a `print(current_scene)` line right after every mention of `current_scene`. This will tell you when it's equal to `None` in your code.
8,329
16,178,519
I wrote a metaclass that I'm using for logging purposes in my python project. It makes every class automatically log all activity. The only issue is that I don't want to go into every file and have to add in: ``` __metaclass__ = myMeta ``` Is there a way to set the metaclass in the top level folder so that all the files underneath use that metaclass?
2013/04/23
[ "https://Stackoverflow.com/questions/16178519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1226565/" ]
No, you can only specify the metaclass per class or per module. You cannot set it for the whole package. In Python 3.1 and onwards, you *can* intercept the `builtins.__build_class__` hook and insert a metaclass programmatically, see [Overriding the default type() metaclass before Python runs](https://stackoverflow.com/questions/15293172/overriding-the-default-type-metaclass-before-python-runs/15392133#15392133). In Python 2.7, you could replace `__builtins__.object` with a subclass that uses your metaclass. Like the `builtins.__build_class__` hook, this is *advanced hackery* and can break your code just as much as getting your metaclass in everywhere. Do so by replacing the `object` reference on the [`__builtin__` module](https://docs.python.org/2/library/__builtin__.html): ``` import __builtin__ class MetaClass(type): def __new__(mcls, name, *args): # do something in the metaclass return super(MetaClass, mcls).__new__(mcls, name, *args) orig_object = __builtin__.object class metaobject(orig_object): __metaclass__ = MetaClass def enable(): # *replace* object with one that uses your metaclass __builtin__.object = metaobject def disable(): __builtin__.object = orig_object ``` Run `enable()` before importing your package and all new-style classes (those that *can* support a metaclass) will have your metaclass. Note that this behaviour will now propagate to **all** Python code not already loaded, including the standard library, as your package imports code. You probably want to use: ``` enable() import package disable() ``` to limit the effects.
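For the Python 3 route, here is a minimal sketch of the `builtins.__build_class__` hook (restore the hook immediately — every class defined while it is active picks up the metaclass, including classes in anything you import in between):

```python
import builtins

class LoggingMeta(type):
    def __call__(cls, *args, **kwargs):
        # Log every instantiation of classes built with this metaclass.
        print("instantiating", cls.__name__)
        return super().__call__(*args, **kwargs)

_orig_build_class = builtins.__build_class__

def _build_class(func, name, *bases, metaclass=None, **kwds):
    # Inject LoggingMeta only when the class doesn't declare its own metaclass.
    if metaclass is None:
        metaclass = LoggingMeta
    return _orig_build_class(func, name, *bases, metaclass=metaclass, **kwds)

builtins.__build_class__ = _build_class
try:
    class Widget:        # picks up LoggingMeta without declaring it
        pass
finally:
    builtins.__build_class__ = _orig_build_class  # always restore the hook

w = Widget()             # prints: instantiating Widget
print(type(Widget) is LoggingMeta)  # True
```

The `try`/`finally` keeps the hook scoped to your own class definitions, mirroring the `enable()`/`disable()` pattern above.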
Here's a simple technique. Just *subclass* the *class* itself with `__metaclass__` attribute in the subclass. This process can be automated. util.py ``` class A(object): def __init__(self, first, second): self.first = first self.second = second def __str__(self): return '{} {}'.format(self.first, self.second) ``` main.py ``` from datetime import datetime from util import A def meta(*args): cls = type(*args) setattr(cls, 'created', datetime.now().ctime()) return cls try: print(A.created) except AttributeError as e: print(e) class A(A): __metaclass__ = meta print(A.created, str(A('Michael', 'Jackson'))) ``` Test; ``` $ python main.py type object 'A' has no attribute 'created' ('Wed Mar 9 22:58:16 2016', 'Michael Jackson') ```
8,330
20,428,784
Is it possible to write every line I receive from this script into a MySQL table? I want to have 2 columns: the IP address I need for the command (ipAdresse) and a part of the output of the command itself (I want to split some content of the output). I do not want to ask for any code, but I just want to know whether it's even possible to keep this code as it is and add some stuff to it, or whether I have to rewrite it to get the results I want :) Now I just write the output of the command into a text file. ``` #!/usr/bin/python import subprocess import commands ipAdresse_4 = 0 datei = open("pointerRecord.txt", "w") while (ipAdresse_4 < 255): ipAdresse_4 = ipAdresse_4 + 1 ipAdresse = '82.198.205.%d' % (ipAdresse_4,) subprocess.Popen("host %s" % ipAdresse, stdout=datei, shell=True) ```
2013/12/06
[ "https://Stackoverflow.com/questions/20428784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2968265/" ]
Instead of using `respond_to?` why don't you do: ``` def date_or_time?(obj) obj.kind_of?(Date) || obj.kind_of?(Time) end [19] pry(main)> a = Date.new => #<Date: -4712-01-01 ((0j,0s,0n),+0s,2299161j)> [20] pry(main)> date_or_time? a => true [21] pry(main)> b = DateTime.new => #<DateTime: -4712-01-01T00:00:00+00:00 ((0j,0s,0n),+0s,2299161j)> [22] pry(main)> date_or_time? b => true [23] pry(main)> c = Time.new => 2013-12-06 10:44:57 -0600 [24] pry(main)> date_or_time? c => true ```
Alternatively you could still use `respond_to?` with `:iso8601`. I believe only 'date-y' types will respond to that (Date, Time, DateTime).
8,331
11,174,532
I connect to a mysql database using pymysql and after executing a request I got the following string: `\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0`. This should be 5 characters in utf8, but when I do `print s.encode('utf-8')` I get this: `╨╝╨░╤А╨║╨░`. The string looks like byte representation of unicode characters, which python fails to recognize. So what do I do to make python process them properly?
2012/06/24
[ "https://Stackoverflow.com/questions/11174532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1477552/" ]
You want to `decode` (not `encode`) to get a unicode string from a byte string. ``` >>> s = '\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0' >>> us = s.decode('utf-8') >>> print us марка ``` Note that you may not be able to `print` it because it contains characters outside ASCII. But you should be able to see its value in a Unicode-aware debugger. I ran the above in IDLE. **Update** It seems what you actually have is this: ``` >>> s = u'\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0' ``` This is trickier because you first have to get those bytes into a bytestring before you call `decode`. I'm not sure what the "best" way to do that is, but this works: ``` >>> us = ''.join(chr(ord(c)) for c in s).decode('utf-8') >>> print us марка ``` Note that you should of course be decoding it *before* you store it in the database as a string.
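A Python 3 footnote (where every `str` is already unicode): if byte values like these end up inside a `str`, it is usually because the bytes were mis-decoded as Latin-1 somewhere upstream — an assumption about your pipeline — and the repair is a Latin-1/UTF-8 round trip:

```python
# A str whose "characters" are really UTF-8 byte values, one code point per byte.
s = '\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0'

# Encode back to the original bytes (Latin-1 maps code points 0-255 to bytes 1:1),
# then decode those bytes as the UTF-8 they always were.
fixed = s.encode('latin-1').decode('utf-8')
print(fixed)  # марка
```

This is the tidier Python 3 equivalent of the `''.join(chr(ord(c)) ...)` trick in the update above — but as the answer says, the real fix is to decode correctly at the boundary, before the value ever reaches your code.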
Mark is right: you need to decode the string. Byte strings become Unicode strings by decoding them, encoding goes the other way. This and many other details are at [Pragmatic Unicode, or, How Do I Stop The Pain?](http://bit.ly/unipain).
8,332
9,425,556
I'm trying to use the app wapiti to run some security tests on a web project running on localhost, but I have some problems with the Python syntax. I followed the instructions given on the wapiti project site and wrote this: ``` C:\Python27\python C:\Wapiti\wapiti.py http://server.com/base/url/ ``` but I get this: ``` SintaxError: Invalid Sintax ``` I have read that the syntax of Python changed in that version... I really need help, please.
2012/02/24
[ "https://Stackoverflow.com/questions/9425556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1229915/" ]
MapKit does not expose a means of performing driving directions. So, it's not as simple as asking the map to display a course from location A to location B. You have two options: 1) Integrate with Google's API to get the driving directions, and overlay your own lines onto the MapKit map. or 2) Simply direct your users out of app and delegate this functionality to the built in Map app. I have no experience with the former, but the later is very easy. Simply: ``` CLLocationCoordinate2D location = [[map userLocation] location].coordinate; double currentLat = location.latitude; double currentLong = location.longitude; NSString *googleUrl = [[NSString alloc] initWithFormat:@"http://maps.google.com/maps?saddr=%f,%f&daddr=%f,%f", currentLat, currentLong, item.latitude, item.longitude]; NSLog(@"%@", googleUrl); [[UIApplication sharedApplication] openURL:[[NSURL alloc] initWithString:googleUrl]]; ```
Actually there is no API supported by the iPhone SDK to draw a route on the map. There is a repo on GitHub which uses the Google Maps API to draw a route on the map with a map overlay. It has some limitations, but you can get help from this repo - <https://github.com/kishikawakatsumi/MapKit-Route-Directions>
8,333
32,788,322
I want to add a column in a `DataFrame` with some arbitrary value (that is the same for each row). I get an error when I use `withColumn` as follows: ``` dt.withColumn('new_column', 10).head(5) ``` ```none --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-a6d0257ca2be> in <module>() 1 dt = (messages 2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt"))) ----> 3 dt.withColumn('new_column', 10).head(5) /Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col) 1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)] 1167 """ -> 1168 return self.select('*', col.alias(colName)) 1169 1170 @ignore_unicode_prefix AttributeError: 'int' object has no attribute 'alias' ``` It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case): ``` dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5) ``` ```none [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)] ``` This is supremely hacky, right? I assume there is a more legit way to do this?
2015/09/25
[ "https://Stackoverflow.com/questions/32788322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245418/" ]
**Spark 2.2+** Spark 2.2 introduces `typedLit` to support `Seq`, `Map`, and `Tuples` ([SPARK-19254](https://issues.apache.org/jira/browse/SPARK-19254)) and following calls should be supported (Scala): ```scala import org.apache.spark.sql.functions.typedLit df.withColumn("some_array", typedLit(Seq(1, 2, 3))) df.withColumn("some_struct", typedLit(("foo", 1, 0.3))) df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2))) ``` **Spark 1.3+** (`lit`), **1.4+** (`array`, `struct`), **2.0+** (`map`): The second argument for `DataFrame.withColumn` should be a `Column` so you have to use a literal: ``` from pyspark.sql.functions import lit df.withColumn('new_column', lit(10)) ``` If you need complex columns you can build these using blocks like `array`: ``` from pyspark.sql.functions import array, create_map, struct df.withColumn("some_array", array(lit(1), lit(2), lit(3))) df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3))) df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` Exactly the same methods can be used in Scala. ``` import org.apache.spark.sql.functions.{array, lit, map, struct} df.withColumn("new_column", lit(10)) df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` To provide names for `structs` use either `alias` on each field: ``` df.withColumn( "some_struct", struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z")) ) ``` or `cast` on the whole object ``` df.withColumn( "some_struct", struct(lit("foo"), lit(1), lit(0.3)).cast("struct<x: string, y: integer, z: double>") ) ``` It is also possible, although slower, to use an UDF. **Note**: The same constructs can be used to pass constant arguments to UDFs or SQL functions.
In Spark 2.2 there are two ways to add a constant value to a column in a DataFrame: 1) Using `lit` 2) Using `typedLit`. The difference between the two is that `typedLit` can also handle parameterized Scala types, e.g. List, Seq, and Map. **Sample DataFrame:** ``` val df = spark.createDataFrame(Seq((0,"a"),(1,"b"),(2,"c"))).toDF("id", "col1") +---+----+ | id|col1| +---+----+ | 0| a| | 1| b| | 2| c| +---+----+ ``` **1) Using `lit`:** Adding a constant string value in a new column named newcol: ``` import org.apache.spark.sql.functions.lit val newdf = df.withColumn("newcol",lit("myval")) ``` Result: ``` +---+----+------+ | id|col1|newcol| +---+----+------+ | 0| a| myval| | 1| b| myval| | 2| c| myval| +---+----+------+ ``` **2) Using `typedLit`:** ``` import org.apache.spark.sql.functions.typedLit df.withColumn("newcol", typedLit(("sample", 10, .044))) ``` Result: ``` +---+----+-----------------+ | id|col1| newcol| +---+----+-----------------+ | 0| a|[sample,10,0.044]| | 1| b|[sample,10,0.044]| | 2| c|[sample,10,0.044]| +---+----+-----------------+ ```
8,334
36,791,792
I am using django-cors-headers to overcome CORS issues in Python Django. But I am getting: > > 'Access-Control-Allow-Origin' header contains multiple values '\*, \*', but only one is allowed. while trying to access using angularjs from <http://localhost:8000> > > > Here are my settings for CORS that I am using. ``` INSTALLED_APPS = INSTALLED_APPS + ['corsheaders'] MIDDLEWARE_CLASSES = MIDDLEWARE_CLASSES + ['corsheaders.middleware.CorsMiddleware', 'corsheaders.middleware.CorsPostCsrfMiddleware'] CORS_ORIGIN_ALLOW_ALL = True CORS_REPLACE_HTTPS_REFERER = True CORS_ALLOW_HEADERS = ( 'x-requested-with', 'content-type', 'accept', 'origin', 'authorization', 'x-csrftoken', 'accept-encoding' ) CORS_ALLOW_METHODS = ( 'GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS' ) ``` If anyone has resolved this issue, please let me know.
2016/04/22
[ "https://Stackoverflow.com/questions/36791792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1433639/" ]
You need to do ``` MIDDLEWARE_CLASSES = ( ... 'corsheaders.middleware.CorsMiddleware', 'django.middleware.common.CommonMiddleware', ... ) CORS_ORIGIN_ALLOW_ALL = True # for testing ``` Note that `CorsMiddleware` is placed above `CommonMiddleware`. Hope this helps.
``` CORS_ORIGIN_ALLOW_ALL = False ``` Change `CORS_ORIGIN_ALLOW_ALL` to `False`.
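The middleware-ordering fix above can be checked mechanically. A minimal sketch — the two middleware paths are the real django-cors-headers/Django names, but the settings list and helper function here are illustrative, not part of any answer:

```python
# Illustrative Django MIDDLEWARE list: CorsMiddleware must come before
# CommonMiddleware so the CORS headers are added exactly once.
MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
]

def cors_before_common(middleware):
    """Return True if CorsMiddleware is listed before CommonMiddleware."""
    cors = middleware.index("corsheaders.middleware.CorsMiddleware")
    common = middleware.index("django.middleware.common.CommonMiddleware")
    return cors < common

print(cors_before_common(MIDDLEWARE))  # → True
```

A check like this could live in a settings sanity test, so a reordered middleware list fails fast instead of producing duplicated headers.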
8,337
30,930,052
(I'm using Python 3.4 for this, on Windows) So, I have this code I whipped out to better show my troubles: ``` #!/usr/bin/env python # -*- coding: utf-8 -*- import os os.startfile('C:\\téxt.txt') ``` On IDLE it works as it should (it just opens that file I specified), but on Console (double-click) it keeps saying Windows can't find the file. Of course, if I try to open "text.txt" instead it works perfectly, as long as it exists. It's slowly driving me insane. Someone help me, please.
2015/06/19
[ "https://Stackoverflow.com/questions/30930052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5026708/" ]
Try this: select the table view and go to the Attribute Inspector. Under Separator, set the table view separator color to Clear Color.
Try this: select the table view and go to the Attribute Inspector. Find the Separator property and change it from Default to None, and also set the color to Clear Color. I hope it will work for you... good luck! :)
8,338
11,697,096
I am trying to send a message through GCM (Google Cloud Messaging). I have registered through Google APIs, I can send a regID to my website (which is a Google App Engine Backend) from multiple Android test phones. However, I can't send anything to GCM from Google App Engine. Here is what I am trying to use. ``` regId = "APA91b..." json_data = {"collapse_key" : "Food-Promo", "data" : { "Category" : "FOOD", "Type": "VEG", }, "registration_ids": [regId], } url = 'https://android.googleapis.com/gcm/send' apiKey = "AI..." myKey = "key=" + apiKey headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) data2 = {"title": title} data3 = urllib.urlencode(data2) req = urllib2.Request(url, data, headers) f = urllib2.urlopen(req) response = f.read() f.close() logging.debug("***!!!!!!!WriteEntry TEST ----- Response: " + response) ``` And here is the error that I am receiving. ``` Traceback (most recent call last): File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__ handler.post(*groups) File "/base/data/home/apps/s~journaltestza/26.360625174851783344/main.py", line 213, in post f = urllib2.urlopen(req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 387, in open response = meth(req, response) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 425, in error return self._call_chain(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 400: 
Bad Request ``` Thanks!
2012/07/28
[ "https://Stackoverflow.com/questions/11697096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1256336/" ]
What are data2 and data3 used for? The data you are posting was not proper JSON, so you need to use json.dumps(data). The code should be like this: ``` json_data = {"collapse_key" : "Food-Promo", "data" : { "Category" : "FOOD", "Type": "VEG", }, "registration_ids": [regId], } url = 'https://android.googleapis.com/gcm/send' apiKey = "AI..." myKey = "key=" + apiKey data = json.dumps(json_data) headers = {'Content-Type': 'application/json', 'Authorization': myKey} req = urllib2.Request(url, data, headers) f = urllib2.urlopen(req) response = json.loads(f.read()) reply = {} if response['failure'] == 0: reply['error'] = '0' else: reply['error'] = '1' return HttpResponse(json.dumps(reply), mimetype="application/javascript") ```
Try using [python-gcm](https://github.com/geeknam/python-gcm). It can handle errors as well.
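The core of the fix above — serialize the payload with `json.dumps` rather than `urllib.urlencode` — can be sketched in Python 3 syntax. The API key and registration ID below are placeholders, and no request is actually sent:

```python
import json
import urllib.request

api_key = "AI..."      # placeholder API key
reg_id = "APA91b..."   # placeholder registration ID

payload = {
    "collapse_key": "Food-Promo",
    "data": {"Category": "FOOD", "Type": "VEG"},
    "registration_ids": [reg_id],
}

# json.dumps produces the JSON body the endpoint expects; urlencode would not.
body = json.dumps(payload).encode("utf-8")
headers = {"Content-Type": "application/json", "Authorization": "key=" + api_key}

req = urllib.request.Request("https://android.googleapis.com/gcm/send", body, headers)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method())  # → POST
```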
8,340
67,828,477
Iterable objects are those that implement the `__iter__` function, which returns an iterator object, i.e. an object providing the functions `__iter__` and `__next__` and behaving correctly. Usually the size of the iterable object is not known beforehand, and an iterable object is not expected to know how long the iteration will last; however, there are some cases in which knowing the length of the iterable is valuable, for example, when creating an array. `list(x for x in range(1000000))`, for instance, creates an initial array of small size, copies it after it is full, and repeats many times, as explained [here](https://stackoverflow.com/a/33045038/6087087). Of course, it is not that important in this example, but it explains the point. Is there a protocol in use for those iterable objects that know their length beforehand? That is, is there a protocol extending [Sized and Iterable](https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes) but not [Collection or Reversible](https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes)? It seems like there is no such protocol in the language features; is there such a protocol in well-known third-party libraries? How does this discussion relate to generators?
2021/06/03
[ "https://Stackoverflow.com/questions/67828477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6087087/" ]
It sounds like you're asking about something like `__length_hint__`. Excerpts from [PEP 424 – A method for exposing a length hint](https://peps.python.org/pep-0424/): > > CPython currently defines a `__length_hint__` method on several types, such as various iterators. This method is then used by various other functions (such as `list`) to presize lists based on the estimate returned by `__length_hint__`. Types which are not sized, and thus should not define `__len__`, can then define `__length_hint__`, to allow estimating or computing a size (such as many iterators). > > > > > Being able to pre-allocate lists based on the expected size, as estimated by `__length_hint__`, can be a significant optimization. CPython has been observed to run some code faster than PyPy, purely because of this optimization being present. > > > For example, `range` iterators support this ([Try it online!](https://tio.run/##K6gsycjPM7YoKPr/P7NEwVYhsyS1SKMoMS89VcPQwMBAU5OroCgzr0Qjs0QvPj4nNS@9JCM@AygQH68BlMtLrQBJ4VX0/z8A "Python 3.8 (pre-release) – Try It Online")): ``` it = iter(range(1000)) print(it.__length_hint__()) # prints 1000 next(it) print(it.__length_hint__()) # prints 999 ``` And `list` iterators even take list length changes into account ([Try it online!](https://tio.run/##K6gsycjPM7YoKPr/P1HBViHaLz8vNVZBS8HQgCuzBCiQWZJapJGoyVVQlJlXopFZohcfn5Oal16SEZ8BFIiP19DU5MpLrQBJ4VeUqFeQX6BBSE1iQUFqXooGyBV4lf7/DwA "Python 3.8 (pre-release) – Try It Online")): ``` a = [None] * 10 it = iter(a) print(it.__length_hint__()) # prints 10 next(it) print(it.__length_hint__()) # prints 9 a.pop() print(it.__length_hint__()) # prints 8 a.append(None) print(it.__length_hint__()) # prints 9 ``` Generator iterators don't support it, but you can support it in other iterators you write. Here's a demo iterator that... * Produces 10,000 elements. * Hints at having 5,000 elements. * After every 1,000 elements it shows the memory size of the list being built. 
``` import gc beacon = object() class MyIterator: def __init__(self): self.n = 10_000 def __iter__(self): return self def __length_hint__(self): print('__length_hint__ called') return 5_000 def __next__(self): if self.n == 0: raise StopIteration self.n -= 1 if self.n % 1_000 == 0: for obj in gc.get_objects(): if isinstance(obj, list) and obj and obj[0] is beacon: print(obj.__sizeof__()) return beacon list(MyIterator()) ``` Output ([Try it online!](https://tio.run/##bZHBSsQwEIbveYq5yCagJYsIIuwDePDkUSRk02l3pE5KEsH15WvSLnZtm0vI8OWbyZ/@nE6e7x/7MAz02fuQoHVCHNE6z3AAf/xAl6QSwnU2Rng5PycMNvnwJCCvGhswhpiSMTJi16ipXlY5VkWy10Zrfc1nx5oPmL4Cj9eu2A65TSdzIt5o0YdclrsFBc52HdY7tVQ/LOZg/N6QUvM3@gH0XB9FliLCa/L9lAN5Xr73Lj94w3UD@9J9w9n4UHIG4px91WIyU@pRqv/gRUiROCbLDmUGb6GjmBRYrkfLZX/T7xmE6SPXmjm8zFbGRPpB3@Qs1Cq0SSFEaSPn/8/kMPwC "Python 3.8 (pre-release) – Try It Online")): ``` __length_hint__ called 45088 45088 45088 45088 45088 50776 57168 64360 72456 81560 ``` We see that `list` asks for a length hint and from the start pre-allocates enough memory for 5,000 references of 8 bytes each, plus 12.5% overallocation. After the first 5,000 elements, it doesn't ask for length hints anymore, and keeps increasing its size bit by bit. If my `__length_hint__` instead accurately returns 10,000, then `list` instead pre-allocates `90088` bytes and that remains until the end.
If I now understand your question, you're still trying to combine two concepts that don't combine in quite this way. `generator` is a subclass of `iterator`; it's a process. `len` applies to data objects -- in particular, to the *iterable* object, as opposed to the *iterator* that traverses the object. Therefore, a generator doesn't really have a length of its own. It returns a sequence of values, and that sequence has a length (when the generator finishes). Can you describe the concept you have of "generator with length" -- if it differs from what I just described? If you keep that distinction in mind, then yes, you can implement `__len__` as an extension to your class. You can add anything you like -- say, a `sqrt` function (See Conway's surreal numbers for details).
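A compact illustration of the protocol described above: `operator.length_hint` consults `__len__`, then `__length_hint__`, then a caller-supplied default, so an iterator can advertise its remaining length without being Sized. The demo class here is hypothetical:

```python
import operator

class Countdown:
    """Iterator that knows how many items remain and exposes a length hint."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.n == 0:
            raise StopIteration
        self.n -= 1
        return self.n
    def __length_hint__(self):
        return self.n

it = Countdown(5)
print(operator.length_hint(it))   # 5
next(it)
print(operator.length_hint(it))   # 4

def gen():
    yield 1
print(operator.length_hint(gen(), -1))  # -1: a generator gives no hint
```

This also demonstrates the point made about generators: a plain generator object defines neither `__len__` nor `__length_hint__`, so `length_hint` falls back to the default.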
8,343
4,364,087
Can this be somehow overcome? Can a child process create a subprocess? The problem is, I have a ready application which needs to call a Python script. This script on its own works perfectly, but it needs to call existing shell scripts. Schematically the problem is in the following code: ### parent.py ``` import subprocess subprocess.call(['/usr/sfw/bin/python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ``` ### child.py ``` import sys import subprocess print sys.argv[0] print sys.argv[1] subprocess.call(['ls -l'], shell=True) exit ``` ### Running child.py ``` python child.py 1 2 all is ok ``` ### Running parent.py ``` python parent.py Traceback (most recent call last): File "/usr/apps/openet/bmsystest/relAuto/variousSW/child.py", line 2, in ? import subprocess ImportError: No module named subprocess ```
2010/12/06
[ "https://Stackoverflow.com/questions/4364087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457921/" ]
> > There should be nothing stopping you from using subprocess in both child.py and parent.py > > > I am able to run it perfectly fine. :) **Issue Debugging**: > > You are using `python` and `/usr/sfw/bin/python`. > > > 1. Is bare python pointing to the same python? 2. Can you check by typing 'which python'? I am sure if you did the following, it will work for you. ``` /usr/sfw/bin/python parent.py ``` Alternatively, can you change your `parent.py` code to ``` import subprocess subprocess.call(['python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ```
Using `subprocess.call` is not the proper way to do it. In my view, `subprocess.Popen` would be better. parent.py: ``` import subprocess process = subprocess.Popen(['python', './child.py', 'arg1', 'arg2'],\ stdin=subprocess.PIPE, stdout=subprocess.PIPE,\ stderr=subprocess.PIPE) process.wait() print process.stdout.read() ``` child.py ``` import subprocess import sys print sys.argv[1:] process = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE) process.wait() print process.stdout.read() ``` Output of the program: ``` python parent.py ['arg1', 'arg2'] . .. chid.py child.py .child.py.swp parent.py .ropeproject ```
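As a sanity check of the point above — a child process can itself spawn a subprocess — here is a self-contained Python 3 sketch that uses `sys.executable` and `-c` instead of separate script files:

```python
import subprocess
import sys

# The "child" program runs a nested subprocess of its own.
child_code = (
    "import subprocess, sys;"
    "subprocess.call([sys.executable, '-c', \"print('grandchild ok')\"])"
)

result = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # → grandchild ok
```

Using `sys.executable` sidesteps the original problem entirely: parent and child are guaranteed to run under the same interpreter.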
8,344
38,909,543
I am trying to convert a string to hex character by character, but I can't figure it out in Python 3. In older Python versions, what I have below works: ``` test = "This is a test" for c in range(0, len(test) ): print( "0x%s"%test[c].encode("hex") ) ``` But with Python 3 I am getting the following error: LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs. Can anyone help to tell me what the conversion would be in Python 3? Thanks in advance
2016/08/12
[ "https://Stackoverflow.com/questions/38909543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902666/" ]
In Python 3.x, use [`binascii`](https://docs.python.org/3.1/library/binascii.html) instead of hex: ``` >>> import binascii >>> binascii.hexlify(b'< character / string>') ```
How about: ``` >>> test = "This is a test" >>> for c in range(0, len(test) ): ... print( "0x%x"%ord(test[c])) ... 0x54 0x68 0x69 0x73 0x20 0x69 0x73 0x20 0x61 0x20 0x74 0x65 0x73 0x74 ```
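Both suggestions above can be combined in one Python 3 sketch: per-character formatting with `ord` (replacing the Python 2 `str.encode("hex")` idiom) and `binascii.hexlify` for the whole string at once:

```python
import binascii

test = "This is a test"

# Per character, the Python 3 replacement for str.encode("hex"):
per_char = ["0x%02x" % ord(c) for c in test]
print(per_char[:4])  # ['0x54', '0x68', '0x69', '0x73']

# Whole string at once (bytes in, bytes out):
whole = binascii.hexlify(test.encode("ascii"))
print(whole[:8])     # b'54686973'
```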
8,346
14,307,518
I am only an hour into learning how [cron](http://en.wikipedia.org/wiki/Cron) jobs work, and this is what I have done so far. I’m using `crontab -e` to add my cron command, which is: `0/1 * * * * /usr/bin/python /home/my_username/hello.py > /home/my_username/log.txt` `crontab -l` confirms that my command is there. Hello.py: ``` #!/usr/bin/python # Hello world python program print "Hello World!" ``` But I don’t see anything in the log file. Can someone please explain what I am doing wrong?
2013/01/13
[ "https://Stackoverflow.com/questions/14307518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972942/" ]
Experiment shows that the `0/1` seems to be the problem. `0/1` *should* be equivalent to `*`. If you replace `0/1` with `*`, it should work. Here's my experimental crontab: ``` 0/1 * * * * echo 0/1 >> cron0.log * * * * * echo star >> cron1.log ``` This creates `cron1.log` but not `cron0.log`. I'll look into this and try to figure out why `0/1` isn't working, but for now just use `*` and it should work. Update: The `foo/bar` syntax is specific to the Vixie cron implementation, which is used by most Linux systems and by MacOS X but is not universal. The usual way to run a command every minute is to specify just `*` in the first field. To run a command every 5 minutes, *if* your cron supports it, specify `*/5`. Here's what the `crontab(5)` man page says: > > Step values can be used in conjunction with ranges. Following a range > with `/<number>` specifies skips of the number's value through the > range. For example, `0-23/2` can be used in the hours field to specify > command execution every other hour (the alternative in the V7 standard > is `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after > an asterisk, so if you want to say "every two hours", just use `*/2`. > > > I'm not even sure what `0/1` means. **UPDATE 2:** Ok, here's what I've found. Given that fields 2 through 5 are all `*`, setting the first field (specifying minutes) to `*` causes the job to run once a minute. `*/2` runs every 2 minutes, and `*/3` runs every 3 minutes. This is all as expected. Setting the first field to any of `0/1`, `0/2`, or `0/3` causes the job to run only at the top of the hour, i.e., it's equivalent to just `0`. This is not what I would have expected from the description in the man page. The [Wikipedia quote](http://en.wikipedia.org/wiki/Cron#cite_ref-8) in [jgritty's answer](https://stackoverflow.com/a/14307763/827263): > > Some versions of cron may not accept a value preceding "/" if it is > not a range, such as "0". 
An alternative would be replacing the zero with an asterisk. > > > doesn't seem to be entirely correct, at least for the version of Vixie cron I'm using; the `0/1` is accepted without complaint, but it doesn't mean what I'd expect and it doesn't seem particularly useful.
`0/1` seems to be formatted wrong for your version of cron. I found this on [wikipedia](http://en.wikipedia.org/wiki/Cron#cite_ref-8): > > Some versions of cron may not accept a value preceding "/" if it is not a range, > such as "0". An alternative would be replacing the zero with an asterisk. > > > So Keith Thompson's answer should work, and so should: `*/1 * * * *`
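The step syntax discussed in both answers (`*/n` steps through the range) can be illustrated with a tiny expander for the minutes field. This is a sketch of the man-page semantics for illustration only, not an implementation of any real cron:

```python
def expand_minute_field(field):
    """Expand a cron minutes field of the form '*', '*/n', or 'a-b/n'
    into the list of matching minute values (0-59)."""
    rng = range(0, 60)
    step = 1
    if "/" in field:
        base, step_s = field.split("/")
        step = int(step_s)
    else:
        base = field
    if base != "*":
        lo, hi = (int(x) for x in base.split("-"))
        rng = range(lo, hi + 1)
    # Steps skip through the range: keep every step-th value.
    return [m for i, m in enumerate(rng) if i % step == 0]

print(expand_minute_field("*/15"))    # [0, 15, 30, 45]
print(expand_minute_field("0-10/5"))  # [0, 5, 10]
```

Under these semantics `*/1` and `*` expand to the same sixty minutes, which is why `*` is the portable way to say "every minute".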
8,349
60,963,452
I am loading in a very large image (60,000 x 80,000 pixels) and am exceeding the max pixels I can load: ```none cv2.error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/imgcodecs/src/loadsave.cpp:75: error: (-215:Assertion failed) pixels <= CV_IO_MAX_IMAGE_PIXELS in function 'validateInputImageSize' ``` From what I have found this is referring to the limitation imposed on [line 65](https://github.com/opencv/opencv/blob/8eba3c1e7e8975ff1d263a41a5753efaa51d54fc/modules/imgcodecs/src/loadsave.cpp#L65) Ideally I'd change that to deal with at least 5 gigapixel images ``` #define CV_IO_MAX_IMAGE_PIXELS (1<<33) ``` I have seen some workarounds for this ([OpenCV image size limit](https://stackoverflow.com/questions/51493373/opencv-image-size-limit)) but those don't seem to address the problem which is an arbitrary definition (I'm working off a high performance server with 700gb ram so compute not an issue). My issue is that **I have no idea where this file is**. The error points me towards this "travis" directory which doesn't exist locally for me and in my local environment the c++ files aren't available. Any idea on where to look to find the c++ library?
2020/04/01
[ "https://Stackoverflow.com/questions/60963452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10537728/" ]
You have to modify the OpenCV source files and then compile it yourself. EDIT: You can also set an environment variable instead: ``` export CV_IO_MAX_IMAGE_PIXELS=1099511627776 ```
For my problem I should have specified it was a .tif file (NOTE most large images will be in this file format anyway). In which case a very easy way to load it into a numpy array (so it can then work with OpenCV) is with the package tifffile. ``` pip install tifffile ``` This will install it in your Python environment. ``` import tifffile as tifi img = tifi.imread("VeryLargeFile.tif") ``` From here you can use it as you would with any numpy array and it is fully compatible with OpenCV etc.
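If the environment-variable route from the first answer is used from Python itself, the variable should be set before the image is read — a minimal sketch (`cv2` is deliberately not imported here, and the value is 2**40, matching the `export` line above):

```python
import os

# Set before cv2 reads the image; this is the variable OpenCV's
# image codecs consult for the pixel-count limit.
os.environ["CV_IO_MAX_IMAGE_PIXELS"] = str(2 ** 40)

# import cv2
# img = cv2.imread("huge.tif")  # would now be allowed up to ~1.1 terapixels
print(os.environ["CV_IO_MAX_IMAGE_PIXELS"])  # → 1099511627776
```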
8,350
40,012,264
I am new to Python. I am trying to print the sum of all duplicate numbers and the product of all non-duplicate numbers from a Python list. For example, list = [2,2,4,4,5,7,8,9,9]. What I want is sum = 2+2+4+4+9+9 and product = 5\*7\*8.
2016/10/13
[ "https://Stackoverflow.com/questions/40012264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4024000/" ]
You should not create a new `User` object when writing the parcel. You are operating on the current object instance. I guess you can perform all the logic for object creation and reading the parcel in the `createFromParcel()` method but I have seen the pattern below more often where you pass the parcel into a constructor for the object and handle it there. Make sure you read and write the fields to the parcel in the same exact order. ``` public class User implements Parcelable { private String userName; private String passWord; private boolean oldUser; public User(Parcel in) { userName = in.readString(); passWord = in.readString(); oldUser = in.readInt() == 1; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeString(userName); dest.writeString(passWord); dest.writeInt(oldUser ? 1 : 0); } public String getUserName() { return userName; } public String getPassWord() { return passWord; } public boolean getOldUser() { return oldUser; } public void setUserName(String userName) { this.userName = userName; } public void setPassWord(String passWord) { this.passWord = passWord; } public void setOldUser(boolean oldUser) { this.oldUser = oldUser; } @Override public int describeContents() { return 0; } public static final Parcelable.Creator<User> CREATOR = new Parcelable.Creator<User>() { public User createFromParcel(Parcel in) { return new User(in); } public User[] newArray(int size) { return new User[size]; } }; } ```
For really **Boolean** (not **boolean**) I would go with: ``` @Override public void writeToParcel(Parcel out, int flags) { if (open_now == null) { out.writeInt(-1); } else { out.writeInt(open_now ? 1 : 0); } ``` and ``` private MyClass(Parcel in) { switch (in.readInt()) { case 0: open_now = false; break; case 1: open_now = true; break; default: open_now = null; break; } ``` This will help you to keep "null" value correctly.
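The computation the question above asks for — sum the duplicated numbers, multiply the unique ones — can be sketched with `collections.Counter`:

```python
from collections import Counter

def dup_sum_and_unique_product(nums):
    """Sum all values that occur more than once; multiply the rest."""
    counts = Counter(nums)
    dup_sum = sum(n for n in nums if counts[n] > 1)
    product = 1
    for n in nums:
        if counts[n] == 1:
            product *= n
    return dup_sum, product

print(dup_sum_and_unique_product([2, 2, 4, 4, 5, 7, 8, 9, 9]))  # (30, 280)
```

For the example list this gives 2+2+4+4+9+9 = 30 and 5\*7\*8 = 280.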
8,352
41,065,879
I am having trouble executing this python command and it keeps flagging this specific line. I've read the other posts about EOL, but I can't seem to find an issue with the types of quotes used. ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files” + id + ".txt" SyntaxError: EOL while scanning string literal ```
2016/12/09
[ "https://Stackoverflow.com/questions/41065879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7273874/" ]
The quote character after Text\_Files is incorrect. You could try this: ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files" + id + ".txt" ``` However, I would recommend using the string formatting syntax instead: ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files{}.txt".format(id) ``` Also, your variable named id is shadowing the built-in id, so best practice is to use another variable name.
You have used the wrong quote at the end of `Text_Files” + id +` ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files” + id + ".txt" ``` Instead use this (a straight double quote at the end of the string): ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files" + id + ".txt" ```
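Once the curly quote is replaced, the concatenation and formatting styles suggested above produce the same path — a quick sketch (the `file_id` value is illustrative, and the name avoids shadowing the built-in `id`):

```python
file_id = "report42"  # illustrative value

# Concatenation with straight quotes only:
logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files" + file_id + ".txt"

# Equivalent, and easier to read:
logfile2 = f"/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files{file_id}.txt"

print(logfile == logfile2)  # → True
```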
8,353
2,366,056
I'm learning Python with 'Dive Into Python 3' and it's very hard to remember everything without writing something, but there are no exercises in this book. So I ask here: where can I find them, to remember everything better?
2010/03/02
[ "https://Stackoverflow.com/questions/2366056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/271388/" ]
I used [ProjectEuler.net](http://projecteuler.net/) when learning Python. It also helped sharpen my math skills.
Find a good Code Kata website: Here's a list I compiled. <http://slott-softwarearchitect.blogspot.com/2009/08/code-kata-resources.html> I've also collected lots of exercises: <http://homepage.mac.com/s_lott/books/python.html> This book, however, covers only Python 2.6, so it may be more confusing than helpful.
8,354
57,901,183
I am using Python to parse a CSV file but I am facing an issue with how to extract the "Davies" element from the second row. The CSV looks like this: ``` "_submissionusersID","_submissionresponseID","username","firstname","lastname","userid","phone","emailaddress","load_date" "b838b35d-ca18-4c7c-874a-828298ae3345","e9cde2ff-33a7-477e-b3b9-12ceb0d214e0","DAVIESJO","John","Davies","16293","","john_davies@test2.com","2019-08-30 15:37:03" "00ec3205-6fcb-4d6d-b806-25579b49911a","e9cde2ff-11a7-477e-b3b9-12ceb0d934e0","MORANJO","John","Moran","16972","+1 (425) 7404555","brian_moran2@test2.com","2019-08-30 15:37:03" "cc44e6bb-af76-4165-8839-433ed8cf6036","e9cde2ff-33a7-477e-b3b9-12ceb0d934e0","TESTNAN","Nancy","Test","75791","+1 (412) 7402344","nancy_test@test2.com","2019-08-30 15:37:03" "a8ecd4db-6c8d-453c-a2a7-032553e2f0e6","e9cde2ff-33a7-477e-b3b9-12ceb0d234e0","SMITHJO","John","Smith","197448","+1 (415) 5940445","john_smith@test2.com","2019-08-30 15:37:03" ``` I'm stuck here: ``` with open('Docs/CSV/submis/submis.csv') as csv_file: csv_reader = csv.DictReader(csv_file) for row in csv_reader: ```
2019/09/12
[ "https://Stackoverflow.com/questions/57901183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4655668/" ]
In the end I sort of solved this by repeatedly subscribing and unsubscribing from the ZMQ socket. ``` # This is run every time the subscriber receive function is called socket.setsockopt(zmq.SUBSCRIBE, '') md = socket.recv_json() msg = socket.recv() socket.setsockopt(zmq.UNSUBSCRIBE, '') ``` Essentially, I made it so that my subscriber socket doesn't care about the other messages that come in other than the one message that it has received until the next time it tries to grab a message. I don't believe that this is the best solution for this problem, as there are costs involved when repeatedly subscribing and unsubscribing. Hoping that there might be a better way to do this but so far I haven't been able to find it.
Are you looking for `zmq.CONFLATE` option ("Last Message Only")? Something like this in subscriber side: ``` context = zmq.Context() socket = context.socket(zmq.SUB) socket.setsockopt(zmq.SUBSCRIBE, '') socket.setsockopt(zmq.CONFLATE, 1) # last msg only. socket.connect("tcp://localhost:%s" % port) # must be placed after above options. ``` --- [Relevant post](https://stackoverflow.com/a/48461030/3702377) [Learn more](http://api.zeromq.org/4-0:zmq-setsockopt)
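For the CSV question above, `csv.DictReader` keys each row by the header names, so "Davies" can be pulled out directly — a sketch with an inline sample standing in for the real `Docs/CSV/submis/submis.csv`:

```python
import csv
import io

# Inline sample with a subset of the real columns.
data = io.StringIO(
    '"username","firstname","lastname"\n'
    '"DAVIESJO","John","Davies"\n'
    '"MORANJO","John","Moran"\n'
)

rows = list(csv.DictReader(data))
print(rows[0]["lastname"])  # → Davies
```

With the real file, the same lookup inside the asker's `with open(...)` block would be `row["lastname"]` on the first data row.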
8,356
18,046,817
I have been trying to add sub-directories to an "items" list and have settled on accomplishing this with the below code. ``` root, dirs, files = iter(os.walk(PATH_TO_DIRECTORY)).next() items = [{ 'label': directory, 'path': plugin.url_for('test') } for count, directory in enumerate(dirs)] ``` The above works, but it is surprisingly slow. The os.walk is very quick, but the loop is slow for some reason. I tried to do it all in one go, adding to the "items" list during the os.walk like below ``` for root, dirs, files in os.walk(PATH_TO_DIRECTORY): ``` but couldn't quite get the right syntax to add the directories to the list. Every single example of os.walk I could find online simply did a `print` of dirs or files, which is fine as an example of its use - but not very useful in the real world. I am new to Python, only just started to look at it today. Could someone advise how to get a list like in my first example but without the separate loop? (I realise it's called a "directory" or something in python, not a list. Let's just call it an array and be done with it... :-) Thanks
2013/08/04
[ "https://Stackoverflow.com/questions/18046817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1743833/" ]
I have no idea what plugin.url\_for() does, but you should be able to speed it up a bit by doing it this way: ``` plugin_url_for = plugin.url_for _, dirs, _ = iter(os.walk(PATH_TO_DIRECTORY)).next() items = [{ 'label': directory, 'path': plugin_url_for('test') } for directory in dirs] ``` I dropped the root and files variables as it seems you are not using them, and also removed enumerate on dirs as you are not making any use of it. However, put them back if you need them for some reason. Please test it and let me know if it helped. I cannot test it properly myself for obvious reasons.
``` dirlist = [] for root, dirs, files in os.walk(PATH_TO_DIRECTORY): dirlist += dirs ``` Should do the trick! For your revised question, I think what you really need is probably the output of: ``` Dirdict = {} for (root, dirs, files) in os.walk(START): Dirdict[root] = dirs ``` You might wish or need some encoding of root with plugin\_url(root); this would give you a single dictionary where you could look up plugin\_url(some\_path) and get a list of all the directories in that path. What you are doing is creating a list of dictionaries, all with a single key. I suspect that you might be after the namedtuple available from the collections module.
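The accumulation pattern from the answers above, exercised against a throwaway directory tree (Python 3 syntax, so `iter(...).next()` becomes plain iteration):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Build a small tree: alpha/, beta/, beta/gamma/
    for name in ("alpha", "beta", "beta/gamma"):
        os.makedirs(os.path.join(root, name))

    # Collect directory names from every level of the walk.
    dirlist = []
    for _, dirs, _ in os.walk(root):
        dirlist += dirs

    print(sorted(dirlist))  # ['alpha', 'beta', 'gamma']
```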
8,357
28,147,183
I was reading about builder.connect\_signals, which maps handlers of glade files to methods in your Python file. It apparently works, except for the main window, which is not destroyed when you close it. If you run it from a terminal, it is still running and you have to Ctrl-C to completely close the application. Here is my Python code: ``` #!/usr/bin/env python import pygtk import gtk #from gi.repository import Gtk import gtk.glade class Mixer: def __init__(self): self.gladefile = "mixer3.glade" self.wTree = gtk.Builder() self.wTree.add_from_file(self.gladefile) window = self.wTree.get_object("window1") #if (window): # window.connect("destroy", gtk.main_quit) #line_btn = self.wTree.get_object("toggle_linein") #line_btn.connect("on_toggle_linein_activate", btn_linein_activated) self.wTree.connect_signals(self) window.show_all() # must have! def on_toggle_linein_clicked(self, widget): print "Clicked" def Destroy(self, obj): gtk.main_quit() if __name__ == "__main__": m = Mixer() gtk.main() ```
2015/01/26
[ "https://Stackoverflow.com/questions/28147183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/598070/" ]
On closing the window, the window is destroyed but the program's main loop doesn't stop; you must connect the **destroy** event to the method/function that quits the loop started on the last line of the code. Make some changes in the lines below: ``` #if (window): # window.connect("destroy", gtk.main_quit) ``` change to: ``` if (window): window.connect("destroy", self.Destroy) ```
You can use `GtkApplication` and `GtkApplicationWindow` to manage it for you. When Application has no more open windows, it will automatically terminate. ``` #!/usr/bin/env python import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk from gi.repository import Gio class Mixer(Gtk.Application): def __init__(self): super(Mixer, self).__init__(application_id="org.test", flags=Gio.ApplicationFlags.FLAGS_NONE) def do_activate(self): self.gladefile = "mixer3.glade" self.wTree = Gtk.Builder() self.wTree.add_from_file(self.gladefile) # window1 must be an ApplicationWindow in glade file window = self.wTree.get_object("window1") self.add_window(window) # window should be added to application # but only after 'activate' signal window.show_all() if __name__ == "__main__": m = Mixer() m.run() # No gtk.main(), GtkApplication manages it ```
8,358
69,398,944
I have this simple code to connect and download some data using `GRPC`:

```
creds = grpc.ssl_channel_credentials()
channel = grpc.secure_channel(f'{HOST}:{PORT}', credentials=creds)
stub = liveops_pb2_grpc.LiveOpsStub(channel=channel)
request = project_pb2.ListProjectsRequest(organization=ORGANIZATION)
projects = stub.ListProjects(request=request)
print(projects)
```

This worked fine on Wednesday. It runs in a Docker container with `Python 3.8.10` and `protobuf==3.18.0`, `grpcio==1.40.0`, `grpcio-tools==1.40.0`. Today I updated macOS Big Sur to 11.6, and after finishing some extra features on the code I see that it returns:

```
E0930 21:12:04.108551900 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
E0930 21:12:04.194319000 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
E0930 21:12:04.286163700 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
Traceback (most recent call last):
  File "", line 302, in <module>
    projects = liveops_stub.ListProjects(request=request)
  File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses"
    debug_error_string = "{"created":"@1633036324.286560700","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3186,"referenced_errors":[{"created":"@1633036324.286548700","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":146,"grpc_status":14}]}"
>
```

This seems to be related to SSL certificates. If I check the `/etc/ssl/certs` folder it is empty, so could it be that the OS SSL certificates have been erased? How can I fix it?
2021/09/30
[ "https://Stackoverflow.com/questions/69398944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5556466/" ]
You can do that with the method [String#[]](https://ruby-doc.org/core-2.7.0/String.html#method-i-5B-5D) with an argument that is a regular expression.

```
r = /.*?\.(?:rb|com|net|br)(?!\.br)/

'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'[r]
  #=> "giovanna.macedo@lojas100.com.br"
'alvaro-neves@stockshop.com-215000695716b.ct.domain.com.br'[r]
  #=> "alvaro-neves@stockshop.com"
'filiallojas123@filiallojas.net-215000695716b.ct.domain.com.br'[r]
  #=> "filiallojas123@filiallojas.net"
```

The regular expression reads as follows: "Match zero or more characters non-greedily (`?`), followed by a period, followed by `'rb'` or `'com'` or `'net'` or `'br'`, which is not followed by `.br`". `(?!\.br)` is a *negative lookahead*.

Alternatively the regular expression can be written in *free-spacing mode* to make it self-documenting:

```
r = /
    .*?     # match zero or more characters non-greedily
    \.      # match '.'
    (?:     # begin a non-capture group
      rb    # match 'rb'
      |     # or
      com   # match 'com'
      |     # or
      net   # match 'net'
      |     # or
      br    # match 'br'
    )       # end non-capture group
    (?!     # begin a negative lookahead
      \.br  # match '.br'
    )       # end negative lookahead
/x          # invoke free-spacing regex definition mode
```
This should work for your scenario:

```rb
expr = /^(.+\.(?:br|com|net))-[^']+(')$/
str = "email = 'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'"
str.gsub(expr, '\1\2')
```
8,359
50,598,438
I am tracing a Python script like this:

```
python -m trace --ignore-dir=$HOME/lib64:$HOME/lib:/usr -t bin/myscript.py
```

Some lines look like this:

```
--- modulename: __init__, funcname: getEffectiveLevel
__init__.py(1325): logger = self
__init__.py(1326): while logger:
__init__.py(1327): if logger.level:
__init__.py(1329): logger = logger.parent
__init__.py(1326): while logger:
__init__.py(1327): if logger.level:
__init__.py(1328): return logger.level
```

Unfortunately I have no clue where this code comes from. Is there a way to see the file name (including the path) of `getEffectiveLevel()`? Of course I could search through all installed Python code for a method with this name, but I would like to see the file path immediately. In this context **Python 2.7** is used.

> I am not tied to the standard library "trace". I could use a different library, if it provides the needed feature.
2018/05/30
[ "https://Stackoverflow.com/questions/50598438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633961/" ]
If the purpose is finding the full path, check the [hunter](https://python-hunter.readthedocs.io/en/latest/readme.html#id1) project; it even has support for [query-style](https://python-hunter.readthedocs.io/en/latest/cookbook.html) tracing.

```
# a modified example from the docs
# do check the documentation, it is easy to start with
from hunter import trace, Q, Debugger
from pdb import Pdb

# drop into a Pdb session when myscript.getEffectiveLevel() is called
trace(
    Q(module="myscript", function="getEffectiveLevel",
      kind="call", action=Debugger(klass=Pdb)))

import myscript
myscript.mainMethod()
```
Unfortunately there is no flag/command-line option to enable that. So the immediate (and probably correct) answer is: **No**.

If you're okay with messing with the built-in libraries you can easily make it possible by replacing the line that reads:

```
print (" --- modulename: %s, funcname: %s" % (modulename, code.co_name))
```

with:

```
print ("filename: %s, modulename: %s, funcname: %s" % (filename, modulename, code.co_name))
```

in the `trace.py` file of your Python installation. You can find the path to that file with:

```
>>> import trace
>>> trace.__file__
```

But I really don't want to suggest that modifying libraries that way is something I would recommend. Before you do this (if you decide you really want to do it), create a backup of the file and, if possible, restore it again after you're done.

A better way (although still messy) is to copy the above-mentioned `trace.py` file (for example into the current working directory) and modify the copy. Then you can run the modified version:

```
python path_to_modified_trace_file your_options_for_trace
```

Without the `-m` option and with the modified path, but otherwise identical to your original.
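If modifying or copying `trace.py` feels too invasive, a rough alternative is a custom `sys.settrace` hook, since every frame's code object already carries the file path in `co_filename`. This is only a minimal sketch (it records every Python call, with none of `trace`'s filtering), and the `demo` function is a made-up stand-in for your traced code:

```python
import sys

calls = []  # collected (file path, function name) pairs

def tracer(frame, event, arg):
    # 'call' fires once per Python function call; co_filename holds the
    # path of the file that defines the function being entered
    if event == "call":
        code = frame.f_code
        calls.append((code.co_filename, code.co_name))
    return tracer

def demo():
    return 42

sys.settrace(tracer)
demo()
sys.settrace(None)

for filename, funcname in calls:
    print("%s: %s" % (filename, funcname))
```

Each recorded entry includes the path of the defining file, which is exactly the information missing from `trace`'s default output.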
8,362
69,271,213
There are several ways in Python to generate a greyscale image from an RGB version. One of those is just to read an image as greyscale using OpenCV:

```
im = cv2.imread(img, 0)
```

where `0` equals `cv2.IMREAD_GRAYSCALE`. There are many different algorithms for this operation, [well explained here.](https://www.dynamsoft.com/blog/insights/image-processing/image-processing-101-color-space-conversion/) I'm wondering how OpenCV handles this task and which algorithm stands behind `cv2.IMREAD_GRAYSCALE`, but could find neither documentation nor a reference. Does someone have any idea? A paper reference would be great. Thanks in advance.

p.s. I'm working with jpg and png.
2021/09/21
[ "https://Stackoverflow.com/questions/69271213", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5152497/" ]
I think basically @Dan Mašek already answered the question in the comment section. I will try to summarize the findings for jpg files as an answer, and any improvements are welcome.

CMYK to Grayscale
-----------------

To convert a jpg file from CMYK, we have to look into [grfmt_jpeg.cpp](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L433-L458). There exist other files like this for different image codecs. Depending on the number of color channels, `cinfo` is assigned. For CMYK images `cinfo` is set to `4` and the function on [line 504](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L504), `icvCvt_CMYK2Gray_8u_C4C1R`, is called. This function can be found in [utils.cpp](https://github.com/opencv/opencv/blob/2558ab3de7cdd57c91935eb64755afb2afd05f00/modules/imgcodecs/src/utils.cpp):

```
void icvCvt_CMYK2Gray_8u_C4C1R( const uchar* cmyk, int cmyk_step,
                                uchar* gray, int gray_step, Size size )
{
    int i;
    for( ; size.height--; )
    {
        for( i = 0; i < size.width; i++, cmyk += 4 )
        {
            int c = cmyk[0], m = cmyk[1], y = cmyk[2], k = cmyk[3];
            c = k - ((255 - c)*k>>8);
            m = k - ((255 - m)*k>>8);
            y = k - ((255 - y)*k>>8);
            int t = descale( y*cB + m*cG + c*cR, SCALE );
            gray[i] = (uchar)t;
        }
        gray += gray_step;
        cmyk += cmyk_step - size.width*4;
    }
}
```

and uses fixed variables for the conversion:

```
#define SCALE 14
#define cR (int)(0.299*(1 << SCALE) + 0.5)
#define cG (int)(0.587*(1 << SCALE) + 0.5)
#define cB ((1 << SCALE) - cR - cG)
```

RGB/BGR to Grayscale
--------------------

If your image only contains three color channels, it seems that [libjpeg](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c) is used for the conversion. This can be seen in [line 717](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L717). (I am not 100% sure if this is the correct line.)

In [jdcolor.c](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c) it can be seen that there are definitions and standards for converting color channels, starting from [line 41](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c#L41). The most important part for your specific question is:

```
the conversion equations to be implemented are therefore

      R = Y + 1.402 * Cr
      G = Y - 0.344136286 * Cb - 0.714136286 * Cr
      B = Y + 1.772 * Cb
      Y = 0.299 * R + 0.587 * G + 0.114 * B
```

which relates to standards of the [ITU-R](https://en.wikipedia.org/wiki/ITU-R) and is used in many other sources I found. More detailed information can be found [here](https://en.wikipedia.org/wiki/Luma_(video)) and [here](https://stackoverflow.com/questions/687261/converting-rgb-to-grayscale-intensity). The second source, a Stack Overflow question, makes it clear that the conversion does not only depend on the pure RGB values but also on other parameters such as the gamma value. The standard OpenCV uses seems to be [Rec. 601](https://en.wikipedia.org/wiki/Rec._601).
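For reference, the `Y = 0.299 * R + 0.587 * G + 0.114 * B` equation above can be checked with a tiny sketch. This mimics only the Rec. 601 weighting, not OpenCV's actual fixed-point implementation, and it ignores gamma:

```python
def bgr_to_gray(b, g, r):
    """Grayscale value of one 8-bit BGR pixel using Rec. 601 weights."""
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))

print(bgr_to_gray(0, 0, 255))      # pure red   -> 76
print(bgr_to_gray(0, 255, 0))      # pure green -> 150
print(bgr_to_gray(255, 0, 0))      # pure blue  -> 29
print(bgr_to_gray(255, 255, 255))  # white      -> 255
```

Note how green contributes most to the result, matching the 0.587 coefficient in the equation above.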
in OpenCV [documentation](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/include/opencv2/imgcodecs.hpp) you can find: ``` IMREAD_GRAYSCALE = 0, //!< If set, always convert image to the single channel grayscale image (codec internal conversion). ``` Also > > When using IMREAD\_GRAYSCALE, the codec's internal grayscale conversion > will be used, if available. > Results may differ to the output of cvtColor() > > > So it depends on codec's internal grayscale conversion. **More Info:** from [OpenCV documentation](https://docs.opencv.org/3.4.13/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) > > When using IMREAD\_GRAYSCALE, the codec's internal grayscale conversion will be used, if available. Results may differ to the output of cvtColor() > On Microsoft Windows\* OS and MacOSX\*, the codecs shipped with an OpenCV image (libjpeg, libpng, libtiff, and libjasper) are used by default. So, OpenCV can always read JPEGs, PNGs, and TIFFs. On MacOSX, there is also an option to use native MacOSX image readers. But beware that currently these native image loaders give images with different pixel values because of the color management embedded into MacOSX. > On Linux\*, BSD flavors and other Unix-like open-source operating systems, OpenCV looks for codecs supplied with an OS image. Install the relevant packages (do not forget the development files, for example, "libjpeg-dev", in Debian\* and Ubuntu\*) to get the codec support or turn on the OPENCV\_BUILD\_3RDPARTY\_LIBS flag in CMake. > > >
8,364
30,196,585
I've been struggling for hours on a problem that is making me insane. I installed Python 2.7 with Cygwin and added Scipy, Numpy, Matplotlib (1.4.3) and Ipython. When I decided to run `ipython --pylab` I get the following error: ``` /usr/lib/python2.7/site-packages/matplotlib/transforms.py in <module>() 37 import numpy as np 38 from numpy import ma ----> 39 from matplotlib._path import (affine_transform, count_bboxes_overlapping_bbox, 40 update_path_extents) 41 from numpy.linalg import inv ImportError: No module named _path ``` I spent hours on the internet, looking for a solution but nothing worked. I did notice that I am missing \_path.so files in the matplotlib directory that everybody seems to have. Instead, I have two files: path.py and path.pyc. But I installed matplotlib directly from the official website using `pip install` and reinstalling it didn't make any difference. Does anyone have a little clue on what is going wrong? I would be incredibly grateful !!!
2015/05/12
[ "https://Stackoverflow.com/questions/30196585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4892337/" ]
For others having this problem: in my case, the solution was simple. The problem was caused by having the wrong matplotlib build installed, creating an error in finding the correct matplotlib path. In my case, I had installed matplotlib for a different version of Python. Simply update matplotlib so that it is compatible with your current version of Python:

```
pip install --upgrade matplotlib
```

As for the original post, I am unsure of what caused those bigger issues. Hope my tip can help anyone else stumbling upon this issue!
I doubt that most of you brought here by Google have the problem I had, but just in case: I got the above "ImportError: No module named \_path" (on Fedora 17) because I was trying to make use of matplotlib by just setting sys.path to point to where I had built the latest version (1.5.1 at the time). Don't do that. Once I ran "python setup.py install" (as root) to do a proper install (and got rid of my sys.path hack), the error was fixed.
8,365
46,368,931
Here is my main.py: ``` #!/usr/bin/env python3 from kivy.app import App from kivy.lang import Builder from kivy.metrics import dp from kivy.properties import ObjectProperty from kivy.uix.image import Image from kivy.uix.widget import Widget from kivy.uix.boxlayout import BoxLayout from kivymd.bottomsheet import MDListBottomSheet, MDGridBottomSheet from kivymd.button import MDIconButton from kivymd.date_picker import MDDatePicker from kivymd.dialog import MDDialog from kivymd.label import MDLabel from kivymd.list import ILeftBody, ILeftBodyTouch, IRightBodyTouch, BaseListItem from kivymd.material_resources import DEVICE_TYPE from kivymd.navigationdrawer import MDNavigationDrawer, NavigationDrawerHeaderBase from kivymd.selectioncontrols import MDCheckbox from kivymd.snackbar import Snackbar from kivymd.theming import ThemeManager from kivymd.time_picker import MDTimePicker Builder.load_file('main.kv') class BoxTopLevel(BoxLayout): def on_release_next_button(self): # self.ids['sm'].current="mainscreen" if not self.ids['username'].text: Snackbar(text="Please enter a username first").show() return 1 elif ' ' in self.ids['username'].text: Snackbar(text="Invalid username").show() return 1 elif '\\' in self.ids['username'].text: Snackbar(text="Invalid username").show() return 1 # elif '\\' in self.ids['username'].text: # Snackbar(text="No slashes please").show() if self.check_if_user_exists(): Snackbar(text="Welcome %s!" 
% (self.ids['username'].text)).show() self.ids['sm'].current='mainscreen' return 0 def check_if_user_exists(self): return True def set_previous_date(self, date_obj): self.previous_date = date_obj # self.root.ids.date_picker_label.text = str(date_obj) def show_date_picker(self): self.date_dialog = MDDatePicker(self.set_previous_date) self.date_dialog.open() def show_time_picker(self): self.time_dialog = MDTimePicker() self.time_dialog.open() def show_send_error_dialog(self): content = MDLabel(font_style='Body1', theme_text_color='Secondary', text="This is a dialog with a title and some text. That's pretty awesome right!", size_hint_y=None, valign='top') content.bind(texture_size=content.setter('size')) self.dialog = MDDialog(title="This is a test dialog", content=content, size_hint=(.8, None), height=dp(200), auto_dismiss=False) self.dialog.add_action_button("Dismiss", action=lambda *x: self.dialog.dismiss()) self.dialog.open() def stop_record(self): print("[INFO] Recording Stopped") Snackbar(text="Recording stopped").show() self.stop_record_button = self.ids['stop_record_button'] self.stop_record_button.disabled = True self.ids['record_button'].disabled = False rec_file_path = '' def record(self): print("[INFO] Recording") Snackbar(text="Recording started").show() self.record_button = self.ids['record_button'] self.record_button.disabled = True self.ids['stop_record_button'].disabled = False def export(self, username, date, time, *args, **kwargs): username, date, time = str(username), str(date), str(time) file_path = '/home/cocoa/KES/' file_name = username+'_'+date+'_'+time+'.csv' csv_string = username+','+date+','+time for arg in args: if type(arg) == str: csv_string += ','+arg f = open(file_path+file_name, 'w') f.write(csv_string+'\n') f.close return True, file_path+file_name def upload(self, csv_file_path, recording_file_path): print(csv_file_path, recording_file_path) def submit(self): try: date = 
str(self.date_dialog.day)+'-'+str(self.date_dialog.month)+'-'+str(self.date_dialog.year) print(date) if self.export(self.ids['username'].text, date, self.time_dialog.time, 'answer1', 'answer2')[0]: Snackbar(text="Woo hoo! It worked!").show() else: self.show_send_error_dialog() except: Snackbar(text="Please enter the date and time and try again").show() class SamplerApp(App): theme_cls = ThemeManager() def build(self, *args, **kwargs): return BoxTopLevel() SamplerApp().run() ``` And here is my main.kv: ``` #:import Toolbar kivymd.toolbar.Toolbar #:import ThemeManager kivymd.theming.ThemeManager #:import MDNavigationDrawer kivymd.navigationdrawer.MDNavigationDrawer #:import NavigationLayout kivymd.navigationdrawer.NavigationLayout #:import NavigationDrawerDivider kivymd.navigationdrawer.NavigationDrawerDivider #:import NavigationDrawerToolbar kivymd.navigationdrawer.NavigationDrawerToolbar #:import NavigationDrawerSubheader kivymd.navigationdrawer.NavigationDrawerSubheader #:import MDCheckbox kivymd.selectioncontrols.MDCheckbox #:import MDSwitch kivymd.selectioncontrols.MDSwitch #:import MDList kivymd.list.MDList #:import OneLineListItem kivymd.list.OneLineListItem #:import TwoLineListItem kivymd.list.TwoLineListItem #:import ThreeLineListItem kivymd.list.ThreeLineListItem #:import OneLineAvatarListItem kivymd.list.OneLineAvatarListItem #:import OneLineIconListItem kivymd.list.OneLineIconListItem #:import OneLineAvatarIconListItem kivymd.list.OneLineAvatarIconListItem #:import MDTextField kivymd.textfields.MDTextField #:import MDSpinner kivymd.spinner.MDSpinner #:import MDCard kivymd.card.MDCard #:import MDSeparator kivymd.card.MDSeparator #:import MDDropdownMenu kivymd.menu.MDDropdownMenu #:import get_color_from_hex kivy.utils.get_color_from_hex #:import colors kivymd.color_definitions.colors #:import SmartTile kivymd.grid.SmartTile #:import MDSlider kivymd.slider.MDSlider #:import MDTabbedPanel kivymd.tabs.MDTabbedPanel #:import MDTab kivymd.tabs.MDTab #:import 
MDProgressBar kivymd.progressbar.MDProgressBar #:import MDAccordion kivymd.accordion.MDAccordion #:import MDAccordionItem kivymd.accordion.MDAccordionItem #:import MDAccordionSubItem kivymd.accordion.MDAccordionSubItem #:import MDThemePicker kivymd.theme_picker.MDThemePicker #:import MDBottomNavigation kivymd.tabs.MDBottomNavigation #:import MDBottomNavigationItem kivymd.tabs.MDBottomNavigationItem <BoxTopLevel>: orientation: 'vertical' Toolbar: id: toolbar title: 'Sampler' md_bg_color: app.theme_cls.primary_color background_palette: 'Primary' background_hue: '500' #left_action_items: [['menu', lambda x: app.root.toggle_nav_drawer()]] #right_action_items: [['dots-vertical', lambda x: app.root.toggle_nav_drawer()]] ScreenManager: id: sm Screen: name: "loginscreen" BoxLayout: spacing: 20 padding: 20 orientation: 'vertical' Widget: BoxLayout: orientation: 'vertical' padding: 10 spacing: 10 MDTextField: id: username hint_text: "Please enter your unique username" MDCard: size_hint_x: 1 BoxLayout: padding: 10 spacing: 10 orientation: 'vertical' MDLabel: text: 'Please don\'t share this username' theme_text_color: 'Secondary' font_style: "Title" size_hint_y: None height: dp(36) MDSeparator: height: dp(1) MDLabel: text: "This application was developed in a hurry, So I didn't have the time to implement a proper login system. This system is temporary And I will impliment proper logins at later stages of development" theme_text_color: "Primary" MDRaisedButton: size_hint_x: 1 text: "Next ->" on_release: root.on_release_next_button() Screen: name: "mainscreen" MDBottomNavigation: id: bottom_navigation_demo MDBottomNavigationItem: name: 'record_page' text: "Record" icon: "microphone" BoxLayout: orientation: 'vertical' padding: 10 spacing: 10 MDCard: size_hint: 1, 0.2 BoxLayout: padding: 10 spacing: 10 orientation: 'vertical' MDLabel: text: 'Hello!' 
theme_text_color: 'Secondary' font_style: "Title" size_hint_y: None height: dp(36) MDSeparator: height: dp(1) MDLabel: text: "Since the buzzer went off, now is the time when you freely record your thought through this app. I want you to be as free as possible, without having to worry about whether or not anyone else will find any meaning in what you're saying. You can go on for as long as you like, but please try and go on for three minutes. You don't have to be talking throughout, it's okay to fill the time with silence if you can't freely associate in that moment. There isn't any right or wrong here, it's not possible for there to be any right or wrong here. Do log in your stats before you start here:" theme_text_color: "Primary" Widget: size_hint_y: 0.02 BoxLayout: padding: 10 spacing: 10 MDRaisedButton: id: record_button text: "Start Recording" on_press: root.record() MDRaisedButton: id: stop_record_button text: "Stop Recording" on_press: root.stop_record() disabled: True MDBottomNavigationItem: name: 'questions' text: "Questions" icon: "help" GridLayout: rows: 7 cols: 1 padding: dp(48) spacing: 10 MDTextField: id: location multiline: True hint_text: "Where are you?" MDTextField: id: task multiline: True hint_text: "What were you doing?" MDTextField: id: person_with multiline: True hint_text: "Who are you with" MDTextField: id: special_circumstances multiline: True hint_text: "Are there any special circumstances? (Inebriated, very sad, something big happened)" MDRaisedButton: id: date size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.show_date_picker() text: "What date is it?" MDRaisedButton: text: "What time is it?" size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.show_time_picker() MDRaisedButton: id: submit_button disabled: False text: "Submit!" 
size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.submit() MDBottomNavigationItem: name: 'info' text: "Info" icon: "information" GridLayout: spacing: 20 padding: 20 rows: 4 cols: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 ``` I copied a lot of the kv code from the kitchen sink that kivymd provides. I found another answer on Stack Overflow but didn't quite understand what was causing the error. And since my code seems worse, I'd really appreciate it if someone could explain what exactly is causing the error and why. In my python file, I've just use iteration once in the `export` function. Also, here is the output of the code: ``` [Command: /usr/bin/env -u /home/cocoa/KES/main.py] [INFO ] [Logger ] Record log in /home/cocoa/.kivy/logs/kivy_17-09-22_83.txt [INFO ] [Kivy ] v1.10.0 [INFO ] [Python ] v3.6.2 (default, Jul 20 2017, 03:52:27) [GCC 7.1.1 20170630] [INFO ] [Factory ] 194 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_gif (img_pil, img_ffpyplayer ignored) [INFO ] [KivyMD ] KivyMD version: 0.1.2 [INFO ] [Text ] Provider: sdl2 [INFO ] [OSC ] using <multiprocessing> for socket [INFO ] [Window ] Provider: sdl2(['window_egl_rpi'] ignored) [INFO ] [GL ] Using the "OpenGL" graphics system [INFO ] [GL ] Backend used <gl> [INFO ] [GL ] OpenGL version <b'4.5.0 NVIDIA 384.69'> [INFO ] [GL ] OpenGL vendor <b'NVIDIA Corporation'> [INFO ] [GL ] OpenGL renderer <b'GeForce GT 705/PCIe/SSE2'> [INFO ] [GL ] OpenGL parsed version: 4, 5 [INFO ] [GL ] Shading version <b'4.50 NVIDIA'> [INFO ] [GL ] Texture max size <16384> [INFO ] [GL ] Texture max units <32> [INFO ] [Window ] auto add sdl2 input provider [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked [INFO ] [GL ] NPOT texture support is available [INFO ] [Clipboard ] Provider: sdl2(['clipboard_dbusklipper', 'clipboard_gtk3', 'clipboard_xclip', 'clipboard_xsel'] ignored) [CRITICAL] 
[Cutbuffer ] Unable to find any valuable Cutbuffer provider. xclip - FileNotFoundError: [Errno 2] No such file or directory: 'xclip' File "/usr/lib/python3.6/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/usr/lib/python3.6/site-packages/kivy/core/clipboard/clipboard_xclip.py", line 17, in <module> p = subprocess.Popen(['xclip', '-version'], stdout=subprocess.PIPE) File "/usr/lib/python3.6/subprocess.py", line 707, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child raise child_exception_type(errno_num, err_msg) xsel - FileNotFoundError: [Errno 2] No such file or directory: 'xsel' File "/usr/lib/python3.6/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/usr/lib/python3.6/site-packages/kivy/core/clipboard/clipboard_xsel.py", line 16, in <module> p = subprocess.Popen(['xsel'], stdout=subprocess.PIPE) File "/usr/lib/python3.6/subprocess.py", line 707, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child raise child_exception_type(errno_num, err_msg) [WARNING] [MDBottomNavigation] 50.0dp is less than the minimum size of 80dp for a MDBottomNavigationItem. We must now expand to 168dp. [WARNING] [MDBottomNavigation] 33.333333333333336dp is less than the minimum size of 80dp for a MDBottomNavigationItem. We must now expand to 168dp. [INFO ] [Base ] Start application main loop [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. 
Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. 
Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [INFO ] [Base ] Leaving application in progress... [Finished in 7.724s] ```
2017/09/22
[ "https://Stackoverflow.com/questions/46368931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4085793/" ]
This works for me. Try not to change MDRaisedButton's `size_hint` to 1; that is what raises this Clock error. My suggestion is not to change any KivyMD button's `size_hint` (it is `None` by default); instead, change the size in dp.
This bug still persists in `MDRaisedButton` of KivyMD. A simple workaround is to use `size_hint` instead of `size_hint_x`. For example, in your case, replace

```
MDRaisedButton:
    size_hint_x: 1
```

with

```
MDRaisedButton:
    size_hint: 1., None
```
8,367
14,206,637
I am really new to the use of Python and the associated packages that can be installed. As a biologist I am looking for a lot of new packages that would help me model species systems, ecological change etc.. and after a lot of "Google-ing" I came across scikit-learn. However, I am having trouble installing it. And I will take this moment now to apologise for the length of this post. I am using 64bit Python 3.3 and have the associated NumPy (MKL 1.7.0) and SciPy. I installed distribute which worked fine and allows me to use easy\_install. So to install scikit-learn, I tried using the cmd prompt (Windows) run in administrator mode, and then also through Python command line. I placed the downloaded and extracted tar.gz file in the Lib\site-packages folder. When I run the command `easy_install scikit-learn` in cmd prompt. Then this is the following output: ``` C:\Python33\Lib\site-packages>easy_install -U scikit-learn Searching for scikit-learn Reading http://pypi.python.org/simple/scikit-learn/ Reading http://scikit-learn.org Reading http://sourceforge.net/projects/scikit-learn/files/ Reading http://scikit-learn.sourceforge.net Best match: scikit-learn 0.12.1 Downloading http://pypi.python.org/packages/source/s/scikit-learn/scikit-learn-0 .12.1.tar.gz#md5=7e8b3434f9e8198b82dc3774f8bc9394 Processing scikit-learn-0.12.1.tar.gz Writing c:\users\nuvraj~1\appdata\local\temp\easy_install-kvr2q0\scikit-learn-0. 
12.1\setup.cfg Running scikit-learn-0.12.1\setup.py -q bdist_egg --dist-dir c:\users\nuvraj~1\a ppdata\local\temp\easy_install-kvr2q0\scikit-learn-0.12.1\egg-dist-tmp-l618ie Traceback (most recent call last): File "C:\Python33\Scripts\easy_install-script.py", line 9, in <module> load_entry_point('distribute==0.6.33', 'console_scripts', 'easy_install')() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1937, in main with_ei_usage(lambda: File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1918, in with_ei_usage return f() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1941, in <lambda> distclass=DistributionWithoutHelpCommands, **kw File "C:\Python33\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Python33\lib\distutils\dist.py", line 917, in run_commands self.run_command(cmd) File "C:\Python33\lib\distutils\dist.py", line 936, in run_command cmd_obj.run() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 358, in run self.easy_install(spec, not self.no_deps) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 598, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 628, in install_item dists = self.install_eggs(spec, download, tmpdir) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 823, in install_eggs return self.build_and_install(setup_script, setup_base) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1103, in build_and_install self.run_setup(setup_script, setup_base, args) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1089, in run_setup run_setup(setup_script, args) File "C:\Python33\lib\site-packages\setuptools\sandbox.py", line 34, in run_se tup lambda: exec(compile(open( File 
"C:\Python33\lib\site-packages\setuptools\sandbox.py", line 82, in run return func() File "C:\Python33\lib\site-packages\setuptools\sandbox.py", line 37, in <lambd a> {'__file__':setup_script, '__name__':'__main__'}) File "setup.py", line 33, in <module> File "c:\users\nuvraj~1\appdata\local\temp\easy_install-kvr2q0\scikit-learn-0. 12.1\sklearn\__init__.py", line 86 print "I: Seeding RNGs with %r" % _random_seed ^ SyntaxError: invalid syntax C:\Python33\Lib\site-packages> ``` So the little ^ seems to be pointing at the " after RNGS with %r. Which from what I can tell is in the '*init*' file from the sklearn folder found in the .tar.gz file. I also get the same result when running it in the Python GUI and command line. How can I install scikit-learn with Python 3.3? Is there some way of building it or editing the file to get past this invalid syntax error? Any help would be greatly appreciated. And I am very sorry for it being such a long post. I was just trying to get all the details in there. Thanks Simon
2013/01/08
[ "https://Stackoverflow.com/questions/14206637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956404/" ]
scikit-learn does not support Python 3 yet. For now you need Python 2.7. Proper support for Python 3 is expected for the 0.14 release scheduled for Q2-2013.
I am no expert, but in my understanding the print statement in Python 3.\* is now a function, called like: print(). So, a quick solution in this case is to change

```
print "I: Seeding RNGs with %r" % _random_seed
```

to

```
print("I: Seeding RNGs with %r" % _random_seed)
```
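For reference, a tiny sketch of the difference (the seed value here is a placeholder, since `_random_seed` is computed inside scikit-learn's own code):

```python
_random_seed = 42  # placeholder for the value scikit-learn would compute

# Python 2 syntax -- a SyntaxError on Python 3:
#   print "I: Seeding RNGs with %r" % _random_seed

# Python 3 syntax -- print is a function call:
message = "I: Seeding RNGs with %r" % _random_seed
print(message)
```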
8,368
9,331,000
I'm trying to remove large blocks of text from a file using Python. Each block of text begins with `/translation="SOMETEXT"` and ends with the second quote. Can anyone give me some advice on how to accomplish this? Thank you
2012/02/17
[ "https://Stackoverflow.com/questions/9331000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1216584/" ]
You can use re.sub like this (a raw string avoids the backslash escaping, and `re.DOTALL` lets the match continue across line breaks if the quoted block spans several lines):

```
import re
s = re.sub(r'/translation=".*?"', "", s, flags=re.DOTALL)
```
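To illustrate on a made-up snippet (the record text below is invented; `re.DOTALL` is what allows the quoted block to span more than one line):

```python
import re

# Invented sample modeled on the question's description: a block starting
# at /translation=" and ending at the next quote, possibly spanning lines.
text = 'gene="abc"\n/translation="MKVLITG\nLLFVSA"\nORIGIN'
cleaned = re.sub(r'/translation=".*?"', '', text, flags=re.DOTALL)
print(cleaned)
```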
If performance doesn't matter, you could do something like this. Regular expressions would probably be faster, but this is simpler.

```
def remtxt(s, startstr, endstr):
    while startstr in s:
        startpos = s.index(startstr)
        try:
            endpos = s.index(endstr, startpos + len(startstr)) + len(endstr)
        except ValueError:
            # no closing marker found: give back what we have so far
            return s
        s = s[:startpos] + s[endpos:]
    return s

new_string = remtxt(my_string, '/translation="', '"')
```
8,369
71,155,282
Below is the html tag. I want to return the value in the span as an integer in python selenium. Can you help me out?

```html
<span class="pendingCount">
  <img src="/static/media/sandPot.a436d753.svg" alt="sandPot">
  <span>2</span>
</span>
```
2022/02/17
[ "https://Stackoverflow.com/questions/71155282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14902563/" ]
You could use the dates for the x-axis, the 'constant' column for the y-axis, and the Cluster id for the coloring. You can create a custom legend using a list of colored rectangles. ```py import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator import pandas as pd import numpy as np N = 100 df = pd.DataFrame({'Date': pd.date_range('2020-08-07', periods=N, freq='D'), 'order': np.arange(1, N + 1), 'ClusterNo2': np.random.randint(0, 4, N).astype(float), 'constant': 1}) df['ClusterNo2'] = df['ClusterNo2'].astype(int) # convert to integers fig, ax = plt.subplots(figsize=(15, 3)) num_clusters = df['ClusterNo2'].max() + 1 colors = plt.cm.Set2.colors ax.bar(x=range(len(df)), height=df['constant'], width=1, color=[colors[i] for i in df['ClusterNo2']], edgecolor='none') ax.set_xticks(range(len(df))) labels = ['' if i % 3 != 0 else day.strftime('%d\n%b %Y') if i == 0 or day.day <= 3 else day.strftime('%d') for i, day in enumerate(df['Date'])] ax.set_xticklabels(labels) ax.margins(x=0, y=0) ax.yaxis.set_major_locator(MaxNLocator(integer=True)) legend_handles = [plt.Rectangle((0, 0), 0, 0, color=colors[i], label=f'{i}') for i in range(num_clusters)] ax.legend(handles=legend_handles, title='Clusters', bbox_to_anchor=(1.01, 1.01), loc='upper left') fig.tight_layout() plt.show() ``` [![bar plot for clusters](https://i.stack.imgur.com/1034J.png)](https://i.stack.imgur.com/1034J.png)
You could just plot a normal bar graph, with 1 bar corresponding to 1 day. If you make the width also 1, it will look as if the patches are contiguous. [![enter image description here](https://i.stack.imgur.com/jtFlV.png)](https://i.stack.imgur.com/jtFlV.png) ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import BoundaryNorm # simulate data total_datapoints = 16 total_clusters = 4 order = np.arange(total_datapoints) clusters = np.random.randint(0, total_clusters, size=total_datapoints) # map clusters to colors cmap = plt.cm.tab10 bounds = np.arange(total_clusters + 1) norm = BoundaryNorm(bounds, cmap.N) colors = [cmap(norm(cluster)) for cluster in clusters] # plot fig, ax = plt.subplots() ax.bar(order, np.ones_like(order), width=1, color=colors, align='edge') # xticks change_points = np.where(np.diff(clusters) != 0)[0] + 1 change_points = np.unique([0] + change_points.tolist() + [total_datapoints]) ax.set_xticks(change_points) # annotate clusters for ii, dx in enumerate(np.diff(change_points)): xx = change_points[ii] + dx/2 ax.text(xx, 0.5, str(clusters[int(xx)]), ha='center', va='center') ax.set_xlabel('Time (days)') plt.show() ```
8,370
48,655,638
I think what I am trying to do is pretty much like [this github issue in the zeep repo](https://github.com/mvantellingen/python-zeep/issues/412) --- but sadly there is no response to that issue yet. I researched suds, installed it and tried it -- I could not even get parameter sending to work, and zeep seems better maintained anyway. Edit 1: To be clear, I am not talking about [this](http://docs.python-zeep.org/en/latest/client.html#creating-the-raw-xml-documents)
2018/02/07
[ "https://Stackoverflow.com/questions/48655638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2379736/" ]
You can use a Plugin for editing the xml as a plain string. I used this plugin for keeping the characters '<' and '>' in a CDATA element. (Note: zeep envelopes are lxml elements, so use lxml's etree here -- `strip_cdata` is an lxml parser option, and `encoding="unicode"` makes `tostring` return a str so the replacements work.)

```
from lxml import etree
from zeep import Plugin

class my_plugin(Plugin):
    def egress(self, envelope, http_headers, operation, binding_options):
        xml_string = etree.tostring(envelope, encoding="unicode")
        xml_string = xml_string.replace("&lt;", "<")
        xml_string = xml_string.replace("&gt;", ">")
        parser = etree.XMLParser(strip_cdata=False)
        new_envelope = etree.XML(xml_string, parser=parser)
        return new_envelope, http_headers
```

Then just pass the plugin to the client:

```
client = Client(wsdl='url', transport=transport, plugins=[my_plugin()])
```

Take a look at the docs: <http://docs.python-zeep.org/en/master/plugins.html>
On Python 3.9, @David Ortiz's answer didn't work for me; maybe something has changed. The `etree_to_string` call was failing to convert the XML to a string. What worked for me: instead of a plugin, I created a custom transport that replaced the escaped tags with the correct characters, just like David's code, before the post was sent.

```
import zeep
from zeep.transports import Transport
from xml.etree import ElementTree

class CustomTransport(Transport):
    def post_xml(self, address, envelope, headers):
        message = ElementTree.tostring(envelope, encoding="unicode")
        message = message.replace("&lt;", "<")
        message = message.replace("&gt;", ">")
        return self.post(address, message, headers)

client = zeep.Client('wsdl_url', transport=CustomTransport())
```
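The string-level trick both answers rely on can be seen without zeep at all: serializing keeps the markup intact, and the replacements simply undo the entity escaping (the sample markup below is invented):

```python
# Invented example: restore escaped angle brackets so a CDATA section
# survives as real markup rather than as &lt;![CDATA[...]]&gt; text.
escaped = "<payload>&lt;![CDATA[a &lt; b]]&gt;</payload>"
restored = escaped.replace("&lt;", "<").replace("&gt;", ">")
print(restored)
```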
8,371
56,316,244
I have some strange behavior on python 3.7 with a nested list comprehension that involves a generator.

**This works:**

```
i = range(20)
n = [1, 2, 3]
result = [min(x + y for x in i) for y in n]
```

It does **not work** if `i` is a generator:

```
i = (p for p in range(20))
n = [1, 2, 3]
result = [min(x + y for x in i) for y in n]
```

This raises a `ValueError: min() arg is an empty sequence`. Now even if the generator `i` is wrapped with `list` it still creates the same error:

```
i = (p for p in range(20))
n = [1, 2, 3]
result = [min(x + y for x in list(i)) for y in n]
```

Is this a python bug or is it expected behavior? If it is expected behavior, can you explain why this does not work?
2019/05/26
[ "https://Stackoverflow.com/questions/56316244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51627/" ]
In both of your last examples, you try to iterate on the generator again after it got exhausted. In your last example, `list(i)` is evaluated again for each value of `y`, so `i` will be exhausted after the first run. You have to make a list of the values it yields once before, as in:

```
i = (p for p in range(20))
n = [1, 2, 3]
list_i = list(i)
result = [min(x + y for x in list_i) for y in n]
```
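The exhaustion itself is easy to see in isolation:

```python
g = (p for p in range(3))
first = list(g)   # consumes the generator
second = list(g)  # the generator is now exhausted, so this is empty
print(first, second)
```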
The generator is emptied after the first for loop, whether it is `for x in i` or `for x in list(i)`. Instead you need to convert the generator to a list (which essentially iterates over the generator and empties it) beforehand and use that list. Note that this essentially defeats the purpose of a generator, since now this becomes the same as the first approach:

```
In [14]: list(range(20)) == list(p for p in range(20))
Out[14]: True
```

Hence the updated code will be

```
# Create generator and convert to list
i = list(p for p in range(20))
n = [1, 2, 3]
# Use that list in the list comprehension
result = [min(x + y for x in i) for y in n]
print(result)
```

The output will be

```
[1, 2, 3]
```

Hence the better approach is to stick with the first approach itself, or you can have the generator inline, which, again, is the same as the first approach with range:

```
n = [1, 2, 3]
result = [min(x + y for x in (p for p in range(20))) for y in n]
print(result)  # [1, 2, 3]
```
8,372
70,969,920
I am looping over a list of dictionaries and I have to drop/ignore one or more keys of each dictionary in the list and write it to a MongoDB. What is the efficient pythonic way of doing this?

**Example:**

```
employees = [
    {'name': "Tom", 'age': 10, 'salary': 10000, 'floor': 10},
    {'name': "Mark", 'age': 5, 'salary': 12000, 'floor': 11},
    {'name': "Pam", 'age': 7, 'salary': 9500, 'floor': 9}
]
```

Let's say I want to drop key = 'floor' or keys = ['floor', 'salary']. Currently I am using `del d['floor']` inside the loop to delete the key and `my_collection.insert_one()` to simply write the dictionary into my MongoDB.

**My code:**

```
for d in employees:
    del d['floor']
    my_collection.insert_one(d)
```
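For what it's worth, one common pattern is to build a trimmed copy of each dict instead of mutating it, so the original list stays intact (the `my_collection.insert_one` call is omitted here since it needs a live MongoDB):

```python
employees = [
    {'name': "Tom", 'age': 10, 'salary': 10000, 'floor': 10},
    {'name': "Mark", 'age': 5, 'salary': 12000, 'floor': 11},
    {'name': "Pam", 'age': 7, 'salary': 9500, 'floor': 9},
]

drop = {'floor', 'salary'}  # one or more keys to ignore
trimmed = [{k: v for k, v in d.items() if k not in drop} for d in employees]
print(trimmed)
```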
2022/02/03
[ "https://Stackoverflow.com/questions/70969920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14066217/" ]
In the context you ask about you can think that closure is a function that references to some variables that are defined in its outer scope (for other cases see the answer by @phipsgabler). Here is a minimal example: ``` julia> function est_mean(x) function fun(m) return m - mean(x) end val = find_zero(fun, 0.0) @show val, mean(x) return fun # explicitly return the inner function to inspect it end est_mean (generic function with 1 method) julia> x = rand(10) 10-element Vector{Float64}: 0.6699650145575134 0.8208379672036165 0.4299946498764684 0.1321653923513042 0.5552854476018734 0.8729613266067378 0.5423030870674236 0.15751882823315777 0.4227087678654101 0.8594042895489912 julia> fun = est_mean(x) (val, mean(x)) = (0.5463144770912497, 0.5463144770912497) fun (generic function with 1 method) julia> dump(fun) fun (function of type var"#fun#3"{Vector{Float64}}) x: Array{Float64}((10,)) [0.6699650145575134, 0.8208379672036165, 0.4299946498764684, 0.1321653923513042, 0.5552854476018734, 0.8729613266067378, 0.5423030870674236, 0.15751882823315777, 0.4227087678654101, 0.8594042895489912] julia> fun.x 10-element Vector{Float64}: 0.6699650145575134 0.8208379672036165 0.4299946498764684 0.1321653923513042 0.5552854476018734 0.8729613266067378 0.5423030870674236 0.15751882823315777 0.4227087678654101 0.8594042895489912 julia> fun(10) 9.453685522908751 ``` As you can see `fun` holds the reference to the `x` variable from the outer scope (in this case the scope introduced by the `est_mean` function). Moreover, I have shown you that you can even retrieve this value from outside of `fun` as its field (this is typically not recommended but I show this to you to prove that indeed `fun` stores a reference to the object `x` defined in its outer scope; it needs to store this reference as the variabe `x` is used inside the body of the `fun` function). 
In the context of estimation, as you have noted, this is useful because `find_zero` in my case requires the function to take only one argument - the `m` variable in my case, while you want the return value to depend both on passed `m` and on `x`. What is important that once `x` is captured in the `fun` closure it does not have to be in current scope. For instance when I call `fun(10)` the code executes correctly although we are outside of the scope of function `est_mean`. But this is not a problem because `fun` function has captured `x` variable. Let me give one more example: ``` julia> function gen() x = [] return v -> push!(x, v) end gen (generic function with 1 method) julia> fun2 = gen() #4 (generic function with 1 method) julia> fun2.x Any[] julia> fun2(1) 1-element Vector{Any}: 1 julia> fun2.x 1-element Vector{Any}: 1 julia> fun2(100) 2-element Vector{Any}: 1 100 julia> fun2.x 2-element Vector{Any}: 1 100 ``` Here you see that the `x` variable defined within `gen` function is captured by the anonymous function `v -> push!(x, v)` that I bind to the `fun2` variable. Later when you call `fun2` the object bound to the `x` variable gets updated (and can be referenced to) although it was defined in the `gen` function scope. Although we left the `gen` scope the object bound to the `x` variable outlives the scope because it is captured by the anonymous function we defined. If something is unclear please comment.
I'm going to complement Bogumił's answer by showing you what he has deliberately left out: a closure does not have to be a function in the strict sense. In fact, you could write them on your own, if nested functions were disallowed in Julia: ``` struct LikelihoodClosure X y end (l::LikelihoodClosure)(β) = -log_likelihood(l.X, l.y, β) make_closures(X, y) = LikelihoodClosure(X, y) nll = make_closures(X, y) ``` Now you are allowed to call `nll(β₀)`, which is an object of type `LikelihoodClosure` with a defined application method. And that's really all to it. Anonymous functions are just syntactic sugar for creating instances of objects storing the "fixed variables" from a context. ``` julia> f(x) = y -> x + y f (generic function with 1 method) julia> f(1) # that's the closure value #1 (generic function with 1 method) julia> typeof(f(1)) # that's the closure type var"#1#2"{Int64} julia> f(1).x 1 julia> propertynames(f(1)) # behold, it has a field `x`! (:x,) ``` And we can even cheat a bit and construct an instance: ``` julia> eval(Expr(:new, var"#1#2"{Int64}, 22)) #1 (generic function with 1 method) julia> eval(Expr(:new, var"#1#2"{Int64}, 22))(2) 24 ```
8,375
21,811,851
This question has been troubling me for some days now and I've tried asking in many places for advice, but it seems that nobody can answer it clearly or even provide a reference to an answer. I've also tried searching for tutorials, but I just cannot find any type of tutorial that explains how you would use a reusable third-party django app (most tutorials explain how to write them, none explain how to use them). Also, I've taken a look here: [How to re-use a reusable app in Django](https://stackoverflow.com/questions/557171/how-to-re-use-a-reusable-app-in-django) - it doesn't explain how to actually use it IN a project itself and here: [How to bind multiple reusable Django apps together?](https://stackoverflow.com/questions/11579232/how-to-bind-multiple-reusable-django-apps-together) - the answer by aquaplanet kind of makes sense, but I thought I would ask this question to solve the mental block I am facing in trying to understand this. --- **In order to best explain this, let me do so by example (note, it is not something I am actually building).** I am creating a project that acts like Reddit. I will have users, links and voting/points. Based on this crude example, I will want to reuse 3 (arbitrary) third-party apps: user, voting/points and links. I decide to use each of them as any other python package (meaning that they will be treated as a package and none of their code should be touched) [would this method actually work? Or do you have to be able to edit third-party apps to build a project??) With these apps now within my project, I will use a main app to handle all the template stuff (meaning everything I see on the frontend will be in a single app). I will then either use that same main app for custom logic (in views.py) or I will break up that logic among different apps (but will still use a single frontend-only app). From the 3 paragraphs above, is this structure applicable (or can it work) ? 
--- Now lets say that this structure **is applicable** and I am using a single main app for the frontend and custom logic. What would I write in models.py? How would I integrate things from the 3 reusable apps into the *main* models.py file? How would I reference the reusable apps in views.py? Lets take the example of contrib.auth With this built-in app, for logging out I would write: ```py from django.contrib.auth import logout from django.contrib.auth.decorators import login_required from django.shortcuts import redirect @login_required def user_logout(request): logout(request) return redirect('/home/') ``` Although the above code is simple, is that basically how things would be done with any reusable app? My question is very long, but I think that this reusable app issue is something a lot of developers aren't quite clear about themselves and maybe this answer will help a lot of others who have heard about the promises of reusable apps, but fail to understand how to *actually* use them.
2014/02/16
[ "https://Stackoverflow.com/questions/21811851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1663535/" ]
TL;DR:
------

Nope & it depends...

Some (Very) Common Reusable Apps
--------------------------------

* [django.contrib.admin](https://docs.djangoproject.com/en/dev/ref/contrib/admin/)
* [django.contrib.auth](https://docs.djangoproject.com/en/dev/ref/contrib/auth/)
* [django.contrib.staticfiles](https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/)

... those are all reusable Django apps that happen to be shipped with Django (most of them were not, at some point in time).

Ok, some other reusable apps that don't ship with Django:

* [django-rest-framework](http://www.django-rest-framework.org/)
* [django-registration](https://github.com/ubernostrum/django-registration)
* [South](http://south.aeracode.org/)

Those are all truly reusable apps, and nothing less. There are very many more apps like that.

How do they do it?
------------------

To me your question looks more like "how do I build reusable apps" than "how to use them". Actually using them is very different from app to app, because they do very different things. There is only one rule: [RTFM](http://en.wikipedia.org/wiki/RTFM). No way around that either.

Often, they rely on one or more of the following:

* additional value(s) in `settings.py`
* addition (usually one `include` statement) to `urls.py`
* subclassing and/or mixins for Models, Forms, Fields, Views etc.
* template tags and/or filters
* management commands
* ...

Those are all powerful ways through which your app can **provide** functionality to other apps. There is no recipe (AFAIK) to make a reusable app, because there are so many different scenarios to consider. It all depends on what exactly your app should do.

Reusable apps **provide** functionalities
-----------------------------------------

I'd argue that it's important not to think of reusable apps as "working together" with other apps, but instead to recognize that they "provide functionality."
The details of the functionality provided should dictate the way the target developer is supposed to use your library.

Not everything should be reusable
---------------------------------

Obviously enough, even though many apps can "in principle" be reusable, it often makes little sense to do so, because it is way faster to clump things together (and make them just "work together").
I'm not sure why you think you need a main app for the "frontend" stuff. The point of a reusable app is that it takes care of everything, you just add (usually) a single URL to include the urls.py of the app, plus your own templates and styling as required. And you certainly don't need to wrap the app's views in your own views, unless you specifically want to override some functionality. I don't understand at all your question about models. There's no such thing as a "main" models file, and using a reusable app's models is just the same as using models from any of your own apps. Normally you would not edit a third-party app, that would make it very hard to integrate updates. Just install the app in your virtualenv (you are using virtualenv, of course!) with pip, which will put it in the lib directory, and you can reference it just like any other app. Make sure you add it to INSTALLED\_APPS.
8,376
48,642,572
I'm trying to port a custom class from Python 2 to Python 3. I can't find the right syntax to port the iterator for the class. Here is a MVCE of the real class and my attempts to solve this so far: Working Python 2 code: ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.iteritems() temp = Temp() for thing in temp: print(thing) ``` In the above code iteritems() breaks in Python 3. According to [this](https://stackoverflow.com/q/13998492/4490400) highly voted answer, "`dict.items` now does the thing `dict.iteritems` did in python 2". So I tried that next: ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.items() ``` The above code yields "`TypeError: iter() returned non-iterator of type 'dict_items'`" According to [this](https://stackoverflow.com/a/24377/4490400) answer, Python 3 requires iterable objects to provide a next() method in addition to the iter method. Well, a dictionary is also iterable, so in my use case I should be able to just pass dictionary's next and iter methods, right? ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.__iter__ def next(self): return self.d.next ``` This time it's giving me "`TypeError: iter() returned non-iterator of type 'method-wrapper'`". What am I missing here?
2018/02/06
[ "https://Stackoverflow.com/questions/48642572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4490400/" ]
As the error message suggests, your `__iter__` function does not return an iterator, which you can easily fix using the built-in `iter` function:

```
class Temp:
    def __init__(self):
        self.d = {}

    def __iter__(self):
        return iter(self.d.items())
```

This will make your class iterable. Alternatively, you may write a generator yourself, like so:

```
def __iter__(self):
    for key, item in self.d.items():
        yield key, item
```

If you want to be able to iterate over keys and items separately, i.e. in the form that the usual python3 dictionary can, you can provide additional functions, for example:

```
class Temp:
    def __init__(self, dic):
        self.d = dic

    def __iter__(self):
        return iter(self.d)

    def keys(self):
        return self.d.keys()

    def items(self):
        return self.d.items()

    def values(self):
        return self.d.values()
```

I'm guessing from the way you phrased it that you don't actually want the `next()` method to be implemented if not needed. If you did, you would have to somehow turn your whole class into an iterator and somehow keep track of where you are momentarily in this iterator, because dictionaries themselves are not iterators. See also [this](https://stackoverflow.com/questions/38700734/how-to-implement-next-for-a-dictionary-object-to-be-iterable) answer.
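As a quick sanity check of the `iter(...)`-based version (adapted here to take the dict as a constructor argument; the dict contents are invented):

```python
class Temp:
    def __init__(self, d):
        self.d = d

    def __iter__(self):
        # iter() turns the dict_items view into a real iterator,
        # which is what Python 3's iteration protocol requires
        return iter(self.d.items())

pairs = list(Temp({"a": 1, "b": 2}))
print(pairs)
```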
I don't know what works in Python 2. But on Python 3 iterators can be most easily created using something called a [generator](https://www.pythoncentral.io/python-generators-and-yield-keyword/). I am providing the name and the link so that you can research further.

```
class Temp:
    def __init__(self):
        self.d = {}

    def __iter__(self):
        for thing in self.d.items():
            yield thing
```
8,377
21,179,140
Okay, so in a terminal, after importing and making the necessary objects--I typed: ``` for links in soup.find_all('a'): print(links.get('href')) ``` which gave me all the links on a wikipedia page (roughly 250). No problems. However, in a program I am coding, I only receive about 60 links (and this is scraping the same wikipedia page) and the ones I DO get are mostly not worth anything. I double checked that I initialized both exactly the same--the only difference is the names of variables. For reference, here is the function where I setup the BS4 object, and grab the desired page: ``` def get_site(hyperLink): userSite = urllib3.PoolManager() siteData = userSite.request("GET", hyperLink) bsd = BeautifulSoup(siteData.data) return bsd ``` Later, I grab the elements and append them to a list I will then manipulate: ``` def find_urls(bsd, urls, currentNetloc): for links in bsd.find_all('a'): urls.append(links.get('href')) return urls ``` Other relevant info: * I am using Python 3.3 * I am using urllib3, BeautifulSoup 4, and urlparse (from urllib) * I am working in PyCharm (for the actual program) * Using Lubuntu, if it matters. After running a command line instance of python3 and importing "sys" I typed and received: ``` $ sys.executable '/usr/bin/python3' $ sys.path ['', '/usr/local/lib/python3.3/dist-packages/setuptools-1.1.5-py3.3.egg', '/usr/local/lib/python3.3/dist-packages/pip-1.4.1-py3.3.egg', '/usr/local/lib/python3.3/dist-packages/beautifulsoup4-4.3.2-py3.3.egg', '/usr/lib/python3.3', '/usr/lib/python3.3/plat-i386-linux-gnu', '/usr/lib/python3.3/lib-dynload', '/usr/local/lib/python3.3/dist-packages', '/usr/lib/python3/dist-packages'] ``` After running these commands in a Pycharm project, I received exactly the same results, with the exception that the directories containing my pycharm projects were included in the list.
2014/01/17
[ "https://Stackoverflow.com/questions/21179140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3204909/" ]
This is not my answer. I got it from [here](http://fryandata.wordpress.com/2014/06/17/24/), which has helped me before.

```
from bs4 import BeautifulSoup
import csv

# Create .csv file with headers
f = csv.writer(open("nyccMeetings.csv", "w"))
f.writerow(["Name", "Date", "Time", "Location", "Topic"])

# Use python html parser to avoid truncation
htmlContent = open("nyccMeetings.html")
soup = BeautifulSoup(htmlContent, "html.parser")

# Find each row
rows = soup.find_all('tr')
for tr in rows:
    cols = tr.find_all('td')  # Find each column
    try:
        names = cols[0].get_text().encode('utf-8')
        date = cols[1].get_text().encode('utf-8')
        time = cols[2].get_text().encode('utf-8')
        location = cols[3].get_text().encode('utf-8')
        topic = cols[4].get_text().encode('utf-8')
    except:
        continue
    # Write to .csv file
    f.writerow([names, date, time, location, topic])
```

I think it would be useful to note some of the troubles I ran into while writing this script:

Specify your parser. It is very important to specify the type of html parser that BeautifulSoup will use to parse through the html tree form. The html file that I read into Python was not formatted correctly so BeautifulSoup truncated the html and I was only able to access about a quarter of the records. By telling BeautifulSoup to explicitly use the built-in Python html parser, I was able to avoid this issue and retrieve all records.

Encode to UTF-8. get_text() had some issues with encoding the text inside the html tags. As such, I was unable to write data to the comma-delimited file. By explicitly telling the program to encode to UTF-8, we avoid this issue altogether.
I have encountered many problems in my web scraping projects; however, BeautifulSoup was never the culprit. I highly suspect you are having the same problem I had scraping Wikipedia. Wikipedia did not like my user-agent and was returning a page other than what I requested. Try adding a user-agent in your code e.g. `Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 Safari/537.36` You mentioned you were using urllib3 so [here](http://urllib3.readthedocs.org/en/1.2.1/pools.html) is where you can read on how to use a custom user-agent. Also if you want to diagnose your problem try this: In the terminal where you said everything was working fine, add an extra line `print len(html)` Then do the same in your program to see if you are in fact getting the links from the same page.
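The same idea with the standard library, for comparison -- no request is actually sent here, we only inspect the header that would go out (the URL is just an example):

```python
from urllib.request import Request

# Build a request carrying a browser-like User-Agent header.
req = Request(
    "https://en.wikipedia.org/wiki/Python_(programming_language)",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.2; WOW64) "
                           "AppleWebKit/537.36 (KHTML, like Gecko) "
                           "Chrome/33.0.1750.146 Safari/537.36"},
)
# urllib stores header names capitalized, e.g. "User-agent"
print(req.get_header("User-agent"))
```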
8,378
13,925,355
I wanted to run the command: `repo init -u https://android.googlesource.com/platform/manifest -b android-4.1.1_r6` and got the following output:

```
Traceback (most recent call last):
  File "/home/anu/bin/repo", line 91, in <module>
    import readline
ImportError: No module named readline
```

So to fix the above, I tried to install readline using the commands `pip install readline` and `easy_install readline`, but both commands output the following:

```
/usr/bin/ld: cannot find -lncurses
collect2: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
```

I have ubuntu 12.04 with python 2.7.3 and want to build android source code. I searched a lot to fix it but no success... Can anybody point to me what I am missing?
2012/12/18
[ "https://Stackoverflow.com/questions/13925355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1649409/" ]
```
sudo apt-get install libncurses5-dev
```

And then rerun your command.
If you are running a 64-bit OS, you might have to install the i386 versions of the libraries. A lot (all?) of the Android host commands are 32-bit only.
8,379
43,748,464
I have a large number of files with $Log expanded-keyword text at the end that needs to be deleted. I am looking to modify an existing python 2.7 script to do this but cannot get the regex working correctly. The text to strip from the end of a file looks like this: ``` /* one or more lines of .. .. possible text $Log: oldfile.c,v $ Revision 11.4 2000/01/20 19:01:41 userid a bunch more text .. .. of unknown number of lines */ ``` I want to strip all of the text shown above, *including* the comment anchors `/*` and `*/` and everything in between. I looked at these questions/answers and a few others: [Python re.sub non-greedy mode ..](https://stackoverflow.com/questions/4273987/python-re-sub-use-non-greedy-mode-with-end-of-string-it-comes-greedy) [Python non-greedy rebexes](https://stackoverflow.com/questions/766372/python-non-greedy-regexes) The closest I have been able to get is with: ``` content = re.sub(re.compile(r'\$Log:.*', re.DOTALL), '', content) ``` Which of course leaves behind the opening `/*`. The following deleted my whole sample test file because the file opens with a matching comment (I thought the non-greedy `?` modifier would prevent this): ``` content = re.sub(re.compile(r'^/\*.*?\$Log:.*', re.DOTALL), '', content) ``` I experimented with using re.MULTILINE without success. How can a regex be defined in Python to grab the whole $Log comment -- AND none of the previous comments in the file?
2017/05/02
[ "https://Stackoverflow.com/questions/43748464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947860/" ]
You can use:

```
result = re.sub(r"/\*\s+\*+\s+\$Log.*?\*/", "", subject, 0, re.DOTALL)
```

---

[![enter image description here](https://i.stack.imgur.com/YTKkn.jpg)](https://i.stack.imgur.com/YTKkn.jpg)

---

[Regex Demo](https://regex101.com/r/6AgeQe/4)

[Python Demo](https://ideone.com/QJmjUy)
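Run against a small invented file body in the `/* * $Log ... */` shape this pattern expects, the whole block disappears while the rest of the file is untouched:

```python
import re

# Invented sample in the comment style the pattern above assumes.
content = (
    "int main(void) { return 0; }\n"
    "/*\n"
    " * $Log: oldfile.c,v $\n"
    " * Revision 11.4 2000/01/20 19:01:41 userid\n"
    " * a bunch more text\n"
    " */\n"
)
result = re.sub(r"/\*\s+\*+\s+\$Log.*?\*/", "", content, flags=re.DOTALL)
print(result)
```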
It is a bit unclear what you are expecting as output. My understanding is that you are trying to extract the comment. I'm assuming that the comment appears on the 3rd line and you just have to extract the third line using regex. Regex expression used:

```
(\$Log:.*[\r\n]*.*[\r\n])(.*)
```

After using the regex for matching, the second capture group will be the comment, as demonstrated in the link and screenshot below. So `blah blah blah` can be fetched using `.group(2)`. Adding python code below:

```
matches = re.search(r"(\$Log:.*[\r\n]*.*[\r\n])(.*)", content)
print matches.group(2)  # Output: blah blah blah
```

[Regex101](https://regex101.com/r/TG8e1j/1): Sample code for python is available [here](https://regex101.com/r/TG8e1j/1/codegen?language=python).

[Python Demo](https://ideone.com/M0OMzj)

[![enter image description here](https://i.stack.imgur.com/zSHpV.png)](https://i.stack.imgur.com/zSHpV.png)
8,381
62,791,323
I have recently upgraded my python/opencv for a project to python 3.7 + opencv 4.3.0 and now I have an issue with OpenCV's imshow. I am running Ubuntu 18.04 and am using conda venvs. I tried to rerun this piece of code multiple times, and half the time it correctly displays the white image, and half the time it displays the image below ([1](https://i.stack.imgur.com/Hwxes.png)). The printed output ([2](https://i.stack.imgur.com/nwOXu.png)) is always the same. I tried changing it from waitKey(0) to waitKey(1000) but that doesn't make a difference. Still, about half the time a tiny black image is all I see. Does anybody know how to debug this? I tried looking at PyCharm's log files but they don't have any more details. I also tried running it straight from the command line but that gives the same issue. I also tried removing the environment and creating a fresh one, reinstalled opencv, and got the same issues. When I create a 3.6 environment I don't have the issue, but that's no longer an option. I need python3.7 for some other packages that don't support certain features in 3.6. I received a warning that libcanberra gtk was missing, and found in another post that it could cause issues. So I installed it using `sudo apt install libcanberra-gtk-module libcanberra-gtk3-module` and the warning went away. Sadly the issue did not... ``` import numpy as np import cv2 if __name__ == '__main__': img = np.ones((255, 255, 3), dtype=np.uint8)*255 print(img.shape) print(img.dtype) print(img.min()) print(img.max()) cv2.imshow("i", img) cv2.waitKey(0) ``` [screenshot of the code + result](https://i.stack.imgur.com/Hwxes.png) [console output](https://i.stack.imgur.com/nwOXu.png)
2020/07/08
[ "https://Stackoverflow.com/questions/62791323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13890473/" ]
``` django_find_project = True ``` Add this to your `pytest.ini`. **EDIT:** It looks like you have spelled `DJANGO_SETTINGS_MODULE` wrong in your `pytest.ini`. Please fix it.
Pytest has an order of precedence when choosing which settings.py is used in tests, and the settings in `pytest.ini` are only used as a last resort. Pytest first looks at the `--ds` option when running your tests; if that is not set, it then uses the environment variable `DJANGO_SETTINGS_MODULE`; and if this is also not set, it uses the settings given in the `pytest.ini` file. source: <https://pytest-django.readthedocs.io/en/latest/configuring_django.html>
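That lookup order can be sketched as a tiny Python function (a toy illustration of the documented precedence, not pytest-django's actual code; all names here are made up):

```python
import os

def resolve_settings(ds_option=None, ini_value=None):
    # Toy model of the documented precedence:
    # 1) the --ds command-line option,
    # 2) the DJANGO_SETTINGS_MODULE environment variable,
    # 3) the value from pytest.ini as a last resort.
    if ds_option:
        return ds_option
    env = os.environ.get("DJANGO_SETTINGS_MODULE")
    if env:
        return env
    return ini_value

# The --ds option wins even when the other sources are set
print(resolve_settings(ds_option="proj.settings_test", ini_value="proj.settings"))
```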
8,383
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
You could use `enumerate()` to get the indexes and `itertools.groupby()` to group falsy (`0`) and truthy values together, and then extract the start and end indexes with `operator.itemgetter(0, -1)`: ``` from operator import truth, itemgetter from itertools import groupby [itemgetter(0,-1)([i for i,v in g]) for _, g in groupby(enumerate(a), key = lambda x: truth(x[1]))] # [(0, 9), (10, 101), (102, 128), (129, 217), (218, 252), (253, 338), (339, 362), (363, 447), (448, 490), (491, 580), (581, 581)] ```
``` import numpy as np unique, counts = np.unique(a, return_counts=True) idx = tuple(zip(unique, counts)) ``` I think this will work for you.
8,386
72,404,096
Trying to run example telegram bots from the official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed: ``` pip install python-telegram-bot ``` and when I run the example, I get an error back that the version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPI version 13.12 but the example checks for a different version, v20. So the error is reasonable. How can I get the example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
Assuming you have only one non-NaN per row, you can `stack`: ```py df.stack().droplevel(1).to_frame(name='Fruits') ``` Output: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 banana ``` #### Handling rows with only NaNs: ```py df.stack().droplevel(1).to_frame(name='Fruits').reindex(df.index) ``` Output assuming banana is a NaN: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 NaN ```
I think this should give the desired output - `df['Fruit1'].fillna(df['Fruit2'])`
8,395
33,919,806
Is there a recommended way for using BeautifulSoup 4 in python when you have a table with no class or attribute values? I was considering just using Get\_Text() to dump the text out but if I wanted to pick individual values out or break the table into more discrete sections how would I go about it ? ```html <table cellpadding="0" cellspacing="0" id="programmeDescriptor" width="100%"> <tr> <td> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <th colspan="1"> Awards </th> </tr> <tr> </tr> <tr> <td> Ordinary Bachelor Degree </td> </tr> </table> <table border="0" cellpadding="0" cellspacing="0" width="100%"> <tr> <td> <table cellpadding="5" cellspacing="0" class="borders"> <tr> <th width="160"> Programme Code: </th> <td width="150"> CodeValue </td> </tr> </table> </td> <td width="5"> </td> <td> <table cellpadding="5" cellspacing="0" class="borders"> <tr> <th width="160"> Mode of Delivery: </th> <td width="150"> Full Time </td> </tr> </table> </td> <td width="5"> </td> <td> <table cellpadding="5" cellspacing="0" class="borders"> <tr> <th width="160"> No. 
of Semesters: </th> <td width="150"> 6 </td> </tr> </table> </td> </tr> <tr> <td> <table cellpadding="5" cellspacing="0" class="borders"> <tr> <th width="160"> NFQ Level: </th> <td width="150"> 7 </td> </tr> </table> </td> </tr> <tr> <td> <table cellpadding="5" cellspacing="0" class="borders"> <tr> <th width="160"> Embedded Award: </th> <td width="150"> No </td> </tr> </table> </td> </tr> </table> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <th width="160"> Department: </th> <td> Computing </td> </tr> </table> <div class="pageBreak"> </div> <h3> Programme Outcomes </h3> <p class="info"> On successful completion of this programme the learner will be able to : </p> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <th width="30"> PO1 </th> <td class="head" colspan="2"> Knowledge - Breadth </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text </tr> <tr> <th width="30"> PO2 </th> <td class="head" colspan="2"> Knowledge - Kind </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text </td> </tr> <tr> <th width="30"> PO3 </th> <td class="head" colspan="2"> Skill - Range </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text </td> </tr> <tr> <th width="30"> PO4 </th> <td class="head" colspan="2"> Skill - Selectivity </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text </td> </tr> <tr> <th width="30"> PO5 </th> <td class="head" colspan="2"> Competence - Context </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <tdSome block of text </td> </tr> <tr> <th width="30"> PO6 </th> <td class="head" colspan="2"> Competence - Role </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text 
</td> </tr> <tr> <th width="30"> PO7 </th> <td class="head" colspan="2"> Competence - Learning to Learn </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • Some block of text </td> </tr> <tr> <th width="30"> PO8 </th> <td class="head" colspan="2"> Competence - Insight </td> </tr> <tr> <td class="head" width="30"> </td> <td class="head" width="30"> (a) </td> <td> • The graduate will demonstrate the ability to specify, design and build an IT system or research &amp; report on a current IT topic </td> </tr> </table> <div class="pageBreak"> </div> <h3> Semester Schedules </h3> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 1 / Semester 1 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td <a href="index.cfm/page/module/moduleId/3897" target="_blank"> Web &amp; User Experience </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3881" target="_blank"> Software Development 1 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/1645" target="_blank"> Computer Architecture </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2328" target="_blank"> Discrete Mathematics 1 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3848" target="_blank"> Business &amp; Information Systems </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2054" target="_blank"> Learning to Learn at Third Level </a> </td> </tr> </table> </td> </tr> </table> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 1 / Semester 2 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> 
<tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3886" target="_blank"> Software Development 2 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3895" target="_blank"> Object Oriented Systems Analysis </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3875" target="_blank"> Database Fundamentals </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3874" target="_blank"> Operating Systems Fundamentals </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2330" target="_blank"> Statistics </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2527" target="_blank"> Social Media Communications </a> </td> </tr> </table> </td> </tr> </table> <div class="pageBreak"> </div> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 2 / Semester 1 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3877" target="_blank"> Web &amp; Mobile Design &amp; Development </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3876" target="_blank"> Database Design And Programming </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3869" target="_blank"> Software Development 3 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3873" target="_blank"> Software Quality Assurance and Testing </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3629" target="_blank"> Networking 1 </a> </td> </tr> <tr> <td> 
Code </td> <td> <a href="index.cfm/page/module/moduleId/2477" target="_blank"> Discrete Mathematics 2 </a> </td> </tr> </table> </td> </tr> </table> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 2 / Semester 2 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3862" target="_blank"> Project </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3911" target="_blank"> Object Oriented Analysis &amp; Design 1 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3877" target="_blank"> Web &amp; Mobile Design &amp; Development </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3630" target="_blank"> Networking 2 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3870" target="_blank"> Software Development 4 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2476" target="_blank"> Management Science </a> </td> </tr> </table> </td> </tr> </table> <div class="pageBreak"> </div> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 3 / Semester 1 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3911" target="_blank"> Object Oriented Analysis &amp; Design 1 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3899" target="_blank"> Operating Systems </a> </td> </tr> <tr> <td> Code </td> <td> <a 
href="index.cfm/page/module/moduleId/1721" target="_blank"> Cloud Services &amp; Distributed Computing </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2580" target="_blank"> Innovation &amp; Entrepreneurship </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3878" target="_blank"> Web Application Development </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/1689" target="_blank"> Algorithms and Data Structures 1 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2025" target="_blank"> Logic and Problem Solving </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3896" target="_blank"> Advanced Databases </a> </td> </tr> </table> </td> </tr> </table> <table cellpadding="0" cellspacing="0" width="100%"> <tr> <td colspan="2"> <h4> Stage 3 / Semester 2 </h4> </td> </tr> <tr> <td colspan="2"> <table cellpadding="5" cellspacing="0" class="borders" width="100%"> <tr> <td class="head" colspan="2"> Mandatory </td> </tr> <tr> <th width="50"> Module Code </th> <th> Module Title </th> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2465" target="_blank"> Project </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/1728" target="_blank"> Algorithms and Data Structures 2 </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/1675" target="_blank"> Network Management </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2025" target="_blank"> Logic and Problem Solving </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/3899" target="_blank"> Operating Systems </a> </td> </tr> <tr> <td> Code </td> <td> <a href="index.cfm/page/module/moduleId/2580" target="_blank"> Innovation &amp; Entrepreneurship </a> </td> </tr> <tr> <td> Code </td> <td> <a 
href="index.cfm/page/module/moduleId/1679" target="_blank"> Object Oriented Analysis &amp; Design 2 </a> </td> </tr> </table> </td> </tr> </table> </td> </tr> </table> ```
2015/11/25
[ "https://Stackoverflow.com/questions/33919806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2645252/" ]
Let's look at your jQuery: ``` $('div#info.Data') // Gets <div> with id="info" and class="Data" // ^ You have id and class reversed! .eq(1) // This gets the 2nd element in the array // ^ You only tried to get 1 element. What is the 2nd? .text() // Returns combined text of selected elements ``` Also there's another issue. Your text is not in its own element. In order to get the textnodes in your current element, you could call [`.contents()`](https://api.jquery.com/contents/). jQuery does not treat text nodes like regular elements, though, so I would be careful doing any additional operations on them. You can retrieve the text like such: ``` $("#Data").contents().eq(0).text() // -> 'Domain Name: example.com' $("#Data").contents().eq(2).text() // -> 'Registry: 12345' ``` **[Fiddle](http://jsfiddle.net/xf4s16x1/)**
First of all, you have you're `id` and `class` the wrong way round but this is a simple fix. An alternative to your solution is to grab all the content, split it out into an array and then clean the empty strings caused by the new lines and `<br />` tags. This can then be used in any matter you like. ```js $(document).ready(function() { var content = $('div#Data.info').eq(0).text(); var lines = content.split("\n").filter(Boolean) console.log(lines); }); ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="info" id="Data"> Domain Name: example.com <br> Registry: 12345 <br> </div> ```
8,402
38,259,749
How can I split a column into two separate ones? Would apply be the way to go about this? I want to keep the other columns in the DataFrame. For example, I have a column called "last_created" with a bunch of dates and times: "2016-07-01 09:50:09" I want to create two new columns "date" and "time" with the split values. This is what I tried but it's returning an error. For some reason my data was getting converted from str to float so I forced it to str. ``` def splitter(row): row = str(row) return row.split() df['date'],df['time'] = df['last_created'].apply(splitter) ``` Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-47-e5a9cf968714> in <module>() 7 return row.split() 8 ----> 9 df['date'],df['time'] = df['last_created'].apply(splitter) 10 df 11 #splitter(df.iloc[1,1]) ValueError: too many values to unpack (expected 2) ```
2016/07/08
[ "https://Stackoverflow.com/questions/38259749", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5405782/" ]
`now()` is a MySQL function. In PHP we use **[time()](http://php.net/manual/en/function.time.php)** to get the current timestamp. It is used as ``` $now = time(); // get current timestamp ``` Use [**`strtotime()`**](http://php.net/manual/en/function.strtotime.php) to convert a date into a timestamp, then use it for the comparison ``` $now = time(); foreach ($events->results() as $e): if (strtotime($e->event_date) >= $now) {// check for greater than or equal to ?> <h1>Show Event</h1> <?php } else { echo "<h1>DON'T Show Event</h1>"; } endforeach; ```
You are on the right track; change it like this: ``` <?php //SELECT QUERY... foreach($events->results() as $e): $now = date("Y-m-d H:i:s"); if(date("Y-m-d H:i:s", strtotime($e->event_date)) >= $now){ echo "<h1>Show Event</h1>"; }else{ echo "<h1>DON'T Show Event</h1>"; } endforeach; ?> ```
8,403
29,650,935
I am trying to convert a python game (made with pygame) into a exe file for windows, and I did using cx\_Freeze. No problems there. The thing is that when I launch myGame.exe, it opens the normal Pygame window and a console window(which I do not want). Is there a way to remove the console window? I read most of the documentation, but I saw nothing really (except base, but I don't get what that is). BTW, here is my setup file: ``` import cx_Freeze exe = [cx_Freeze.Executable("myGame.py")] cx_Freeze.setup( name = "GameName", version = "1.0", options = {"build_exe": {"packages": ["pygame", "random", "ConfigParser", "sys"], "include_files": [ "images", "settings.ini", "arialbd.ttf"]}}, executables = exe ) ``` Here's a screen shot of what happens when I launch the exe: ![ScreenShot](https://i.stack.imgur.com/LniBW.jpg)
2015/04/15
[ "https://Stackoverflow.com/questions/29650935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3580258/" ]
So what was wrong, was that the setup.py file was missing a parameter. What you need to add is `base = "Win32GUI"` to declare that you do not need a console window upon launch of the application. Here's the code: ``` import cx_Freeze exe = [cx_Freeze.Executable("myGame.py", base = "Win32GUI")] # <-- HERE cx_Freeze.setup( name = "GameName", version = "1.0", options = {"build_exe": {"packages": ["pygame", "random", "ConfigParser", "sys"], "include_files": ["images", "settings.ini", "arialbd.ttf"]}}, executables = exe ) ```
The parameter can be passed also by the shell if you are making a quick executable like this: ``` cxfreeze my_program.py --base-name=WIN32GUI ```
8,404
16,797,850
I am trying to get gimp to use a reasonable default path in a "save as" plugin, and to do that I need to be able to specify the default with the return value of a function (I believe). Currently, my code is something like: ``` def do_the_foo(image, __unused_drawable, directory): # ... do something register( "python_fu_something", "Blah de blah", "Blah de blah", "Blah de blah", "Blah de blah", "2013", "<Image>/File/Save as blah...", "*", [ (PF_DIRNAME, "directory", "Directory to save files to", "/") ], [], do_the_foo ) ``` Naturally this means that the dialog pops up with "/" as the default directory. That's not ideal. I'd like it to start with the path to the currently loaded image if known and then fall back to "/" if the currently loaded image has no path (not saved, whatever). But to get there I need to know how to replace "/" with a function. I've tried doing just that (create a function, reference it in the PF\_DIRNAME line) but no joy (and no error message, either).
2013/05/28
[ "https://Stackoverflow.com/questions/16797850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The fastest way would be to store the relevant data somewhere in a cache, and then print it when you have time for it. Printing to the console is definitely slow, and using printf is maybe also not a good idea, especially if there are several variables to convert. Since I don't know the dynamics of your code, I can only give some recommendations. Define a data structure for your data. Preallocate a big enough array, and then add a ring-buffer mechanism that handles the indexes of where the ISR can currently write. For the ISR this should be rather fast, because it just fills in the values in the next empty slot. In the main routine you can then print at leisure. However, you have to synchronize the access and also take care that the ISR doesn't produce data much faster than you can consume it. At least with a proper ring buffer it shouldn't crash, but you might lose information.
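The ring-buffer idea above can be sketched as follows (in Python for brevity — an actual ISR would be written in C, and the index updates would need to be interrupt-safe):

```python
class RingBuffer:
    """Fixed-size buffer: the "ISR" writes at `head`, the main loop reads at `tail`.
    When full, the oldest entry is overwritten (data is lost, but nothing crashes)."""
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0   # next write slot (ISR side)
        self.tail = 0   # next read slot (main-loop side)
        self.count = 0

    def put(self, item):
        # Called from the "ISR": cheap, no I/O, just store and advance
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.size
        if self.count == self.size:
            self.tail = (self.tail + 1) % self.size  # overwrite the oldest entry
        else:
            self.count += 1

    def get(self):
        # Called from the main loop: drain entries and print at leisure
        if self.count == 0:
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return item

rb = RingBuffer(4)
for v in range(6):       # "ISR" produces faster than we consume
    rb.put(v)
print([rb.get() for _ in range(4)])   # [2, 3, 4, 5] -- oldest two values were dropped
```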
You can switch buffering on by using C89's `setvbuf()`.
8,405
64,964,188
Is there any simple way to swap characters in a string in Python? In my case I want to swap `.` and `,` in `5.123.673,682`, so my string should become `5,123,673.682`. I have tried: ``` number = '5.123.673,682' number = number.replace('.', 'temp') number = number.replace(',', '.') number = number.replace('temp', ',') print(number) # 5,123,673.682 ``` Thanks
2020/11/23
[ "https://Stackoverflow.com/questions/64964188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2975806/" ]
One way using `dict.get`: ``` mapper = {".": ",", ",":"."} "".join([mapper.get(s, s) for s in '5.123.673,682']) ``` Or using `str.maketrans` and `str.translate`: ``` '5.123.673,682'.translate(str.maketrans(".,", ",.")) ``` Output: ``` '5,123,673.682' ```
This is a pythonic and clean approach to do that ```py def swap(c): if c == ',': return '.' elif c == '.': return ',' else: return c number = '5.123.673,682' new_number = ''.join(swap(o) for o in number) ```
8,406
72,317,862
Please, here is my code: ``` def train_apparentflow_net(): code_path = config.code_dir fold = int(sys.argv[1]) print('fold = {}'.format(fold)) if fold == 0: mode_train = 'all' mode_val = 'all' elif fold in range(1,6): mode_train = 'train' mode_val = 'val' else: print('Incorrect fold') ``` I receive this error: `IndexError: list index out of range`, yet here are my files in my folder: ``` model_apparentflow_net_fold0_epoch050.h5 model_apparentflow_net_fold1_epoch050.h5 model_apparentflow_net_fold2_epoch050.h5 model_apparentflow_net_fold3_epoch050.h5 model_apparentflow_net_fold4_epoch050.h5 model_apparentflow_net_fold5_epoch050.h5 module_apparentflow_net.py ``` so I don't understand why the Python terminal tells me that I reference an index that does not exist when it does exist. Could someone please help me? Thank you so much for all answers.
2022/05/20
[ "https://Stackoverflow.com/questions/72317862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19131126/" ]
Please look at the following answer: [sys.argv[1] description](https://stackoverflow.com/questions/4117530/sys-argv1-meaning-in-script). It is failing because there is no command-line argument provided. You could try ``` fold = int(sys.argv[0]) ```
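As a side note, here is a hedged sketch of guarding against the missing argument (the helper name and default value are made up for illustration, not from the original script):

```python
import sys

def read_fold(argv, default=0):
    # Return int(argv[1]) when an argument was passed, otherwise a default,
    # so the script no longer raises IndexError when run without arguments.
    if len(argv) > 1:
        try:
            return int(argv[1])
        except ValueError:
            print('Incorrect fold')
            return default
    return default

fold = read_fold(sys.argv)
print('fold = {}'.format(fold))
```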
If all you need is to select the model, why not add the names to a list and select based on the index, like this: ``` model_list = ["model_apparentflow_net_fold0_epoch050.h5", "model_apparentflow_net_fold1_epoch050.h5", "model_apparentflow_net_fold2_epoch050.h5", "model_apparentflow_net_fold3_epoch050.h5", "model_apparentflow_net_fold4_epoch050.h5", "model_apparentflow_net_fold5_epoch050.h5", ] def train_apparentflow_net(model_list): code_path = config.code_dir fold = model_list[1] # specify model here print('fold = {}'.format(fold)) if fold == 0: mode_train = 'all' mode_val = 'all' elif fold in range(1,6): mode_train = 'train' mode_val = 'val' else: print('Incorrect fold') ```
8,408
3,405,073
Working with deeply nested python dicts, I would like to be able to assign values in such a data structure like this: ``` mydict[key][subkey][subkey2]="value" ``` without having to check that mydict[key] etc. are actually set to be a dict, e.g. using ``` if not key in mydict: mydict[key]={} ``` The creation of subdictionaries should happen on the fly. What is the most elegant way to allow something equivalent - maybe using decorators on the standard `<type 'dict'>`?
2010/08/04
[ "https://Stackoverflow.com/questions/3405073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/369113/" ]
``` class D(dict): def __missing__(self, key): self[key] = D() return self[key] d = D() d['a']['b']['c'] = 3 ```
You could use a tuple as the key for the dict and then you don't have to worry about subdictionaries at all: ``` mydict[(key,subkey,subkey2)] = "value" ``` Alternatively, if you really need to have subdictionaries for some reason you could use [`collections.defaultdict`](http://docs.python.org/library/collections.html#collections.defaultdict). For two levels this is straightforward: ``` >>> from collections import defaultdict >>> d = defaultdict(dict) >>> d['key']['subkey'] = 'value' >>> d['key']['subkey'] 'value' ``` For three it's slightly more complex: ``` >>> d = defaultdict(lambda: defaultdict(dict)) >>> d['key']['subkey']['subkey2'] = 'value' >>> d['key']['subkey']['subkey2'] 'value' ``` Four and more levels are left as an exercise for the reader. :-)
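For arbitrarily many levels, one common trick (a sketch, not the only approach) is a `defaultdict` whose factory is itself:

```python
from collections import defaultdict

def tree():
    # Every missing key produces another tree, so any nesting depth
    # is created on the fly.
    return defaultdict(tree)

d = tree()
d['key']['subkey']['subkey2']['subkey3'] = 'value'
print(d['key']['subkey']['subkey2']['subkey3'])   # value
```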
8,409
61,590,884
my list looks like: ``` lst ['78251'], ['18261'], ['435921'], ['74252'], ...] ``` I want to place that numbers into a url code <http://api.brain-map.org/api/v2/data/query.xml?criteria=model::SectionDataSet,rma::criteria,[failed>$eq%27false%27],products[abbreviation$eq%27Mouse%27],genes[entrez\_id$eq%27**inhere**%27]' I tried ``` for i in lst: b = 'http://api.brain-map.org/api/v2/data/query.xml?criteria=model::SectionDataSet,rma::criteria,[failed$eq%27false%27],products[abbreviation$eq%27Mouse%27],genes[entrez_id$eq%27%d%27]' %i ``` I get no error message but it says ``` b Traceback (most recent call last): File "<ipython-input-79-89e6c98d9288>", line 1, in <module> b NameError: name 'b' is not defined ``` So I think the difficult part is that there is no space in between the string... How can I handle this?
2020/05/04
[ "https://Stackoverflow.com/questions/61590884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9353795/" ]
The URL is complex, but here is an example of building the URL by string concatenation: ``` a = 'http://api.brain-map.org/api/v2/data/query.xml?criteria=' + i + '&gene=model:' + i ```
Try this much more pythonic approach, using .format(): ``` for i in lst: b = "http://api.brain-map.org/api/v2/data/query.xml?criteria=model::SectionDataSet,rma::criteria,[failed$eq%27false%27],products[abbreviation$eq%27Mouse%27],genes[entrez_id$eq%27{}%27]".format(i[0]) ```
8,412
41,254,635
Basically I'm just starting out with python networking and python in general and I can't get my TCP client to send data. It says: ``` Traceback (most recent call last): File "script.py", line 14, in <module> client.send(data) #this is where I get the error TypeError: a bytes-like object is required, not 'str' ``` The code is as follows: ``` import socket target_host = "www.google.com" target_port = 80 #create socket object client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #connect the client client.connect((target_host,target_port)) #send some data data = "GET / HTTP/1.1\r\nHost: google.com\r\n\r\n" client.send(data) #this is where I get the error #receive some data response = client.recv(4096) print(response) ``` Thanks for your help in advance!
2016/12/21
[ "https://Stackoverflow.com/questions/41254635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7323709/" ]
You are probably using Python 3.X. [`socket.send()`](https://docs.python.org/3.5/library/socket.html#socket.socket.send) expected a bytes type argument but `data` is an unicode string. You must encode the string using [`str.encode()`](https://docs.python.org/3/library/stdtypes.html#str.encode) method. Similarly you would use [`bytes.decode()`](https://docs.python.org/3/library/stdtypes.html#bytes.decode) to receive the data: ``` import socket target_host = "www.google.com" target_port = 80 client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) client.connect((target_host,target_port)) data = "GET / HTTP/1.1\r\nHost: google.com\r\n\r\n" client.send(data.encode('utf-8')) response = client.recv(4096).decode('utf-8') print(response) ```
If you are using python2.x your code is correct. As in the documentation for python2 [`socket.send()`](https://docs.python.org/2/library/socket.html#socket.socket.send) takes a string parameter. But if you are using python3.x you can see that [`socket.send()`](https://docs.python.org/3.6/library/socket.html#socket.socket.send) takes a bytes parameter. Thus you have to convert your string `data` into bytes using [`str.encode()`](https://docs.python.org/3/library/stdtypes.html#str.encode). So your code might look like this instead. ``` import socket target_host = "www.google.com" target_port = 80 #create socket object client = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #connect the client client.connect((target_host,target_port)) #send some data data = "GET / HTTP/1.1\r\nHost: google.com\r\n\r\n" client.send(data.encode('utf-8')) #receive some data response = client.recv(4096) print(response) ```
8,415
3,246,021
I am posting to a Hudson server using curl from the command line with the following-- ``` curl -X POST -d '<run><log encoding="hexBinary">4142430A</log><result>0</result><duration>2000</duration></run>' \ http://user:pass@myhost/hudson/job/_jobName_/postBuildResult ``` as shown in the Hudson documentation. Can I emulate the same thing using Python? I don't want to use pycurl or send this line through `os.system()`. Is there any way to do it using raw Python?
2010/07/14
[ "https://Stackoverflow.com/questions/3246021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/361179/" ]
``` import urllib2 req = urllib2.Request(url, data) response = urllib2.urlopen(req) result = response.read() ``` where data is the encoded data you want to POST. You can encode a dict using urllib like this: ``` import urllib values = { 'foo': 'bar' } data = urllib.urlencode(values) ```
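The original curl command also sends basic auth via the `user:pass@myhost` part of the URL. In Python 3, `urllib2` became `urllib.request`, and the same POST could be built like this (the host, credentials, and job name below are just the placeholders from the question):

```python
import base64
import urllib.request

url = "http://myhost/hudson/job/jobName/postBuildResult"  # placeholder host/job
payload = (b'<run><log encoding="hexBinary">4142430A</log>'
           b'<result>0</result><duration>2000</duration></run>')

req = urllib.request.Request(url, data=payload)
req.add_header("Content-Type", "application/xml")
# curl's user:pass@host shorthand becomes an explicit Basic auth header
credentials = base64.b64encode(b"user:pass").decode("ascii")
req.add_header("Authorization", "Basic " + credentials)

# urllib.request.urlopen(req).read() would actually send it;
# passing data= already makes the request a POST
print(req.get_method())  # POST
```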
The modern day solution to this is much simpler with the [requests](http://docs.python-requests.org/) module (tagline: *HTTP for humans!* :) ``` import requests r = requests.post('http://httpbin.org/post', data = {'key':'value'}, auth=('user', 'passwd')) r.text # response as a string r.content # response as a byte string # gzip and deflate transfer-encodings automatically decoded r.json() # return python object from json! this is what you probably want! ```
8,418
7,149,137
Within a python program I need to run a command in background, without displaying its output. Therefore I'm doing `os.system("nohup " + command + " &")` for now. Edit : `command` shouldn't be killed/closed when python program exits. However that will only work on Linux, and the content of the file will end up in `nohup.out` but I don't need it there. Therefore I'm looking for a platform independent solution. `os.spawnlp(os.P_DETACH, command)` doesn't work, even with the `*p` version so as to be able not to enter full path to application. NB. I know that `command` is generally platform dependent, but that's not the point of my question.
2011/08/22
[ "https://Stackoverflow.com/questions/7149137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692562/" ]
You are looking for a daemon process. Look at [How do you create a daemon in Python?](https://stackoverflow.com/questions/473620/how-do-you-create-a-daemon-in-python) or <http://blog.ianbicking.org/daemon-best-practices.html>
Look into the [subprocess](http://docs.python.org/library/subprocess.html) module. ``` from subprocess import Popen, PIPE process = Popen(['command', 'arg'], stdout=PIPE) ```
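To meet the question's "without displaying its output" requirement, the child's streams can be pointed at the null device rather than a pipe (a pipe that is never read can fill up and block the child). A cross-platform sketch, using the Python interpreter itself as a stand-in for `command`:

```python
import os
import subprocess
import sys

# os.devnull is '/dev/null' on POSIX and 'nul' on Windows, so this is portable
# (on Python 3.3+ you could pass subprocess.DEVNULL instead)
with open(os.devnull, 'wb') as devnull:
    process = subprocess.Popen(
        [sys.executable, '-c', "print('this output is discarded')"],
        stdout=devnull,
        stderr=devnull,
    )
    returncode = process.wait()  # drop the wait() to leave it running in background

print(returncode)  # 0
```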
8,419
22,084,046
Python 2.7.5. I added homebrew/science to my brew taps and ran ``` brew install opencv ``` In my bash profile I added ``` export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH ``` I opened the headgazer folder and ran ``` python tracker.py Traceback (most recent call last): File "tracker.py", line 21, in <module> from roi_detector import ViolaJonesRoi File "/Users/username/Downloads/headtracker_version_0.0/roi_detector.py", line 21, in <module> import opencv as cv ImportError: No module named opencv ~/Downloads/headtracker_version_0.0:. ``` Ok, it looks like the module is called cv2, so I swap out occurrences of `import opencv as cv` with ``` import cv2 as cv ``` Now in viola_jones_opencv.py I have ``` import cv2 as cv from cv import * from cv.highgui import * ``` and I get an error on importing highgui: ``` ImportError: No module named highgui ```
2014/02/28
[ "https://Stackoverflow.com/questions/22084046", "https://Stackoverflow.com", "https://Stackoverflow.com/users/172232/" ]
Very straightforward with `awk`: ``` $ cat file 1 2 3 4 5 6 ``` ``` $ awk 'NR==3{print "hello\n"}1' file 1 2 hello 3 4 5 6 ``` Where `NR` is the line number. You can set it to any number you wish to insert text to.
Does it have to be sed? ``` head -2 infile ; echo Hello ; echo ; tail -n +3 infile ```
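Since the rest of this thread is Python, the same insertion is also a few lines of plain Python (shown here on an in-memory list of lines; read and write a real file as needed):

```python
lines = ["1", "2", "3", "4", "5", "6"]

insert_at = 2  # zero-based index: before the third line, like awk's NR==3
lines[insert_at:insert_at] = ["hello", ""]  # "hello" plus the blank line awk printed

print("\n".join(lines))
```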
8,420
61,453,511
I have a Flask app which serves my Flutter app over HTTP requests. Everything is okay when the mobile phone is connected to the PC, but once we deploy the app to the phone and detach it from the PC, how would the Flask app serve the Flutter app? Is there any way to start the Python script when the Flutter app is launched on the mobile phone?
2020/04/27
[ "https://Stackoverflow.com/questions/61453511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11170382/" ]
For testing purposes you can use ngrok to expose your Flask server to the internet; for more info see the [ngrok docs](https://ngrok.com/docs).
Your phone, when connected to the PC, is accessing a Flask app via 'localhost'. You have two options to make it work (i.e. access the Flask-based REST APIs from the mobile when not connected to the PC). 1. Expose the Flask app to the network, and make sure both the PC and the mobile phone are on the same wifi. [This](https://stackoverflow.com/a/7027113/1367159) link might help for that. 2. Deploy the Flask-based APIs somewhere. There are a few free web servers available out there (e.g. MS Azure). [This](https://www.freecodecamp.org/news/how-to-build-a-web-application-using-flask-and-deploy-it-to-the-cloud-3551c985e492/) or [this](https://www.freecodecamp.org/news/how-to-host-lightweight-apps-for-free-a29773e5f39e/) link might help.
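The key change behind option 1, whatever the framework, is binding to `0.0.0.0` (all interfaces) instead of the loopback address, so other devices on the wifi can reach the server by the PC's LAN IP. In Flask that is `app.run(host='0.0.0.0')`; here is a framework-free sketch with just the stdlib:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# '0.0.0.0' binds every interface; port 0 lets the OS pick a free port
server = HTTPServer(("0.0.0.0", 0), PingHandler)
host, port = server.server_address
print(host, port > 0)  # in real use you would now call server.serve_forever()
server.server_close()
```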
8,425