| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
51,113,531
I am setting up `docker-for-windows` on my private PC. When I set it up a while ago on my office laptop I had the same issue, but it just stopped happening. So I am stuck with this: I have a working Docker project (on my other computer) with a `docker-compose.yml` like this:

```
version: '2'
services:
  web:
    depends_on:
      - db
    build: .
    env_file: ./docker-compose.env
    command: bash ./run_web_local.sh
    volumes:
      - .:/srv/project
    ports:
      - 8001:8001
    links:
      - db
      - rabbit
    restart: always
```

**Dockerfile:**

```
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app

### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt

# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project

# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]

# Copy application source code to SRCDIR
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules

# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
```

**docker-compose.env:**

```
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
```

**run\_web\_local.sh:**

```
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
```

When I call `docker-compose up web` I get the following error:

> web\_1 | bash: ./run\_web\_local.sh: No such file or directory

* I checked the line endings; they are UNIX
* the file exists on the file system as well as inside the container
* I can call `bash run_web_local.sh` from my Windows PowerShell and inside the container
* I changed the UNIX permissions inside the container
* I left out the `bash` in the `command` in the `docker-compose` file, and tried with a backslash, no dot, etc.
* I reinstalled Docker
* I tried switching to version 3
* Docker claims to have a connection to my shared drive C

And: the exact same setup **works** on my other laptop. Any ideas? All the two million GitHub posts didn't solve the problem for me. Thanks!

**Update:** Removing `volumes:` from the docker-compose file makes it work, [like stated here](https://github.com/docker/compose/issues/2548), but then I don't have an instant mapping. That's kind of important for me...
2018/06/30
[ "https://Stackoverflow.com/questions/51113531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331671/" ]
If possible, can you please provide all the files related to this so that I can try to reproduce the issue? It seems the command is not executing in the directory where run\_web\_local.sh exists. You can check the current working directory by replacing the command in docker-compose.yml with `command: sh -c "pwd && bash ./run_web_local.sh"` (the `sh -c` wrapper is needed because Compose does not interpret `&&` by itself).
Do not use `-d` at the end. Instead of **docker-compose -f start\_tools.yaml up -d**, use **docker-compose -f start\_tools.yaml up**.
51,113,531
I am setting up `docker-for-windows` on my private PC. When I set it up a while ago on my office laptop I had the same issue, but it just stopped happening. So I am stuck with this: I have a working Docker project (on my other computer) with a `docker-compose.yml` like this:

```
version: '2'
services:
  web:
    depends_on:
      - db
    build: .
    env_file: ./docker-compose.env
    command: bash ./run_web_local.sh
    volumes:
      - .:/srv/project
    ports:
      - 8001:8001
    links:
      - db
      - rabbit
    restart: always
```

**Dockerfile:**

```
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app

### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt

# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project

# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]

# Copy application source code to SRCDIR
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules

# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
```

**docker-compose.env:**

```
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
```

**run\_web\_local.sh:**

```
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
```

When I call `docker-compose up web` I get the following error:

> web\_1 | bash: ./run\_web\_local.sh: No such file or directory

* I checked the line endings; they are UNIX
* the file exists on the file system as well as inside the container
* I can call `bash run_web_local.sh` from my Windows PowerShell and inside the container
* I changed the UNIX permissions inside the container
* I left out the `bash` in the `command` in the `docker-compose` file, and tried with a backslash, no dot, etc.
* I reinstalled Docker
* I tried switching to version 3
* Docker claims to have a connection to my shared drive C

And: the exact same setup **works** on my other laptop. Any ideas? All the two million GitHub posts didn't solve the problem for me. Thanks!

**Update:** Removing `volumes:` from the docker-compose file makes it work, [like stated here](https://github.com/docker/compose/issues/2548), but then I don't have an instant mapping. That's kind of important for me...
2018/06/30
[ "https://Stackoverflow.com/questions/51113531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331671/" ]
It may be because the bash script is not in the root path or in the working directory. Check where it is in the container and verify that the path is correct.
Do not use `-d` at the end. Instead of **docker-compose -f start\_tools.yaml up -d**, use **docker-compose -f start\_tools.yaml up**.
51,113,531
I am setting up `docker-for-windows` on my private PC. When I set it up a while ago on my office laptop I had the same issue, but it just stopped happening. So I am stuck with this: I have a working Docker project (on my other computer) with a `docker-compose.yml` like this:

```
version: '2'
services:
  web:
    depends_on:
      - db
    build: .
    env_file: ./docker-compose.env
    command: bash ./run_web_local.sh
    volumes:
      - .:/srv/project
    ports:
      - 8001:8001
    links:
      - db
      - rabbit
    restart: always
```

**Dockerfile:**

```
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app

### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt

# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project

# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]

# Copy application source code to SRCDIR
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules

# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
```

**docker-compose.env:**

```
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
```

**run\_web\_local.sh:**

```
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
```

When I call `docker-compose up web` I get the following error:

> web\_1 | bash: ./run\_web\_local.sh: No such file or directory

* I checked the line endings; they are UNIX
* the file exists on the file system as well as inside the container
* I can call `bash run_web_local.sh` from my Windows PowerShell and inside the container
* I changed the UNIX permissions inside the container
* I left out the `bash` in the `command` in the `docker-compose` file, and tried with a backslash, no dot, etc.
* I reinstalled Docker
* I tried switching to version 3
* Docker claims to have a connection to my shared drive C

And: the exact same setup **works** on my other laptop. Any ideas? All the two million GitHub posts didn't solve the problem for me. Thanks!

**Update:** Removing `volumes:` from the docker-compose file makes it work, [like stated here](https://github.com/docker/compose/issues/2548), but then I don't have an instant mapping. That's kind of important for me...
2018/06/30
[ "https://Stackoverflow.com/questions/51113531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331671/" ]
I had the same issue recently, and the problem went away after using an advanced editor to change the line endings of the .sh entrypoint scripts to Unix style. In my case I am not sure why it happened, because git normally handles line endings well depending on whether the host is Linux or Windows, yet I still ended up in this situation. If you have files mounted in both the container and the Windows host, then depending on how you edit them, they may be changed to Linux line endings from inside the container, and that affects the file on the Windows host too. Afterwards git reported the .sh file as changed, but with no additions and no deletions; a graphical compare tool showed that only the newlines had changed. As another troubleshooting approach, you can start your container with the entrypoint overridden to `sh`, for example, and from inside the started container check how Linux sees the entrypoint script; you can even run it there and you will see the exact same error.
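As a quick check, here is a small hypothetical helper (not part of the original answer) that reports whether a script still contains Windows CRLF line endings; run it on the Windows host and inside the container to see whether the mount or an editor changed the file:

```python
# crlf_check.py -- report the line-ending style of a shell script
import sys

def has_crlf(path):
    """Return True if the file contains any CRLF sequences."""
    with open(path, "rb") as f:  # binary mode, so Python does not normalize newlines
        return b"\r\n" in f.read()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "run_web_local.sh"
    style = "CRLF (Windows)" if has_crlf(path) else "LF (Unix)"
    print(f"{path}: {style}")
```

If the host and the container disagree, the volume mount (or an editor rewriting the file) is the culprit.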
It may be because the bash script is not in the root path or in the working directory. Check where it is in the container and verify that the path is correct.
51,113,531
I am setting up `docker-for-windows` on my private PC. When I set it up a while ago on my office laptop I had the same issue, but it just stopped happening. So I am stuck with this: I have a working Docker project (on my other computer) with a `docker-compose.yml` like this:

```
version: '2'
services:
  web:
    depends_on:
      - db
    build: .
    env_file: ./docker-compose.env
    command: bash ./run_web_local.sh
    volumes:
      - .:/srv/project
    ports:
      - 8001:8001
    links:
      - db
      - rabbit
    restart: always
```

**Dockerfile:**

```
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app

### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt

# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project

# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]

# Copy application source code to SRCDIR
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules

# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
```

**docker-compose.env:**

```
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
```

**run\_web\_local.sh:**

```
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
```

When I call `docker-compose up web` I get the following error:

> web\_1 | bash: ./run\_web\_local.sh: No such file or directory

* I checked the line endings; they are UNIX
* the file exists on the file system as well as inside the container
* I can call `bash run_web_local.sh` from my Windows PowerShell and inside the container
* I changed the UNIX permissions inside the container
* I left out the `bash` in the `command` in the `docker-compose` file, and tried with a backslash, no dot, etc.
* I reinstalled Docker
* I tried switching to version 3
* Docker claims to have a connection to my shared drive C

And: the exact same setup **works** on my other laptop. Any ideas? All the two million GitHub posts didn't solve the problem for me. Thanks!

**Update:** Removing `volumes:` from the docker-compose file makes it work, [like stated here](https://github.com/docker/compose/issues/2548), but then I don't have an instant mapping. That's kind of important for me...
2018/06/30
[ "https://Stackoverflow.com/questions/51113531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331671/" ]
I had the same issue recently, and the problem went away after using an advanced editor to change the line endings of the .sh entrypoint scripts to Unix style. In my case I am not sure why it happened, because git normally handles line endings well depending on whether the host is Linux or Windows, yet I still ended up in this situation. If you have files mounted in both the container and the Windows host, then depending on how you edit them, they may be changed to Linux line endings from inside the container, and that affects the file on the Windows host too. Afterwards git reported the .sh file as changed, but with no additions and no deletions; a graphical compare tool showed that only the newlines had changed. As another troubleshooting approach, you can start your container with the entrypoint overridden to `sh`, for example, and from inside the started container check how Linux sees the entrypoint script; you can even run it there and you will see the exact same error.
If possible, can you please provide all the files related to this so that I can try to reproduce the issue? It seems the command is not executing in the directory where run\_web\_local.sh exists. You can check the current working directory by replacing the command in docker-compose.yml with `command: sh -c "pwd && bash ./run_web_local.sh"` (the `sh -c` wrapper is needed because Compose does not interpret `&&` by itself).
63,694,387
Below is a Selenium Python script where I am trying to click Sign In by sending the login details via Selenium. However, when I use the `find_element_by_id` method to locate the username and password input areas, the script throws the error `Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="usernameOrEmail"]"}`. But when I inspect the webpage, the input text element shows the same id that I have used in my script. P.S.: When Selenium opens the browser, please maximize the window, else the code will not work.

```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome(executable_path='C://Arighna/chromedriver.exe')
driver.get("https://www.fool.com/")
print(driver.title)

mybutton = driver.find_element_by_id('login-menu-item')
mybutton.click()

delay = 5
WebDriverWait(driver, delay)
email_area = driver.find_element_by_id('usernameOrEmail')
email_area.send_keys(Keys.ENTER)
email_area.send_keys('ar')

WebDriverWait(driver, delay)
pwd_area = driver.find_element_by_id('password')
pwd_area.send_keys(Keys.ENTER)
pwd_area.send_keys('1234')

WebDriverWait(driver, delay)
login_btn = driver.find_element_by_id('btn-login')
login_btn.click()
```

Any help is really appreciated.
2020/09/01
[ "https://Stackoverflow.com/questions/63694387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8599554/" ]
Don't initialize the variables. Use the `nillable` attribute and set its value to `true`:

```
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = { "currencyCode", "discountValue", "setPrice" })
@XmlRootElement(name = "countryData")
public class CountryData {
    @XmlElement(nillable=true)
    protected String currencyCode;
    @XmlElement(nillable=true)
    protected String discountValue;
    @XmlElement(nillable=true)
    protected String setPrice;
    // getters and setters
}
```

output

```
<currencyCode>GBP</currencyCode>
<discountValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
<setPrice xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:nil="true"/>
```
Although the strings are empty, they still contain non-null data, so the end tag is generated. Remove the default values of the strings or set them to `null` (the default instance field value):

```
protected String discountValue;
protected String setPrice;
```

The tags then become self-closing:

```
<discountValue/>
<setPrice/>
```
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
The following suggestions hold if you are not going to do complex template programming, as the C++ IDEs other than Visual Studio cannot efficiently index modern C++ code (e.g. the Boost library). I would suggest using NetBeans (it has far better C++ support than Eclipse/CDT) with the following two build environments. Both are important if you want to cross-compile and test against POSIX and Win32. This is not a silver bullet; you should still test on different variants of UNIX once in a while. I would suggest installing MinGW and MSYS for Windows development; it's nice when you can use awk, grep, sed, etc. on your code :D Generative programming is easier with shell tools as well -- writing generative build scripts is painful to do effectively on the command line in Windows (PowerShell might have changed this). I would ALSO suggest installing Cygwin and using it on the side. MinGW is for programming against the low-level Win32 API; Cygwin is for programming against the POSIX standard. Cygwin also compiles a lot of software that you would otherwise have to port. Also, once you get your project up and running you can use CMake as the build environment; it's the best thing since sliced bread :P You can get it to spit out build definitions for anything and everything -- including Visual Studio.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
I would see if MSysGit can provide what you want first. Also, since man pages aren't anything hugely complicated, it might be possible to just copy them over. I've had problems with Cygwin, although to be honest I'm not happy with MSys, MSysGit, or Cygwin. I wish someone would build one that was more... Linux-like. I would if I had to use Windows every day; fortunately I only have to use Windows sparingly.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
I would recommend [Bloodshed DevC++](http://www.bloodshed.net/devcpp.html) as a good, basic, non-Microsoft-specific Windows solution for developing ANSI C/C++ code. Personally I just use Visual Studio 2008 and ignore all the Microsoft-specific extensions. For Python there is the wonderful Komodo Edit, which is free; personally the IDE version is what I prefer, but I use an old 3.5.3 version that works for me. They also have a very popular Python package called ActivePython that has a bunch of Windows-specific extension modules. Personally, Cygwin just feels and acts like a hack to me and is painful to set up and maintain. I think running Linux/Unix in a virtual machine is much less hassle if you are looking for a \*nix environment. Getting a genuinely authentic \*nix feel is going to be very hard under Windows.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
I recommend VirtualBox+Ubuntu. Cygwin just doesn't cut it for certain tasks and is in beta for Win7.
IMO I'd say VirtualBox + Gentoo Linux + KDevelop4; Gentoo will give you the control you need over your environment. I'm doing exactly the opposite of you: I have gcc/Qt4 installed on Wine to compile for Windows while using Linux primarily.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
If you want to develop POSIX applications (mostly command line) with all the familiar Linux tools, then Cygwin is your best bet. It probably includes everything you are used to. But if you are going to do Windows development (anything with UI, drivers, services), then Visual Studio is really gold. And in general Visual Studio is just great for anything, if you want to spend the time and money. Good IDE, great debugger. I highly recommend it. And when in Rome, do as the Romans do :-)
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
IMO I'd say VirtualBox + Gentoo Linux + KDevelop4; Gentoo will give you the control you need over your environment. I'm doing exactly the opposite of you: I have gcc/Qt4 installed on Wine to compile for Windows while using Linux primarily.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
I recommend VirtualBox+Ubuntu. Cygwin just doesn't cut it for certain tasks and is in beta for Win7.
I would see if MSysGit can provide what you want first. Also, since man pages aren't anything hugely complicated, it might be possible to just copy them over. I've had problems with Cygwin, although to be honest I'm not happy with MSys, MSysGit, or Cygwin. I wish someone would build one that was more... Linux-like. I would if I had to use Windows every day; fortunately I only have to use Windows sparingly.
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
I recommend VirtualBox+Ubuntu. Cygwin just doesn't cut it for certain tasks and is in beta for Win7.
If you want to develop POSIX applications (mostly command line) with all the familiar Linux tools, then Cygwin is your best bet. It probably includes everything you are used to. But if you are going to do Windows development (anything with UI, drivers, services), then Visual Studio is really gold. And in general Visual Studio is just great for anything, if you want to spend the time and money. Good IDE, great debugger. I highly recommend it. And when in Rome, do as the Romans do :-)
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
I recommend VirtualBox+Ubuntu. Cygwin just doesn't cut it for certain tasks and is in beta for Win7.
Here is what I do for Python development on Windows: * EasyEclipse for Python (includes eclipse, subclipse, pydev) * GNU Win32 [Native Windows ports for GNU tools](http://gnuwin32.sourceforge.net/) * Vim and Emacs (for non-IDE editing work)
1,736,655
Before resorting to Stack Overflow, I spent a lot of time looking for solutions. I have been a Linux user/developer for a few years and am now shifting to Windows 7. I am looking to set up a development environment (mainly C/C++/bash/Python) on my Windows machine. Solutions I tried:

* VirtualBox (latest), with grml-medium (a very light Debian-based distro): I somehow managed to install it in VBox, but there are still lots of issues with Guest Additions, file sharing, and screen resolutions. I am tired of it for now.
* MinGW: installed it and added it to %PATH%, along with GVim. Now I can use PowerShell and run gvim, vim, and MinGW as bash from the shell. But there are no manpages, and it is a great convenience to have them available locally and offline. I think it does give me a gcc development environment. Do I need MSYS now? I can install it if it provides manpages and ssh.
* Cygwin: avoided it until now, but I think it will give me manpages, the gcc utilities, and the latest Python.
* Something called Interix: any takers for that? Is it recommended?

What are the best practices? What are you all using? I don't have a Linux box to ssh to; if VBox works fine at some point, I can then ssh into my VBox, but I have lost a lot of time setting it up, so I am abandoning it for a while. I think only the VirtualBox solution will let me try things like iptables or other Linux system frameworks. I checked [Best setup for Linux development from Windows?](https://stackoverflow.com/questions/964850/best-setup-for-linux-development-from-windows). Do you recommend coLinux or its derivatives? If yes, any advice or considerations before I try it?
2009/11/15
[ "https://Stackoverflow.com/questions/1736655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/132597/" ]
I recommend VirtualBox+Ubuntu. Cygwin just doesn't cut it for certain tasks and is in beta for Win7.
The following suggestions hold if you are not going to do complex template programming, as the C++ IDEs other than Visual Studio cannot efficiently index modern C++ code (e.g. the Boost library). I would suggest using NetBeans (it has far better C++ support than Eclipse/CDT) with the following two build environments. Both are important if you want to cross-compile and test against POSIX and Win32. This is not a silver bullet; you should still test on different variants of UNIX once in a while. I would suggest installing MinGW and MSYS for Windows development; it's nice when you can use awk, grep, sed, etc. on your code :D Generative programming is easier with shell tools as well -- writing generative build scripts is painful to do effectively on the command line in Windows (PowerShell might have changed this). I would ALSO suggest installing Cygwin and using it on the side. MinGW is for programming against the low-level Win32 API; Cygwin is for programming against the POSIX standard. Cygwin also compiles a lot of software that you would otherwise have to port. Also, once you get your project up and running you can use CMake as the build environment; it's the best thing since sliced bread :P You can get it to spit out build definitions for anything and everything -- including Visual Studio.
16,648,670
I am designing the structure of a MySQL database and have run into a small design decision. I have 2 tables:

1. All messages published on the site.
2. All comments published on the site.

Every message can have more than one comment associated with it. What is the better way to connect a message with the comments related to it?

1. Have a field in the comments table that contains the id of the related message.
2. Have a field in the messages table that contains an array of ids of related comments in JSON format.

I think the first method is usually used, with a MySQL query then finding the comments whose message\_id matches the corresponding message. But how efficient will that be when there are hundreds of thousands of comments? Would decoding the JSON string and accessing comments by exact unique id be more efficient and faster in this case? I am using Python for the back-end, if that matters.
2013/05/20
[ "https://Stackoverflow.com/questions/16648670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/656100/" ]
The first option is the way to go. So you'll have:

comment\_id | message\_id | comment\_text | timestamp, etc.

For your MySQL table you can build the index over the first two columns for good performance. 10 million comments should work OK, but you could verify this in advance with a test scenario of your own. If you want to plan for more, then after about 100,000 comments you can do the following (a sketch of the routing logic follows this list):

* determine how many comments there are on average per message
* determine how many messages would be required for about 5 million comments
* let's say it takes 50,000 messages for 5 million comments
* add comment\_table1 [..] comment\_table9 to your database
* switch within Python: if message\_id > 50,000, then look at comment\_table2, etc.
* of course, you'll have to save the comments accordingly

This should perform well for a large number of entries. You can adapt the numbers to your individual hosting (performance) environment...
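A minimal sketch of that table-routing idea, assuming the 50,000-messages-per-shard figure from above (the table names and the helper function are hypothetical, not part of the original answer):

```python
MESSAGES_PER_SHARD = 50_000  # assumption: taken from the estimate above

def comment_table(message_id):
    """Map a message id to the name of the shard table holding its comments."""
    shard = message_id // MESSAGES_PER_SHARD + 1
    return f"comment_table{shard}"

print(comment_table(10_000))  # comment_table1
print(comment_table(60_000))  # comment_table2
```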
Option one is the best approach. You'll want an index on the `message_id` column in the comments table. This allows MySQL to quickly and efficiently pull out all the comments for a particular message, even when there are hundreds of thousands of comments.
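As a rough sketch of that layout, the snippet below uses Python's built-in `sqlite3` driver so it runs anywhere; the same `CREATE INDEX` statement applies to MySQL. The column names are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (message_id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE comments (
        comment_id INTEGER PRIMARY KEY,
        message_id INTEGER NOT NULL REFERENCES messages(message_id),
        comment_text TEXT
    );
    -- the index that makes "all comments for a message" fast
    CREATE INDEX idx_comments_message_id ON comments(message_id);
""")

conn.execute("INSERT INTO messages VALUES (1, 'hello')")
conn.execute("INSERT INTO comments VALUES (1, 1, 'first!')")

# With the index in place, this lookup does not scan the whole comments table.
rows = conn.execute(
    "SELECT comment_text FROM comments WHERE message_id = ?", (1,)
).fetchall()
print(rows)  # [('first!',)]
```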
71,607,064
In openai.py, `Completion.create` is highlighted as an error and is not working. The error is shown below. What is the problem with the code?

```
response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Generate blog topic on: Ethical hacking",
  temperature=0.7,
  max_tokens=256,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0
)

$ python openai.py
Traceback (most recent call last):
  File "E:\python\openAI\openai.py", line 2, in <module>
    import openai
  File "E:\python\openAI\openai.py", line 9, in <module>
    response = openai.Completion.create(
AttributeError: partially initialized module 'openai' has no attribute 'Completion' (most likely due to a circular import)
```
2022/03/24
[ "https://Stackoverflow.com/questions/71607064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16521679/" ]
For my fellow doofuses going through all the above suggestions and wondering why it's not working: make sure your file is NOT named `openai.py`, because then it will import itself, because Python. Wasted 2 hours on this nonsense lol. Relevant link: [How to fix AttributeError: partially initialized module?](https://stackoverflow.com/questions/59762996/how-to-fix-attributeerror-partially-initialized-module)
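A quick way to confirm the shadowing is to print where Python actually found the module; if the path points back at your own script instead of site-packages, renaming the file is the fix:

```python
# diagnose.py -- deliberately NOT named openai.py
import openai

print(openai.__file__)  # should point into site-packages, not your project folder
```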
Try this instead: `engine="davinci"`.
71,607,064
In openai.py, `Completion.create` is highlighted as an error and is not working. The error is shown below. What is the problem with the code?

```
response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Generate blog topic on: Ethical hacking",
  temperature=0.7,
  max_tokens=256,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0
)

$ python openai.py
Traceback (most recent call last):
  File "E:\python\openAI\openai.py", line 2, in <module>
    import openai
  File "E:\python\openAI\openai.py", line 9, in <module>
    response = openai.Completion.create(
AttributeError: partially initialized module 'openai' has no attribute 'Completion' (most likely due to a circular import)
```
2022/03/24
[ "https://Stackoverflow.com/questions/71607064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16521679/" ]
For my fellow doofuses going through all the above suggestions and wondering why it's not working: make sure your file is NOT named `openai.py`, because then it will import itself, because Python. Wasted 2 hours on this nonsense lol. Relevant link: [How to fix AttributeError: partially initialized module?](https://stackoverflow.com/questions/59762996/how-to-fix-attributeerror-partially-initialized-module)
I tried openai version 0.18.1 and was able to run a sample GPT-3 snippet:

```
# pip install openai==0.18.1
import openai
import config

openai.api_key = config.OPENAI_API_KEY if 'OPENAI_API_KEY' in dir(config) else ''
print(f'openai.api_key : {openai.api_key}')

def openAIQuery(query):
    response = openai.Completion.create(
        engine="davinci-instruct-beta-v3",
        prompt=query,
        temperature=0.8,
        max_tokens=200,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0)
    if 'choices' in response:
        if len(response['choices']) > 0:
            answer = response['choices'][0]['text']
        else:
            answer = 'Oops sorry, you beat the AI this time'
    else:
        answer = 'Oops sorry, you beat the AI this time'
    return answer

if __name__ == '__main__':
    if not openai.api_key:
        print('api_key is not set')
        exit(0)
    query = 'Generate a keras 3 layer neural network python code for classification'
    try:
        response = openAIQuery(query)
        print(f'Response : {response}')
    except Exception as e:
        print(f'Exception : {str(e)}')
```
71,607,064
In openai.py, `Completion.create` is highlighted as an error and is not working. The error is shown below. What is the problem with the code?

```
response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Generate blog topic on: Ethical hacking",
  temperature=0.7,
  max_tokens=256,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0
)

$ python openai.py
Traceback (most recent call last):
  File "E:\python\openAI\openai.py", line 2, in <module>
    import openai
  File "E:\python\openAI\openai.py", line 9, in <module>
    response = openai.Completion.create(
AttributeError: partially initialized module 'openai' has no attribute 'Completion' (most likely due to a circular import)
```
2022/03/24
[ "https://Stackoverflow.com/questions/71607064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16521679/" ]
For my fellow doofuses going through all the above suggestions and wondering why it's not working: make sure your file is NOT named `openai.py`, because then it will import itself, because Python. Wasted 2 hours on this nonsense lol. Relevant link: [How to fix AttributeError: partially initialized module?](https://stackoverflow.com/questions/59762996/how-to-fix-attributeerror-partially-initialized-module)
Upgrade the `openai` module or try reinstalling it. `text-davinci-002` is a correct engine name, so there is no need to change it.
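A small sketch for checking the installed version before upgrading; it uses only the standard library, so it works regardless of the `openai` package's internals:

```python
from importlib.metadata import version  # Python 3.8+

# e.g. prints 0.18.1; if it's old, run: python -m pip install --upgrade openai
print(version("openai"))
```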
57,449,963
I want to install Ansible on RHEL 8. To use `yum install ansible` I must enable an EPEL release, but I can't find a good source of the EPEL release for RHEL 8.

**I tried this**

```
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install ansible
```

**The output I got is**

```
Last metadata expiration check: 0:01:26 ago on Sun 11 Aug 2019 12:21:55 PM UTC.
Error:
 Problem: conflicting requests
  - nothing provides python-setuptools needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-jinja2 needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-six needed by ansible-2.8.2-1.el7.noarch
  - nothing provides PyYAML needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python2-cryptography needed by ansible-2.8.2-1.el7.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
2019/08/11
[ "https://Stackoverflow.com/questions/57449963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7179457/" ]
EPEL 8 is not released yet. There are some packages available, but a lot are still being worked on and the repo is not considered "generally available". For now, you can install Ansible from the Python Package Index (PyPI):

```
yum install python3-pip
pip3 install ansible
```
If you are using RHEL 8 then you can use the subscription manager to get Ansible with the host and config file pre-built. Also, you will need to create an account on <https://developers.redhat.com> before you can do this:

```
subscription-manager register --auto-attach
subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms
yum -y install ansible
ansible --version
```
57,449,963
I want to install Ansible on RHEL 8. To use `yum install ansible` I must enable an EPEL release, but I can't find a good source of the EPEL release for RHEL 8.

**I tried this**

```
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install ansible
```

**The output I got is**

```
Last metadata expiration check: 0:01:26 ago on Sun 11 Aug 2019 12:21:55 PM UTC.
Error:
 Problem: conflicting requests
  - nothing provides python-setuptools needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-jinja2 needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-six needed by ansible-2.8.2-1.el7.noarch
  - nothing provides PyYAML needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python2-cryptography needed by ansible-2.8.2-1.el7.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
2019/08/11
[ "https://Stackoverflow.com/questions/57449963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7179457/" ]
EPEL 8 is not released yet. There are some packages available, but a lot are still being worked on and the repo is not considered "generally available". For now, you can install Ansible from the Python Package Index (PyPI):

```
yum install python3-pip
pip3 install ansible
```
This worked for RHEL 9 and should work for RHEL 8 as well:

```
[root@controller yum.repos.d]# yum list | grep ansible
ansible-collection-microsoft-sql.noarch      1.1.1-3.el9_0   Local-AppStream
ansible-collection-redhat-rhel_mgmt.noarch   1.0.0-2.el9     Local-AppStream
ansible-core.x86_64                          2.12.2-1.el9    Local-AppStream
ansible-freeipa.noarch                       1.6.3-1.el9     Local-AppStream
ansible-freeipa-tests.noarch                 1.6.3-1.el9     Local-AppStream
ansible-pcp.noarch                           2.2.2-2.el9     Local-AppStream
ansible-test.x86_64                          2.12.2-1.el9    Local-AppStream

[root@controller yum.repos.d]# yum install ansible-core.x86_64
```
57,449,963
I want to install Ansible on RHEL 8. To use `yum install ansible` I must enable an EPEL release, but I can't find a good source of the EPEL release for RHEL 8.

**I tried this**

```
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install ansible
```

**The output I got is**

```
Last metadata expiration check: 0:01:26 ago on Sun 11 Aug 2019 12:21:55 PM UTC.
Error:
 Problem: conflicting requests
  - nothing provides python-setuptools needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-jinja2 needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python-six needed by ansible-2.8.2-1.el7.noarch
  - nothing provides PyYAML needed by ansible-2.8.2-1.el7.noarch
  - nothing provides python2-cryptography needed by ansible-2.8.2-1.el7.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
2019/08/11
[ "https://Stackoverflow.com/questions/57449963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7179457/" ]
If you are using RHEL 8 then you can use the subscription manager to get Ansible with the host and config file pre-built. Also, you will need to create an account on <https://developers.redhat.com> before you can do this:

```
subscription-manager register --auto-attach
subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms
yum -y install ansible
ansible --version
```
This worked for RHEL 9 and should work for RHEL 8 as well: ``` [root@controller yum.repos.d]# yum list | grep ansible ansible-collection-microsoft-sql.noarch 1.1.1-3.el9_0 Local-AppStream ansible-collection-redhat-rhel_mgmt.noarch 1.0.0-2.el9 Local-AppStream ansible-core.x86_64 2.12.2-1.el9 Local-AppStream ansible-freeipa.noarch 1.6.3-1.el9 Local-AppStream ansible-freeipa-tests.noarch 1.6.3-1.el9 Local-AppStream ansible-pcp.noarch 2.2.2-2.el9 Local-AppStream ansible-test.x86_64 2.12.2-1.el9 Local-AppStream [root@controller yum.repos.d]# yum install ansible-core.x86_64 ```
7,330,279
I am writing a Python interface to a C++ library and am wondering about the correct design of the library. I have found out (the hard way) that all methods passed to Python must be declared static. If I understand correctly, this means that all functions basically must be defined in the same .cpp file. My interface has many functions, so this gets ugly very quickly. What is the standard way to deal with this problem? Possibilities I could think of: * don't worry about it and use one looong .cpp file * compile into more than one library (.so file) * write a .cpp for each group of functions and #include that .cpp into the body of the main defining .cpp file (the one with the PyMethodDef) All of these seem very ugly.
2011/09/07
[ "https://Stackoverflow.com/questions/7330279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/446137/" ]
> > I have found out (the hard way) that all methods passed to python must > be declared static. If I understand correctly, this means that all > functions basically must be defined in the same .cpp file. My > interface has many functions, so this gets ugly very quickly. > > > Where did you find this out? It isn't true. The keyword `static` means two different things in C++. There is class-static, which means a class-scoped function is called without an instance of the object (just like a normal function). There is also static linkage, which means your functions do not get added to the global offset table and you'll have a tough time finding them outside of the translation unit (CPP file). I would recommend looking at [Boost.Python](http://www.boost.org/doc/libs/1_47_0/libs/python/doc/index.html). They have solved many of the problems you would encounter and make it extremely easy to make C++ and Python talk to each other.
Why do you say that all functions called by Python have to be static? It's usual for that to be the case, in order to avoid name conflicts (since any namespace, etc. will be ignored because of the `extern "C"`), but whether the function is static or not is of no consequence. When interfacing a library in C++, in my experience, it's generally not a big problem to make the functions static and to put all of them in a single translation unit, because they will be just small wrappers which call the actual C++ and, normally, will be automatically generated from some sort of descriptor file; you surely aren't going to write all of the necessary boilerplate by hand.
7,542,421
[Python Challenge #2](http://www.pythonchallenge.com/pc/def/ocr.html) [Answer I found](http://ymcagodme.blogspot.com/2011/04/python-challenge-level-2.html) ``` FILE_PATH = 'l2-text' f = open(FILE_PATH) print ''.join([ t for t in f.read() if t.isalpha()]) f.close() ``` Question: Why is there a 't' before the for loop `t for t in f.read()`? I understand the rest of the code except for that one bit. If I try to remove it I get an error, so what does it do? Thanks.
2011/09/24
[ "https://Stackoverflow.com/questions/7542421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/963081/" ]
`[t for t in f.read() if t.isalpha()]` is a list comprehension. Basically, it takes the given iterable (`f.read()`) and forms a list from all the elements read, applying an optional filter (the `if` clause) and a mapping function (the part on the left of the `for`). However, the mapping part is trivial here, which makes the syntax look a bit redundant: for each element `t` given, it just adds the element value (`t`) to the output list. But more complex expressions are possible; for example, `t*2 for t ...` would duplicate all valid characters.
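A minimal sketch of the same pattern with a non-trivial mapping on the left-hand side, using a made-up input string:

```
text = "ab1c2d"  # made-up sample input

# Filter-only comprehension: keeps each alphabetic character unchanged.
letters = [t for t in text if t.isalpha()]      # ['a', 'b', 'c', 'd']

# Same filter, but with an actual mapping to the left of the `for`.
doubled = [t * 2 for t in text if t.isalpha()]  # ['aa', 'bb', 'cc', 'dd']

print(letters, doubled)
```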
This is a [list comprehension](http://www.python.org/doc//current/tutorial/datastructures.html#list-comprehensions), not a `for`-loop. > > List comprehensions provide a concise way to create lists. > > > ``` [t for t in f.read() if t.isalpha()] ``` This creates a [`list`](http://www.python.org/doc//current/tutorial/datastructures.html#more-on-lists) of all of the `alpha` characters in the file (`f`). You then [`join()`](http://www.python.org/doc//current/library/string.html#string.join) them all together. You now have a link to the documentation, which should help you comprehend comprehensions. It's tricky to search for things when you don't know what they're called! Hope this helps.
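Putting the comprehension and the `join()` together on a small in-memory string (a stand-in for the challenge file, whose contents aren't shown here):

```
# Stand-in for f.read(); the real challenge file is much longer.
data = "x%3@q(u)a#l!i5t^y"

# Keep only the alphabetic characters, then glue them into one string.
result = ''.join([t for t in data if t.isalpha()])
print(result)  # xquality
```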
7,542,421
[Python Challenge #2](http://www.pythonchallenge.com/pc/def/ocr.html) [Answer I found](http://ymcagodme.blogspot.com/2011/04/python-challenge-level-2.html) ``` FILE_PATH = 'l2-text' f = open(FILE_PATH) print ''.join([ t for t in f.read() if t.isalpha()]) f.close() ``` Question: Why is there a 't' before the for loop `t for t in f.read()`? I understand the rest of the code except for that one bit. If I try to remove it I get an error, so what does it do? Thanks.
2011/09/24
[ "https://Stackoverflow.com/questions/7542421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/963081/" ]
This is a [list comprehension](http://www.python.org/doc//current/tutorial/datastructures.html#list-comprehensions), not a `for`-loop. > > List comprehensions provide a concise way to create lists. > > > ``` [t for t in f.read() if t.isalpha()] ``` This creates a [`list`](http://www.python.org/doc//current/tutorial/datastructures.html#more-on-lists) of all of the `alpha` characters in the file (`f`). You then [`join()`](http://www.python.org/doc//current/library/string.html#string.join) them all together. You now have a link to the documentation, which should help you comprehend comprehensions. It's tricky to search for things when you don't know what they're called! Hope this helps.
Note that the following is also valid: ``` print ''.join((t for t in f.read() if t.isalpha())) ``` Instead of [ and ] you have ( and ). This specifies a generator instead of a list. See [generator comprehension](https://stackoverflow.com/questions/364802/generator-comprehension).
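A short sketch of the practical difference: the generator version is lazy, so no intermediate list is built up front:

```
data = "ab1c2"

list_version = [t for t in data if t.isalpha()]  # builds the whole list immediately
gen_version = (t for t in data if t.isalpha())   # builds nothing yet

print(list_version)          # ['a', 'b', 'c']
print(next(gen_version))     # 'a' -- items are produced one at a time
print(''.join(gen_version))  # 'bc' -- join() consumes the rest lazily
```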
7,542,421
[Python Challenge #2](http://www.pythonchallenge.com/pc/def/ocr.html) [Answer I found](http://ymcagodme.blogspot.com/2011/04/python-challenge-level-2.html) ``` FILE_PATH = 'l2-text' f = open(FILE_PATH) print ''.join([ t for t in f.read() if t.isalpha()]) f.close() ``` Question: Why is there a 't' before the for loop `t for t in f.read()`? I understand the rest of the code except for that one bit. If I try to remove it I get an error, so what does it do? Thanks.
2011/09/24
[ "https://Stackoverflow.com/questions/7542421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/963081/" ]
`[t for t in f.read() if t.isalpha()]` is a list comprehension. Basically, it takes the given iterable (`f.read()`) and forms a list from all the elements read, applying an optional filter (the `if` clause) and a mapping function (the part on the left of the `for`). However, the mapping part is trivial here, which makes the syntax look a bit redundant: for each element `t` given, it just adds the element value (`t`) to the output list. But more complex expressions are possible; for example, `t*2 for t ...` would duplicate all valid characters.
Note that the following is also valid: ``` print ''.join((t for t in f.read() if t.isalpha())) ``` Instead of [ and ] you have ( and ). This specifies a generator instead of a list. See [generator comprehension](https://stackoverflow.com/questions/364802/generator-comprehension).
57,077,432
I was trying to add two tuples to create a new sort of nested tuple using the coerce function of Python. I'm using Python version 3.7, which is showing that the function isn't defined. It is supposed to be a built-in function in Python.
2019/07/17
[ "https://Stackoverflow.com/questions/57077432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11797954/" ]
Including `Building` will also include `MultiApartmentBuilding` entries (in fact all types deriving from `Building`). You can use C# 7.0's pattern matching to test and cast at the same time (where `apartments` is the result of the query): ``` foreach (Apartment apartment in apartments) { // Access common Building field. Console.WriteLine(apartment.Building.Id); // Access specialized field from derived building type. if (apartment.Building is MultiApartmentBuilding maBuilding) { Console.WriteLine(maBuilding.GroundFloorCount); } } ``` If you have many types of buildings, you can use pattern matching in the switch statement ``` switch (apartment.Building) { case MultiApartmentBuilding maBuilding: Console.WriteLine(maBuilding.GroundFloorCount); break; case Igloo igloo: Console.WriteLine(igloo.SnowQuality); break; default: Console.WriteLine("all other building types"); break; } ```
You can't access the child's class attributes. In other words, if you have a Building, you can't access its **MultiApartmentBuilding** attributes, because you don't even know if it really is a **MultiApartmentBuilding**. What I would do in this case would be to change your **Apartment** class and use the type **MultiApartmentBuilding** instead of **Building**: ``` public class Apartment : EntityBase { public int Id { get; set; } public int BuildingId { get; set; } public MultiApartmentBuilding MultiApartmentBuilding { get; set; } public Common.Enums.ApartmentState State { get; set; } public AccessibilityState Accessibility { get; set; } public int Floor { get; set; } public bool IsPentHouse { get; set; } } ```
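For the record, `coerce()` was a Python 2 built-in that was removed in Python 3, which is why 3.7 reports it as undefined; in Python 3, tuples are combined with plain `+`, or nested by wrapping them:

```
a = (1, 2)
b = (3, 4)

print(a + b)   # (1, 2, 3, 4) -- plain concatenation
print((a, b))  # ((1, 2), (3, 4)) -- a nested tuple, no coerce() needed
```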
24,452,972
I would like to extract all the lines from the first file (gzipped \*.gz, i.e. Input.csv.gz); if the first file's 4th field falls within a range given by the second file's (Slab.csv) first field (Start Range) and second field (End Range), then populate the slab-wise count of rows and the sum of the 4th and 5th fields of the first file. Input.csv.gz (gzipped) ``` Desc,Date,Zone,Duration,Calls AB,01-06-2014,XYZ,450,3 AB,01-06-2014,XYZ,642,3 AB,01-06-2014,XYZ,0,0 AB,01-06-2014,XYZ,205,3 AB,01-06-2014,XYZ,98,1 AB,01-06-2014,XYZ,455,1 AB,01-06-2014,XYZ,120,1 AB,01-06-2014,XYZ,0,0 AB,01-06-2014,XYZ,193,1 AB,01-06-2014,XYZ,0,0 AB,01-06-2014,XYZ,161,2 ``` Slab.csv ``` StartRange,EndRange 0,0 1,10 11,100 101,200 201,300 301,400 401,500 501,10000 ``` Expected Output: ``` StartRange,EndRange,Count,Sum-4,Sum-5 0,0,3,0,0 1,10,NotFound,NotFound,NotFound 11,100,1,98,1 101,200,3,474,4 201,300,1,205,3 301,400,NotFound,NotFound,NotFound 401,500,2,905,4 501,10000,1,642,3 ``` I am using the below two commands to get the above output, except for the "NotFound" cases. ``` awk -F, 'NR==FNR{s[NR]=$1;e[NR]=$2;c[NR]=$0;n++;next} {for(i=1;i<=n;i++) if($4>=s[i]&&$4<=e[i]) {print $0,","c[i];break}}' Slab.csv <(gzip -dc Input.csv.gz) >Op_step1.csv cat Op_step1.csv | awk -F, '{key=$6","$7;++a[key];b[key]=b[key]+$4;c[key]=c[key]+$5} END{for(i in a)print i","a[i]","b[i]","c[i]}' >Op_step2.csv ``` Op\_step2.csv ``` 101,200,3,474,4 501,10000,1,642,3 0,0,3,0,0 401,500,2,905,4 11,100,1,98,1 201,300,1,205,3 ``` Any suggestions to make it a one-liner command to achieve the Expected Output? I don't have perl or python access.
2014/06/27
[ "https://Stackoverflow.com/questions/24452972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3350223/" ]
Here is one way using `awk` and `sort`: ``` awk ' BEGIN { FS = OFS = SUBSEP = ","; print "StartRange,EndRange,Count,Sum-4,Sum-5" } FNR == 1 { next } NR == FNR { ranges[$1,$2]++; next } { for (range in ranges) { split(range, tmp, SUBSEP); if ($4 >= tmp[1] && $4 <= tmp[2]) { count[range]++; sum4[range]+=$4; sum5[range]+=$5; next } } } END { for(range in ranges) print range, (count[range]?count[range]:"NotFound"), (sum4[range]?sum4[range]:"NotFound"), (sum5[range]?sum5[range]:"NotFound") | "sort -t, -nk1,2" }' slab input StartRange,EndRange,Count,Sum-4,Sum-5 0,0,3,NotFound,NotFound 1,10,NotFound,NotFound,NotFound 11,100,1,98,1 101,200,3,474,4 201,300,1,205,3 301,400,NotFound,NotFound,NotFound 401,500,2,905,4 501,10000,1,642,3 ``` * Set the Input, Output Field Separators and `SUBSEP` to `,`. Print the Header line. * If it is the first line skip it. * Load the entire `slab.txt` in to an array called `ranges`. * For every range in the `ranges` array, split the field to get start and end range. If the 4th column is in the range, increment the count array and add the value to `sum4` and `sum5` array appropriately. * In the `END` block, iterate through the ranges and print them. * Pipe the output to `sort` to get the output in order.
Here is another option using `perl` which takes benefits of creating multi-dimensional arrays and hashes. ``` perl -F, -lane' BEGIN { $x = pop; ## Create array of arrays from start and end ranges ## $range = ( [0,0] , [1,10] ... ) (undef, @range)= map { chomp; [split /,/] } <>; @ARGV = $x; } ## Skip the first line next if $. ==1; ## Create hash of hash ## $line = '[0,0]' => { "count" => counts , "sum4" => sum_of_col4 , "sum5" => sum_of_col5 } for (@range) { if ($F[3] >= $_->[0] && $F[3] <= $_->[1]) { $line{"@$_"}{"count"}++; $line{"@$_"}{"sum4"} +=$F[3]; $line{"@$_"}{"sum5"} +=$F[4]; } } }{ print "StartRange,EndRange,Count,Sum-4,Sum-5"; print join ",", @$_, $line{"@$_"}{"count"} //"NotFound", $line{"@$_"}{"sum4"} //"NotFound", $line{"@$_"}{"sum5"} //"NotFound" for @range ' slab input StartRange,EndRange,Count,Sum-4,Sum-5 0,0,3,0,0 1,10,NotFound,NotFound,NotFound 11,100,1,98,1 101,200,3,474,4 201,300,1,205,3 301,400,NotFound,NotFound,NotFound 401,500,2,905,4 501,10000,1,642,3 ```
44,335,494
So I downloaded Deuces, code for poker hand evaluations, and originally I think it was in Python 2, because all of the print statements had no parentheses. I fixed all of those, and everything seems to work, except this last part. Here is the code for it: ``` def get_lexographically_next_bit_sequence(self, bits): """ Bit hack from here: http://www-graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation Generator even does this in poker order rank so no need to sort when done! Perfect. """ t = (bits | (bits - 1)) + 1 next = t | ((((t & -t) / (bits & -bits)) >> 1) - 1) yield next while True: t = (next | (next - 1)) + 1 next = t | ((((t & -t) / (next & -next)) >> 1) - 1) yield next ``` I looked online and found that they are bit operators, but I don't understand why Python doesn't recognize them. Do I have to import something, or are those operators not used in Python 3? ``` File "/Volumes/PROJECTS/deuces/All_poker.py", line 709, in get_lexographically_next_bit_sequence next = t | ((((t and -t) / (bits and -bits)) // 2) - 1) ``` TypeError: unsupported operand type(s) for |: 'float' and 'float' This is the error I get, and the code can be found at <https://github.com/vitamins/deuces/tree/8222a6505979886171b8a0c581ef667f13c5d165>. It is in the last portion of the lookup class; I hit the error when I write ``` board = [ Card.new('Ah'), Card.new('Kd'), ('Jc') ] hand = [ Card.new('Qs'),Card.new('Th')] evaluator=Evaluator() ``` On that last line of code I get the error. All of the code can be found in the link.
2017/06/02
[ "https://Stackoverflow.com/questions/44335494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8062387/" ]
In accord with Arrivillaga's comment I had just modified what you had posted to this. ``` def get_lexographically_next_bit_sequence(bits): """ Bit hack from here: http://www-graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation Generator even does this in poker order rank so no need to sort when done! Perfect. """ t = (bits | (bits - 1)) + 1 next = t | ((((t & -t) // (bits & -bits)) >> 1) - 1) yield next while True: t = (next | (next - 1)) + 1 next = t | ((((t & -t) // (next & -next)) >> 1) - 1) yield next for i, g in enumerate(get_lexographically_next_bit_sequence(123)): print (g) if i > 10: break ``` Do these results seem reasonable? ``` 125 126 159 175 183 187 189 190 207 215 219 221 ```
It was the `/` symbol: as mentioned above, it needs to be floor division (`//`) in Python 3. A quick fix and it works fine.
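For reference, the behavioural difference behind the error: in Python 3, `/` always returns a float, while `//` keeps integers integral, which is what the bitwise `|` requires:

```
t, b = 12, 4

print(t / b)   # 3.0 -- true division always yields a float in Python 3
print(t // b)  # 3   -- floor division keeps ints as ints

print(8 | (t // b))  # fine: int | int
# 8 | (t / b) would raise: unsupported operand type(s) for |
```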
69,217,390
I'm trying to build a website in Python and Flask, however my CSS is not loading. I don't see anything wrong with my code, and I've tried the same code snippet from a few different sites. My link: ```html <link rel="stylesheet" href="{{ url_for('static', filename= 'css/style.css') }}"> ``` The file structure is as below: [![File Structure](https://i.stack.imgur.com/eyo3p.png)](https://i.stack.imgur.com/eyo3p.png) > > Error: 127.0.0.1 - - [16/Sep/2021 20:18:34] "GET /static/css/style.css > HTTP/1.1" 404 - > > >
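For context, a minimal sketch of the layout Flask expects by default; the file name `app.py` and the folder names are assumptions, since the real structure is only shown in the screenshot:

```
# app.py -- Flask serves ./static/* under /static/* by default.
# If the static folder lives elsewhere, point Flask at it explicitly.
from flask import Flask, render_template

app = Flask(__name__)  # or Flask(__name__, static_folder="path/to/static")

@app.route("/")
def index():
    # templates/index.html can then reference:
    # <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
```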
2021/09/17
[ "https://Stackoverflow.com/questions/69217390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11386215/" ]
The problem was probably with the numpy function 'percentile' and how I passed in my argument to the find\_outliers\_tukey function. So these changes worked for me step 1 ====== 1. Include two arguments; one for the name of df, another for the name of the feature. 2. Put the feature argument into the df explicitly. 3. Don't use attribute chaining when accessing the feature and use quantile instead of percentile. ``` def find_outliers_tukey(df:"dataframe", feature:"series") -> "list, list": "write later" q1 = df[feature].quantile(0.25) q3 = df[feature].quantile(0.75) iqr = q3-q1 floor = q1 -1.5*iqr ceiling = q3 +1.5*iqr outlier_indices = list(df.index[ (df[feature] < floor) | (df[feature] > ceiling) ]) #outlier_values = list(df[feature][outlier_indices]) #print(f"outliers are {outlier_values} at indices {outlier_indices}") #return outlier_indices, outlier_values return outlier_indices ``` step 2 ====== I put all the columns I wanted to remove outliers from into a list. ``` df_columns = list(df.columns[1:56]) ``` step 3 ====== no change here. Just used 2 arguments instead of 1 for the find\_outliers\_tukey function. Oh and I stored the indices of the outliers just for future use. ``` index_list = [] for feature in df_columns: index_list.extend(find_outliers_tukey(df, feature)) ``` This gave me better statistical results for the columns.
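A quick usage sketch of the reworked function with made-up data, just to show the calling convention:

```
import pandas as pd

# Made-up frame; 100 is an obvious outlier relative to the rest.
df = pd.DataFrame({"values": [1, 2, 3, 2, 1, 100]})

outliers = find_outliers_tukey(df, "values")
print(outliers)  # [5] -- the index of the value 100
```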
For Question 1, your code seems to work fine on my end, but of course I don't have your original data. For Question 2, there are two problems. The first is that you are passing the column *names* to `find_outliers_tukey` instead of the columns themselves. Use `iteritems` to iterate over pairs of `(column name, column Series)`: ```py for feature, column in df.iteritems(): tukey_indices, tukey_values = find_outliers_tukey(column) print(f"Outliers in {feature} are {tukey_values} \n") ``` The second problem, which you'll run into after solving the first problem, is that your `location` column is not a numeric column, so you won't be able to compute outliers for it. Make sure to only iterate over the columns that you actually want to perform the calculation on.
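One caveat: `DataFrame.iteritems` is deprecated on recent pandas releases and was removed in pandas 2.0, so on a modern install the same loop is spelled with `items`:

```
# Equivalent loop on modern pandas (iteritems was removed in pandas 2.0).
for feature, column in df.items():
    tukey_indices, tukey_values = find_outliers_tukey(column)
    print(f"Outliers in {feature} are {tukey_values} \n")
```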
54,695,126
I am trying to parse a webpage and print the links for items (href). Can you help with where I am going wrong? ``` import requests from bs4 import BeautifulSoup link = "https://www.amazon.in/Power- Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031" def amazon(url): sourcecode = requests.get(url) sourcecode_text = sourcecode.text soup = BeautifulSoup(sourcecode_text) for link in soup.findALL('a', {'class': 'a-link-normal aok-block a- text-normal'}): href = link.get('href') print(href) amazon(link) ``` Output: > > C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\python.exe > "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self > Basic/Class\_Test.py" Traceback (most recent call last): File > "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self > Basic/Class\_Test.py", line 15, in > amazon(link) File "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self Basic/Class\_Test.py", line 9, in > amazon > soup = BeautifulSoup(sourcecode\_text, 'features="html.parser"') File > "C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\lib\site-packages\bs4\_\_init\_\_.py", > line 196, in `__init__` > % ",".join(features)) bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: features="html.parser". Do > you need to install a parser library? > > > Process finished with exit code 1 > > >
2019/02/14
[ "https://Stackoverflow.com/questions/54695126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4992020/" ]
You can, though, add headers. Also, when you do `find_all('a')`, you can fetch just the tags that have an href: ``` import requests from bs4 import BeautifulSoup link = "https://www.amazon.in/Power-Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031" def amazon(url): headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'} sourcecode = requests.get(url, headers=headers) sourcecode_text = sourcecode.text soup = BeautifulSoup(sourcecode_text, 'html.parser') for link in soup.find_all('a', href=True): href = link.get('href') print(href) amazon(link) ```
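An equivalent, slightly terser variant using a CSS selector; the class names are taken from the question and may well have changed on the live page:

```
# Same idea with a CSS selector; [href] skips anchors without an href attribute.
for a in soup.select('a.a-link-normal.aok-block.a-text-normal[href]'):
    print(a['href'])
```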
If you try to scrape Amazon right now with `requests` you won't get anything in return, since Amazon will know that it's a script, and headers won't help (as far as I know). Instead, in response they will tell you the following: ``` To discuss automated access to Amazon data please contact api-services-support@amazon.com. ``` --- You can scrape Amazon using `requests-html` or `selenium` by rendering it. A simple `requests-html` example scraping titles (the results will be similar if you open the same link in an incognito tab): ``` from requests_html import HTMLSession session = HTMLSession() url = 'https://www.amazon.com/s?k=apple+watch+series+6+band' r = session.get(url) r.html.render(sleep=1, keep_page=True, scrolldown = 1) for container in r.html.find('.a-size-medium'): title = container.text print(f"Title: {title}") ``` Output: ```none Title: New Apple Watch Series 6 (GPS, 40mm) - (Product) RED - Aluminum Case with (Product) RED - Sport Band Title: SUPCASE [Unicorn Beetle Pro] Designed for Apple Watch Series 6/SE/5/4 [44mm], Rugged Protective Case with Strap Bands(Black) Title: Spigen Rugged Armor Pro Designed for Apple Watch Band with Case for 44mm Series 6/SE/5/4 - Charcoal Gray Title: Highly rated and well-priced products Title: Fitlink Stainless Steel Metal Band for Apple Watch 38/40/42/44mm Replacement Link Bracelet Band Compatible with Apple Watch Series 6 Apple Watch Series 5 Apple Watch Series 1/2/3/4 (Grey,42/44mm) Title: TalkWorks Compatible for Apple Watch Band 42mm / 44mm Comfort Fit Mesh Loop Stainless Steel Adjustable Magnetic Strap for iWatch Series 6, 5, 4, 3, 2, 1, SE - Rose Gold Title: COOYA Compatible for Apple Watch Band 44mm 42mm Women Men iWatch Wristband with Protective Rugged Case Sport Strap Adjustable Replacement Band Compatible with Apple Watch Series 6 SE 5 4 3 2, Clear Title: Stainless Steel Metal Bands Compatible with Apple Watch Band 42mm 44mm, Gold Replacement Strap with Adapter+Case Cover Compatible with iWatch Series 6 5 4 3 2 1 SE Sport Title: elago W2 Charger Stand Compatible with Apple Watch Series 6/SE/5/4/3/2/1 (44mm, 42mm, 40mm, 38mm), Durable Silicone, Compatible with Nightstand Mode (Black) Title: Element Case Black Ops Watch Band for Apple Watch Series 4/5/6/SE, 44mm - Black (EMT-522-244A-01) ... ```
54,695,126
I am trying to parse a webpage and print the links for items (href). Can you help with where I am going wrong? ``` import requests from bs4 import BeautifulSoup link = "https://www.amazon.in/Power- Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031" def amazon(url): sourcecode = requests.get(url) sourcecode_text = sourcecode.text soup = BeautifulSoup(sourcecode_text) for link in soup.findALL('a', {'class': 'a-link-normal aok-block a- text-normal'}): href = link.get('href') print(href) amazon(link) ``` Output: > > C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\python.exe > "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self > Basic/Class\_Test.py" Traceback (most recent call last): File > "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self > Basic/Class\_Test.py", line 15, in > amazon(link) File "C:/Users/TIMAH/OneDrive/study materials/Python\_Test\_Scripts/Self Basic/Class\_Test.py", line 9, in > amazon > soup = BeautifulSoup(sourcecode\_text, 'features="html.parser"') File > "C:\Users\TIMAH\AppData\Local\Programs\Python\Python37\lib\site-packages\bs4\_\_init\_\_.py", > line 196, in `__init__` > % ",".join(features)) bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: features="html.parser". Do > you need to install a parser library? > > > Process finished with exit code 1 > > >
2019/02/14
[ "https://Stackoverflow.com/questions/54695126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4992020/" ]
The problem in your code is that you are using the wrong method name, `findALL`. There is no `findALL` method on the soup object, so `None` is returned for it. To fix that, use `find_all` in new code; `findAll` should also work (with a lowercase double "l"). Hope this clears things up. ``` import requests from bs4 import BeautifulSoup link = "https://www.amazon.in/Power-Banks/b/ref=nav_shopall_sbc_mobcomp_powerbank?ie=UTF8&node=6612025031" def amazon(url): sourcecode = requests.get(url) sourcecode_text = sourcecode.text soup = BeautifulSoup(sourcecode_text, "html.parser") # pass "html.parser" as the second arg so you don't get a warning # use soup.find_all in new code; soup.findAll also works for link in soup.find_all('a', {'class': 'a-link-normal aok-block a-text-normal'}): href = link.get('href') print(href) amazon(link) ```
If you try to scrape Amazon right now with `requests` you won't get anything in return, since Amazon will know that it's a script, and headers won't help (as far as I know). Instead, in response they will tell you the following: ``` To discuss automated access to Amazon data please contact api-services-support@amazon.com. ``` --- You can scrape Amazon using `requests-html` or `selenium` by rendering it. A simple `requests-html` example scraping titles (the results will be similar if you open the same link in an incognito tab): ``` from requests_html import HTMLSession session = HTMLSession() url = 'https://www.amazon.com/s?k=apple+watch+series+6+band' r = session.get(url) r.html.render(sleep=1, keep_page=True, scrolldown = 1) for container in r.html.find('.a-size-medium'): title = container.text print(f"Title: {title}") ``` Output: ```none Title: New Apple Watch Series 6 (GPS, 40mm) - (Product) RED - Aluminum Case with (Product) RED - Sport Band Title: SUPCASE [Unicorn Beetle Pro] Designed for Apple Watch Series 6/SE/5/4 [44mm], Rugged Protective Case with Strap Bands(Black) Title: Spigen Rugged Armor Pro Designed for Apple Watch Band with Case for 44mm Series 6/SE/5/4 - Charcoal Gray Title: Highly rated and well-priced products Title: Fitlink Stainless Steel Metal Band for Apple Watch 38/40/42/44mm Replacement Link Bracelet Band Compatible with Apple Watch Series 6 Apple Watch Series 5 Apple Watch Series 1/2/3/4 (Grey,42/44mm) Title: TalkWorks Compatible for Apple Watch Band 42mm / 44mm Comfort Fit Mesh Loop Stainless Steel Adjustable Magnetic Strap for iWatch Series 6, 5, 4, 3, 2, 1, SE - Rose Gold Title: COOYA Compatible for Apple Watch Band 44mm 42mm Women Men iWatch Wristband with Protective Rugged Case Sport Strap Adjustable Replacement Band Compatible with Apple Watch Series 6 SE 5 4 3 2, Clear Title: Stainless Steel Metal Bands Compatible with Apple Watch Band 42mm 44mm, Gold Replacement Strap with Adapter+Case Cover Compatible with iWatch Series 6 5 4 3 2 1 SE Sport Title: elago W2 Charger Stand Compatible with Apple Watch Series 6/SE/5/4/3/2/1 (44mm, 42mm, 40mm, 38mm), Durable Silicone, Compatible with Nightstand Mode (Black) Title: Element Case Black Ops Watch Band for Apple Watch Series 4/5/6/SE, 44mm - Black (EMT-522-244A-01) ... ```
52,788,039
I'm given a task to convert a **Perl script to Python**. I'm really new to Perl, and while trying to understand it I came across a command-line option, `-Sx`. There is good documentation for these parameters in Perl, but not much documentation for the same in Python (I didn't find much info on the official Python site). My question is: are the command-line options `-Sx` the same for both **Perl** and **Python**? Do they achieve the same task in both?
2018/10/12
[ "https://Stackoverflow.com/questions/52788039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10496576/" ]
Questions I asked in a comment: > > Have you thought about if that bit of shell you were asking about is really necessary for what you're doing? Or are you just trying to blindly translate without understanding what things are doing? > > > I'm pretty sure the answers are no and yes respectively. That's not a good place to be when you're trying to translate code from one language to another; you should understand **what** is going on in the original and make your new version do the same thing in whatever way is most appropriate for the new language, and not get trapped into some blind-leading-the-blind cargo cult code where you have no idea what's going on or how to fix it when it invariably doesn't work. It doesn't help that based on your [other question](https://stackoverflow.com/questions/52785232/what-does-exec-perl-perl-sx-0-1-mean-in-shell-script) your source program that you're trying to translate is rather confusing if you've never seen one like it before. You have a shell script that, as the only thing it does, is run perl with a program whose source is directly embedded in the shell script. The reason to do this is to have the real script run under different perl installs on the same computer depending on the environment (Personally I'd put the perl code in its own separate file instead of trying to be clever with having it directly in the shell script; [perlbrew documentation](https://perlbrew.pl/Perlbrew-In-Shell-Scripts.html) examples take that approach). Is that something you need to be concerned about with the python version? I'm guessing probably not (and if it is, look into pythonic ways to do it, not obscure perlish ways). Which means the answer to another question > > Are you sure you even *need* equivalents [to -S and -x]? > > > is no, I don't think you do. I think you should just keep it to pure python, making the things those options do irrelevant.
The following snippet is used to support ancient systems that predate the existence of `python`: ``` #!/bin/sh exec perl -Sx $0 ${1+"$@"} if 0; ``` Now, [it appears](https://stackoverflow.com/questions/52785232/what-does-exec-perl-perl-sx-0-1-mean-in-shell-script) that you are dealing with a bastardized and modified version, but it makes no more sense to use that. If the caller wants to use a specific `perl`, they should use `"$PERL" script` instead of relying on `script` to use `$PERL`. So, you should be using the following for the Perl script: ``` #!/usr/bin/env perl ``` or ``` #!/path/to/the/installing/perl ``` So, you should be using the following for the Python script: ``` #!/usr/bin/env python ``` or ``` #!/path/to/the/installing/python ```
48,466,337
I have been working on creating a python GUI for some work. I would self-describe as a novice when it comes to my Python knowledge. I am using wxPython and wxGlade to help with the GUI development, as well. The problem is as follows: I have an empty TextCtrl object and a Button next to it. The Button is meant to open a FileDialog and populate or replace the TextCtrl with the value of the file location that is selected. I have created the functionality for the button to open the FileDialog but I can't seem to figure out how to populate the TextCtrl with that resulting value. ``` import wx class frmCheckSubmital(wx.Frame): def __init__(self, *args, **kwds): # begin wxGlade: frmCheckSubmitall.__init__ kwds["style"] = kwds.get("style", 0) | wx.DEFAULT_FRAME_STYLE wx.Frame.__init__(self, *args, **kwds) self.rbxUtilitySelect = wx.RadioBox(self, wx.ID_ANY, "Utility", choices=["Stormwater", "Sewer", "Water"], majorDimension=1, style=wx.RA_SPECIFY_ROWS) self.txtFeaturesPath = wx.TextCtrl(self, wx.ID_ANY, "") self.btnSelectFeatures = wx.Button(self, wx.ID_ANY, "Select") # selectEvent = lambda event, pathname=txt: self.dialogFeatures(event, pathname) self.btnSelectFeatures.Bind(wx.EVT_BUTTON, self.dialogFeatures) self.txtPipesPath = wx.TextCtrl(self, wx.ID_ANY, "") self.btnSelectPipes = wx.Button(self, wx.ID_ANY, "Select") self.bxOutput = wx.Panel(self, wx.ID_ANY) self.cbxDraw = wx.CheckBox(self, wx.ID_ANY, "Draw") self.btnClear = wx.Button(self, wx.ID_ANY, "Clear") self.btnZoom = wx.Button(self, wx.ID_ANY, "Zoom") self.btnRun = wx.Button(self, wx.ID_ANY, "Run", style=wx.BU_EXACTFIT) self.__set_properties() self.__do_layout() # end wxGlade def __set_properties(self): # begin wxGlade: frmCheckSubmitall.__set_properties self.SetTitle("Check Submittal") self.rbxUtilitySelect.SetSelection(0) self.btnSelectFeatures.SetMinSize((80, 20)) self.btnSelectPipes.SetMinSize((80, 20)) self.cbxDraw.SetValue(1) self.btnClear.SetMinSize((50, 20)) self.btnZoom.SetMinSize((50, 20)) # end wxGlade def __do_layout(self): # begin wxGlade: frmCheckSubmitall.__do_layout sizer_1 = wx.BoxSizer(wx.VERTICAL) sizer_5 = wx.BoxSizer(wx.VERTICAL) sizer_8 = wx.BoxSizer(wx.HORIZONTAL) sizer_7 = wx.BoxSizer(wx.HORIZONTAL) sizer_6 = wx.BoxSizer(wx.HORIZONTAL) sizer_5.Add(self.rbxUtilitySelect, 0, wx.ALIGN_CENTER | wx.BOTTOM, 10) lblFeatures = wx.StaticText(self, wx.ID_ANY, "Features: ") sizer_6.Add(lblFeatures, 0, wx.ALIGN_CENTER | wx.LEFT, 16) sizer_6.Add(self.txtFeaturesPath, 1, 0, 0) sizer_6.Add(self.btnSelectFeatures, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5) sizer_5.Add(sizer_6, 0, wx.EXPAND, 0) lblPipes = wx.StaticText(self, wx.ID_ANY, "Pipes: ") sizer_7.Add(lblPipes, 0, wx.ALIGN_CENTER | wx.LEFT | wx.RIGHT, 16) sizer_7.Add(self.txtPipesPath, 1, 0, 0) sizer_7.Add(self.btnSelectPipes, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5) sizer_5.Add(sizer_7, 0, wx.ALL | wx.EXPAND, 0) sizer_5.Add(self.bxOutput, 1, wx.ALL | wx.EXPAND, 10) sizer_8.Add(self.cbxDraw, 0, wx.LEFT | wx.RIGHT, 10) sizer_8.Add(self.btnClear, 0, wx.RIGHT, 10) sizer_8.Add(self.btnZoom, 0, 0, 0) sizer_8.Add((20, 20), 1, 0, 0) sizer_8.Add(self.btnRun, 0, wx.BOTTOM | wx.RIGHT, 10) sizer_5.Add(sizer_8, 0, wx.EXPAND, 0) sizer_1.Add(sizer_5, 1, wx.EXPAND, 0) self.SetSizer(sizer_1) self.Layout() self.SetSize((400, 300)) # end wxGlade # Begin Dialog Method def dialogFeatures(self, event): # otherwise ask the user what new file to open #with wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt", # style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) as fileDialog: fileDialog = wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt", style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) if fileDialog.ShowModal() == wx.ID_CANCEL: return # the user changed their mind # Proceed loading the file chosen by the user pathname = fileDialog.GetPath() self.txtFeaturesPath.SetValue = pathname self.txtFeaturesPath.SetValue(pathname) try: with open(pathname, 'r') as file: self.txtFeaturesPath = file except IOError: wx.LogError("Cannot open file '%s'." % newfile) # End Dialog Method # end of class frmCheckSubmitall if __name__ == '__main__': app=wx.PySimpleApp() frame = frmCheckSubmital(parent=None, id=-1) frame.Show() app.MainLoop() ``` I've tried to do several things and I am just burnt out and in need of some help. Some things I've tried to do: - Add a third argument in the dialog method to return that (just not sure where to assign) - Use a lambda event to try and assign the value with the constructors? Any help or insight would be greatly appreciated. Thank you!
2018/01/26
[ "https://Stackoverflow.com/questions/48466337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9272739/" ]
As others have already pointed out, the way to go is by using the text control's `SetValue`. But here's a small runnable example (note that with `wx.FD_MULTIPLE` the dialog's `GetPaths()` must be used, which returns a list of paths): ``` import wx class MyPanel(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent) open_file_dlg_btn = wx.Button(self, label="Open FileDialog") open_file_dlg_btn.Bind(wx.EVT_BUTTON, self.on_open_file) self.file_path = wx.TextCtrl(self) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(open_file_dlg_btn, 0, wx.ALL, 5) sizer.Add(self.file_path, 1, wx.ALL|wx.EXPAND, 5) self.SetSizer(sizer) def on_open_file(self, event): wildcard = "Python source (*.py)|*.py|" \ "All files (*.*)|*.*" dlg = wx.FileDialog( self, message="Choose a file", defaultDir='', defaultFile="", wildcard=wildcard, style=wx.FD_OPEN | wx.FD_MULTIPLE | wx.FD_CHANGE_DIR ) if dlg.ShowModal() == wx.ID_OK: paths = dlg.GetPaths() print("You chose the following file(s):") for path in paths: print(path) self.file_path.SetValue(str(paths)) dlg.Destroy() class MyFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title="File Dialogs Tutorial") panel = MyPanel(self) self.Show() if __name__ == '__main__': app = wx.App(False) frame = MyFrame() app.MainLoop() ```
Try: ``` self.txtFeaturesPath.SetValue(pathname) ``` You have a few other buggy "features" in your example code, so watch out.
48,466,337
I have been working on creating a python GUI for some work. I would self-describe as a novice when it comes to my Python knowledge. I am using wxPython and wxGlade to help with the GUI development, as well. The problem is as follows: I have an empty TextCtrl object and a Button next to it. The Button is meant to open a FileDialog and populate or replace the TextCtrl with the value of the file location that is selected. I have created the functionality for the button to open the FileDialog but I can't seem to figure out how to populate the TextCtrl with that resulting value. ``` import wx class frmCheckSubmital(wx.Frame): def __init__(self, *args, **kwds): # begin wxGlade: frmCheckSubmitall.__init__ kwds["style"] = kwds.get("style", 0) | wx.DEFAULT_FRAME_STYLE wx.Frame.__init__(self, *args, **kwds) self.rbxUtilitySelect = wx.RadioBox(self, wx.ID_ANY, "Utility", choices=["Stormwater", "Sewer", "Water"], majorDimension=1, style=wx.RA_SPECIFY_ROWS) self.txtFeaturesPath = wx.TextCtrl(self, wx.ID_ANY, "") self.btnSelectFeatures = wx.Button(self, wx.ID_ANY, "Select") # selectEvent = lambda event, pathname=txt: self.dialogFeatures(event, pathname) self.btnSelectFeatures.Bind(wx.EVT_BUTTON, self.dialogFeatures) self.txtPipesPath = wx.TextCtrl(self, wx.ID_ANY, "") self.btnSelectPipes = wx.Button(self, wx.ID_ANY, "Select") self.bxOutput = wx.Panel(self, wx.ID_ANY) self.cbxDraw = wx.CheckBox(self, wx.ID_ANY, "Draw") self.btnClear = wx.Button(self, wx.ID_ANY, "Clear") self.btnZoom = wx.Button(self, wx.ID_ANY, "Zoom") self.btnRun = wx.Button(self, wx.ID_ANY, "Run", style=wx.BU_EXACTFIT) self.__set_properties() self.__do_layout() # end wxGlade def __set_properties(self): # begin wxGlade: frmCheckSubmitall.__set_properties self.SetTitle("Check Submittal") self.rbxUtilitySelect.SetSelection(0) self.btnSelectFeatures.SetMinSize((80, 20)) self.btnSelectPipes.SetMinSize((80, 20)) self.cbxDraw.SetValue(1) self.btnClear.SetMinSize((50, 20)) self.btnZoom.SetMinSize((50, 20)) # end wxGlade def __do_layout(self): # begin wxGlade: frmCheckSubmitall.__do_layout sizer_1 = wx.BoxSizer(wx.VERTICAL) sizer_5 = wx.BoxSizer(wx.VERTICAL) sizer_8 = wx.BoxSizer(wx.HORIZONTAL) sizer_7 = wx.BoxSizer(wx.HORIZONTAL) sizer_6 = wx.BoxSizer(wx.HORIZONTAL) sizer_5.Add(self.rbxUtilitySelect, 0, wx.ALIGN_CENTER | wx.BOTTOM, 10) lblFeatures = wx.StaticText(self, wx.ID_ANY, "Features: ") sizer_6.Add(lblFeatures, 0, wx.ALIGN_CENTER | wx.LEFT, 16) sizer_6.Add(self.txtFeaturesPath, 1, 0, 0) sizer_6.Add(self.btnSelectFeatures, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5) sizer_5.Add(sizer_6, 0, wx.EXPAND, 0) lblPipes = wx.StaticText(self, wx.ID_ANY, "Pipes: ") sizer_7.Add(lblPipes, 0, wx.ALIGN_CENTER | wx.LEFT | wx.RIGHT, 16) sizer_7.Add(self.txtPipesPath, 1, 0, 0) sizer_7.Add(self.btnSelectPipes, 0, wx.ALIGN_CENTER_VERTICAL | wx.LEFT | wx.RIGHT, 5) sizer_5.Add(sizer_7, 0, wx.ALL | wx.EXPAND, 0) sizer_5.Add(self.bxOutput, 1, wx.ALL | wx.EXPAND, 10) sizer_8.Add(self.cbxDraw, 0, wx.LEFT | wx.RIGHT, 10) sizer_8.Add(self.btnClear, 0, wx.RIGHT, 10) sizer_8.Add(self.btnZoom, 0, 0, 0) sizer_8.Add((20, 20), 1, 0, 0) sizer_8.Add(self.btnRun, 0, wx.BOTTOM | wx.RIGHT, 10) sizer_5.Add(sizer_8, 0, wx.EXPAND, 0) sizer_1.Add(sizer_5, 1, wx.EXPAND, 0) self.SetSizer(sizer_1) self.Layout() self.SetSize((400, 300)) # end wxGlade # Begin Dialog Method def dialogFeatures(self, event): # otherwise ask the user what new file to open #with wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt", # style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) as fileDialog: fileDialog = wx.FileDialog(self, "Select the Features File", wildcard="Text files (*.txt)|*.txt", style=wx.FD_OPEN | wx.FD_FILE_MUST_EXIST) if fileDialog.ShowModal() == wx.ID_CANCEL: return # the user changed their mind # Proceed loading the file chosen by the user pathname = fileDialog.GetPath() self.txtFeaturesPath.SetValue = pathname self.txtFeaturesPath.SetValue(pathname) try: with open(pathname, 'r') as file: self.txtFeaturesPath = file except IOError: wx.LogError("Cannot open file '%s'." % newfile) # End Dialog Method # end of class frmCheckSubmitall if __name__ == '__main__': app=wx.PySimpleApp() frame = frmCheckSubmital(parent=None, id=-1) frame.Show() app.MainLoop() ``` I've tried to do several things and I am just burnt out and in need of some help. Some things I've tried to do: - Add a third argument in the dialog method to return that (just not sure where to assign) - Use a lambda event to try and assign the value with the constructors? Any help or insight would be greatly appreciated. Thank you!
2018/01/26
[ "https://Stackoverflow.com/questions/48466337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9272739/" ]
As others have already pointed out, the way to go is by using the text control's `SetValue`. But here's a small runnable example (note that with `wx.FD_MULTIPLE` the dialog's `GetPaths()` must be used, which returns a list of paths): ``` import wx class MyPanel(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent) open_file_dlg_btn = wx.Button(self, label="Open FileDialog") open_file_dlg_btn.Bind(wx.EVT_BUTTON, self.on_open_file) self.file_path = wx.TextCtrl(self) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(open_file_dlg_btn, 0, wx.ALL, 5) sizer.Add(self.file_path, 1, wx.ALL|wx.EXPAND, 5) self.SetSizer(sizer) def on_open_file(self, event): wildcard = "Python source (*.py)|*.py|" \ "All files (*.*)|*.*" dlg = wx.FileDialog( self, message="Choose a file", defaultDir='', defaultFile="", wildcard=wildcard, style=wx.FD_OPEN | wx.FD_MULTIPLE | wx.FD_CHANGE_DIR ) if dlg.ShowModal() == wx.ID_OK: paths = dlg.GetPaths() print("You chose the following file(s):") for path in paths: print(path) self.file_path.SetValue(str(paths)) dlg.Destroy() class MyFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title="File Dialogs Tutorial") panel = MyPanel(self) self.Show() if __name__ == '__main__': app = wx.App(False) frame = MyFrame() app.MainLoop() ```
I realized that what I was assigning was not the value the fileDialog returns, but instead the location in RAM of the fileDialog object. The solution was the following: ``` value = fileDialog.Directory + "\\" + fileDialog.Filename self.txtFeaturesPath.SetValue(value) ``` Thank you!
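A small aside on that concatenation: `os.path.join` avoids hard-coding the Windows separator. A sketch assuming the same dialog object from the answer above (note wxPython's property casing, `Directory` and `Filename`):

```
import os

# Equivalent to Directory + "\\" + Filename, but portable.
value = os.path.join(fileDialog.Directory, fileDialog.Filename)
self.txtFeaturesPath.SetValue(value)
```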
44,060,906
I just installed python-vlc via pip and when I try ``` import vlc ``` the following error message shows up: ``` ... ... File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module> dll, plugin_path = find_lib() File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib dll = ctypes.CDLL('libvlc.dll') File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode) builtins.OSError: [WinError 126] The specified module could not be found ``` I am unfamiliar with the ctypes module. What is causing the problem?
2017/05/19
[ "https://Stackoverflow.com/questions/44060906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7970976/" ]
The problem has been solved. I was using 64-bit Python and 32-bit VLC. Installing a 64-bit VLC program fixed the problem.
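If you are unsure which mix you have, the interpreter's bitness is easy to check (VLC's bitness is shown in its own About dialog):

```
import platform
import struct

print(platform.architecture()[0])  # e.g. '64bit'
print(struct.calcsize("P") * 8)    # pointer size in bits: 32 or 64
```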
`python-vlc` on Windows needs to load `libvlc.dll` from VLC. If it's not found in the normal `%PATH%`, it will try to use `pywin32` to look in the registry to find the VLC install path, and fall back to a hard-coded set of directories after that. The stack trace looks like all of that failed. Do you have VLC installed?
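If VLC is installed but its folder is not on `%PATH%`, one hedged workaround on Python 3.8+ (Windows) is to register the directory before importing the binding; the path below is the usual 64-bit default and may differ on your machine:

```
import os

# Usual default install location for 64-bit VLC -- adjust if yours differs.
os.add_dll_directory(r"C:\Program Files\VideoLAN\VLC")

import vlc  # ctypes can now resolve libvlc.dll
```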
44,060,906
I just installed python-vlc via pip and when I try ``` import vlc ``` the following error message shows up: ``` ... ... File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module> dll, plugin_path = find_lib() File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib dll = ctypes.CDLL('libvlc.dll') File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode) builtins.OSError: [WinError 126] The specified module could not be found ``` I am unfamiliar with the ctypes module. What is causing the problem?
2017/05/19
[ "https://Stackoverflow.com/questions/44060906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7970976/" ]
`python-vlc` on Windows needs to load `libvlc.dll` from VLC. If it's not found in the normal `%PATH%`, it will try to use `pywin32` to look in the registry to find the VLC install path, and fall back to a hard-coded set of directories after that. The stack trace looks like all of that failed. Do you have VLC installed?
You installed the 32-bit VLC, so its path goes to Program Files (x86), while your code searches for the VLC files in Program Files. That's why you are getting this error. To solve the problem, install the 64-bit VLC.
44,060,906
I just installed python-vlc via pip and when I try ``` import vlc ``` the following error message shows up: ``` ... ... File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module> dll, plugin_path = find_lib() File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib dll = ctypes.CDLL('libvlc.dll') File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode) builtins.OSError: [WinError 126] The specified module could not be found ``` I am unfamiliar with the ctypes module. What is causing the problem?
2017/05/19
[ "https://Stackoverflow.com/questions/44060906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7970976/" ]
`python-vlc` on Windows needs to load `libvlc.dll` from VLC. If it's not found in the normal `%PATH%`, it will try to use `pywin32` to look in the registry to find the VLC install path, and fall back to a hard-coded set of directories after that. The stack trace looks like all of that failed. Do you have VLC installed?
I ran into the same problem. To fix it I actually had to install the x86 version, **NOT** the x64 version... no matter what I did, the latter would not work. I figured this out by looking at the code it was using to find the path. I used a breakpoint to see what it was checking in the `os.path.exists` flow and where it was searching: C:<Your Python Path>\Lib\site-packages\vlc.py --------------------------------------------- ``` if plugin_path is None: # try some standard locations. programfiles = os.environ["ProgramFiles"] homedir = os.environ["HOMEDRIVE"] for p in ('{programfiles}\\VideoLan{libname}', '{homedir}:\\VideoLan{libname}', '{programfiles}{libname}', '{homedir}:{libname}'): p = p.format(homedir = homedir, programfiles = programfiles, libname = '\\VLC\\' + libname) if os.path.exists(p): plugin_path = os.path.dirname(p) ``` Hope it helps someone :)
44,060,906
I just installed python-vlc via pip and when I try ``` import vlc ``` the following error message shows up: ``` ... ... File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module> dll, plugin_path = find_lib() File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib dll = ctypes.CDLL('libvlc.dll') File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode) builtins.OSError: [WinError 126] The specified module could not be found ``` I am unfamiliar with the ctypes module. What is causing the problem?
2017/05/19
[ "https://Stackoverflow.com/questions/44060906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7970976/" ]
The problem has been solved. I was using 64-bit Python and 32-bit VLC. Installing a 64-bit VLC program fixed the problem.
You installed the 32-bit VLC, so its path goes to Program Files (x86), while your code searches for the VLC files in Program Files. That's why you are getting this error. To solve the problem, install the 64-bit VLC.
44,060,906
I just installed python-vlc via pip and when I try ``` import vlc ``` the following error message shows up: ``` ... ... File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 173, in <module> dll, plugin_path = find_lib() File "c:\Program Files\Python34\Lib\site-packages\vlc.py", line 150, in find_lib dll = ctypes.CDLL('libvlc.dll') File "c:\Program Files\Python34\Lib\ctypes\__init__.py", line 351, in __init__ self._handle = _dlopen(self._name, mode) builtins.OSError: [WinError 126] The specified module could not be found ``` I am unfamiliar with the ctypes module. What is causing the problem?
2017/05/19
[ "https://Stackoverflow.com/questions/44060906", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7970976/" ]
The problem has been solved. I was using 64-bit Python and 32-bit VLC. Installing a 64-bit VLC program fixed the problem.
I ran into the same problem. To fix it I actually had to install the x86 version, **NOT** the x64 version... no matter what I did, the latter would not work. I figured this out by looking at the code it was using to find the path. I used a breakpoint to see what it was checking in the `os.path.exists` flow and where it was searching: C:<Your Python Path>\Lib\site-packages\vlc.py --------------------------------------------- ``` if plugin_path is None: # try some standard locations. programfiles = os.environ["ProgramFiles"] homedir = os.environ["HOMEDRIVE"] for p in ('{programfiles}\\VideoLan{libname}', '{homedir}:\\VideoLan{libname}', '{programfiles}{libname}', '{homedir}:{libname}'): p = p.format(homedir = homedir, programfiles = programfiles, libname = '\\VLC\\' + libname) if os.path.exists(p): plugin_path = os.path.dirname(p) ``` Hope it helps someone :)
68,036,975
**Done** I am just trying to run and replicate the following project: <https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/> . Basically, until this point I have done everything as it is in the linked project, but then I got the following issue: **My Own Dataset - I have tried with the dataframe:** * I have tried with his original dataset and 100% of his code, but I still have the same error * A.) having the 2 columns (1st column date and 2nd column target values), * B.) having the timestamp in the index and the dataframe only containing the target value. **INPUT CODE:** ``` # reshape into X=t and Y=t+1 look_back = 1 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # create and fit the LSTM network model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2) ``` **OUTPUT ERROR:** ``` --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def) 1879 try: -> 1880 c_op = pywrap_tf_session.TF_FinishOperation(op_desc) 1881 except errors.InvalidArgumentError as e: InvalidArgumentError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16]. During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-146-278c5358bee6> in <module> 1 # create and fit the LSTM network 2 model = Sequential() ----> 3 model.add(LSTM(4, input_shape=(1, look_back))) 4 model.add(Dense(1)) 5 model.compile(loss='mean_squared_error', optimizer='adam') ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 520 self._self_setattr_tracking = False # pylint: disable=protected-access 521 try: --> 522 result = method(self, *args, **kwargs) 523 finally: 524 self._self_setattr_tracking = previous_value # pylint: disable=protected-access ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/sequential.py in add(self, layer) 206 # and create the node connecting the current layer 207 # to the input layer we just created. --> 208 layer(x) 209 set_inputs = True 210 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs) 658 659 if initial_state is None and constants is None: --> 660 return super(RNN, self).__call__(inputs, **kwargs) 661 662 # If any of `initial_state` or `constants` are specified and are Keras ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 944 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list): 945 return self._functional_construction_call(inputs, args, kwargs, --> 946 input_list) 947 948 # Maintains info about the `Layer.call` stack.
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list) 1082 # Check input assumptions set after layer building, e.g. input shape. 1083 outputs = self._keras_tensor_symbolic_call( -> 1084 inputs, input_masks, args, kwargs) 1085 1086 if outputs is None: ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs) 814 return tf.nest.map_structure(keras_tensor.KerasTensor, output_signature) 815 else: --> 816 return self._infer_output_signature(inputs, args, kwargs, input_masks) 817 818 def _infer_output_signature(self, inputs, args, kwargs, input_masks): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks) 854 self._maybe_build(inputs) 855 inputs = self._maybe_cast_inputs(inputs) --> 856 outputs = call_fn(inputs, *args, **kwargs) 857 858 self._handle_activity_regularization(inputs, outputs) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state) 1250 else: 1251 (last_output, outputs, new_h, new_c, -> 1252 runtime) = lstm_with_backend_selection(**normal_lstm_kwargs) 1253 1254 states = [new_h, new_c] ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in lstm_with_backend_selection(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask) 1645 # Call the normal LSTM impl and register the CuDNN impl function. The 1646 # grappler will kick in during session execution to optimize the graph. -> 1647 last_output, outputs, new_h, new_c, runtime = defun_standard_lstm(**params) 1648 _function_register(defun_gpu_lstm, **params) 1649 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 3020 with self._lock: 3021 (graph_function, -> 3022 filtered_flat_args) = self._maybe_define_function(args, kwargs) 3023 return graph_function._call_flat( 3024 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3442 3443 self._function_cache.missed.add(call_context_key) -> 3444 graph_function = self._create_graph_function(args, kwargs) 3445 self._function_cache.primary[cache_key] = graph_function 3446 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3287 arg_names=arg_names, 3288 override_flat_arg_shapes=override_flat_arg_shapes, -> 3289 capture_by_value=self._capture_by_value), 3290 self._function_attributes, 3291 function_spec=self.function_spec, ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 997 _, original_func = tf_decorator.unwrap(python_func) 998 --> 999 func_outputs = python_func(*func_args, **func_kwargs) 1000 1001 # invariant: `func_outputs` contains only Tensors, CompositeTensors, 
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in standard_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask) 1386 input_length=(sequence_lengths 1387 if sequence_lengths is not None else timesteps), -> 1388 zero_output_for_mask=zero_output_for_mask) 1389 return (last_output, outputs, new_states[0], new_states[1], 1390 _runtime(_RUNTIME_CPU)) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask) 4341 # the value is discarded. 4342 output_time_zero, _ = step_function( -> 4343 input_time_zero, tuple(initial_states) + tuple(constants)) 4344 output_ta = tuple( 4345 tf.TensorArray( ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in step(cell_inputs, cell_states) 1364 z = backend.dot(cell_inputs, kernel) 1365 z += backend.dot(h_tm1, recurrent_kernel) -> 1366 z = backend.bias_add(z, bias) 1367 1368 z0, z1, z2, z3 = tf.split(z, 4, axis=1) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in bias_add(x, bias, data_format) 5961 if len(bias_shape) == 1: 5962 if data_format == 'channels_first': -> 5963 return tf.nn.bias_add(x, bias, data_format='NCHW') 5964 return tf.nn.bias_add(x, bias, data_format='NHWC') 5965 if ndim(x) in (3, 4, 5): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py in bias_add(value, bias, data_format, name) 3376 else: 3377 return gen_nn_ops.bias_add( -> 3378 value, bias, data_format=data_format, name=name) 3379 3380 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py in bias_add(value, bias, data_format, name) 689 data_format = _execute.make_str(data_format, "data_format") 690 _, _, _op, _outputs = _op_def_library._apply_op_helper( --> 691 "BiasAdd", value=value, bias=bias, data_format=data_format, name=name) 692 _result = _outputs[:] 693 if _execute.must_record_gradient(): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords) 748 op = g._create_op_internal(op_type_name, inputs, dtypes=None, 749 name=scope, input_types=input_types, --> 750 attrs=attr_protos, op_def=op_def) 751 752 # `outputs` is returned as a separate return value so that the output 
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device) 599 return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access 600 op_type, captured_inputs, dtypes, input_types, name, attrs, op_def, --> 601 compute_device) 602 603 def capture(self, tensor, name=None, shape=None): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device) 3563 input_types=input_types, 3564 original_op=self._default_original_op, -> 3565 op_def=op_def) 3566 self._create_op_helper(ret, compute_device=compute_device) 3567 return ret ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def) 2040 op_def = self._graph._get_op_def(node_def.op) 2041 self._c_op = _create_c_op(self._graph, node_def, inputs, -> 2042 control_input_ops, op_def) 2043 name = compat.as_str(node_def.name) 2044 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def) 1881 except errors.InvalidArgumentError as e: 1882 # Convert to ValueError for backwards compatibility. -> 1883 raise ValueError(str(e)) 1884 1885 return c_op ValueError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16]. ``` **Tried Solutions** * no actual solution in the answers - <https://www.reddit.com/r/tensorflow/comments/ipbse4/valueerror_shape_must_be_at_least_rank_3_but_is/> * no actual solution in the answers - <https://github.com/tensorflow/recommenders/issues/237> * no actual solution in the answers, different input code - [ValueError: Shape must be rank 2 but is rank 3 for 'MatMul'](https://stackoverflow.com/questions/50162787/valueerror-shape-must-be-rank-2-but-is-rank-3-for-matmul)
2021/06/18
[ "https://Stackoverflow.com/questions/68036975", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10270590/" ]
I continue to see this problem in 2022 when using LSTMs or GRUs in Sagemaker with conda\_tensorflow2\_p38 kernel. Here's my workaround: Early in your notebook, before defining your model, set ``` tf.keras.backend.set_image_data_format("channels_last") ``` I know it looks weird to set image data format when you aren't processing pics, but this somehow works around the dimension error. To demonstrate that this isn't just a library mismatch in the default kernel, here's something I sometimes add to the beginning of my notebooks to update to the latest library versions (currently TF 2.9.0). It did not solve the error above. ``` import sys !{sys.executable} -m pip install --upgrade pip tensorflow numpy scikit-learn pandas ```
**Solution** * I switched to the AWS EC2 SageMaker kernel "Python [conda env:tensorflow2\_p36]", so this is exactly the pre-made environment "tensorflow2\_p36". * As I have read in some other places, it is probably a library collision, maybe with NumPy.
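(An illustrative aside, not part of the original answer.) If you suspect such a collision, a quick sanity check is to print the versions that the kernel has actually loaded:

```
import tensorflow as tf
import numpy as np

# A TF/NumPy version mismatch is a common cause of rank/shape
# errors like the one above; compare these against a known-good env.
print("tensorflow:", tf.__version__)
print("numpy:", np.__version__)
```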
68,036,975
**Done** I am just trying to run and replicate the following project: <https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/> . Basically until this point I have done everything as it is in the linked project but then I got the following issue: **My Own Dataset - I have tried with the dataframe:** * I have tried with his original dataset and 100% of his code but I still have the same error * A.) having the 2 columns (1st column date and 2nd column target values), * B.) the time code in the index and the dataframe only containing the target value. **INPUT CODE:** ``` # reshape into X=t and Y=t+1 look_back = 1 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # create and fit the LSTM network model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2) ``` **OUTPUT ERROR:** ``` --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def) 1879 try: -> 1880 c_op = pywrap_tf_session.TF_FinishOperation(op_desc) 1881 except errors.InvalidArgumentError as e: InvalidArgumentError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16]. During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-146-278c5358bee6> in <module> 1 # create and fit the LSTM network 2 model = Sequential() ----> 3 model.add(LSTM(4, input_shape=(1, look_back))) 4 model.add(Dense(1)) 5 model.compile(loss='mean_squared_error', optimizer='adam') ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 520 self._self_setattr_tracking = False # pylint: disable=protected-access 521 try: --> 522 result = method(self, *args, **kwargs) 523 finally: 524 self._self_setattr_tracking = previous_value # pylint: disable=protected-access ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/sequential.py in add(self, layer) 206 # and create the node connecting the current layer 207 # to the input layer we just created. --> 208 layer(x) 209 set_inputs = True 210 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs) 658 659 if initial_state is None and constants is None: --> 660 return super(RNN, self).__call__(inputs, **kwargs) 661 662 # If any of `initial_state` or `constants` are specified and are Keras ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 944 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list): 945 return self._functional_construction_call(inputs, args, kwargs, --> 946 input_list) 947 948 # Maintains info about the `Layer.call` stack. 
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list) 1082 # Check input assumptions set after layer building, e.g. input shape. 1083 outputs = self._keras_tensor_symbolic_call( -> 1084 inputs, input_masks, args, kwargs) 1085 1086 if outputs is None: ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs) 814 return tf.nest.map_structure(keras_tensor.KerasTensor, output_signature) 815 else: --> 816 return self._infer_output_signature(inputs, args, kwargs, input_masks) 817 818 def _infer_output_signature(self, inputs, args, kwargs, input_masks): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks) 854 self._maybe_build(inputs) 855 inputs = self._maybe_cast_inputs(inputs) --> 856 outputs = call_fn(inputs, *args, **kwargs) 857 858 self._handle_activity_regularization(inputs, outputs) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state) 1250 else: 1251 (last_output, outputs, new_h, new_c, -> 1252 runtime) = lstm_with_backend_selection(**normal_lstm_kwargs) 1253 1254 states = [new_h, new_c] ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in lstm_with_backend_selection(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask) 1645 # Call the normal LSTM impl and register the CuDNN impl function. The 1646 # grappler will kick in during session execution to optimize the graph. -> 1647 last_output, outputs, new_h, new_c, runtime = defun_standard_lstm(**params) 1648 _function_register(defun_gpu_lstm, **params) 1649 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 3020 with self._lock: 3021 (graph_function, -> 3022 filtered_flat_args) = self._maybe_define_function(args, kwargs) 3023 return graph_function._call_flat( 3024 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3442 3443 self._function_cache.missed.add(call_context_key) -> 3444 graph_function = self._create_graph_function(args, kwargs) 3445 self._function_cache.primary[cache_key] = graph_function 3446 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3287 arg_names=arg_names, 3288 override_flat_arg_shapes=override_flat_arg_shapes, -> 3289 capture_by_value=self._capture_by_value), 3290 self._function_attributes, 3291 function_spec=self.function_spec, ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 997 _, original_func = tf_decorator.unwrap(python_func) 998 --> 999 func_outputs = python_func(*func_args, **func_kwargs) 1000 1001 # invariant: `func_outputs` contains only Tensors, CompositeTensors, 
~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in standard_lstm(inputs, init_h, init_c, kernel, recurrent_kernel, bias, mask, time_major, go_backwards, sequence_lengths, zero_output_for_mask) 1386 input_length=(sequence_lengths 1387 if sequence_lengths is not None else timesteps), -> 1388 zero_output_for_mask=zero_output_for_mask) 1389 return (last_output, outputs, new_states[0], new_states[1], 1390 _runtime(_RUNTIME_CPU)) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask) 4341 # the value is discarded. 4342 output_time_zero, _ = step_function( -> 4343 input_time_zero, tuple(initial_states) + tuple(constants)) 4344 output_ta = tuple( 4345 tf.TensorArray( ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/layers/recurrent_v2.py in step(cell_inputs, cell_states) 1364 z = backend.dot(cell_inputs, kernel) 1365 z += backend.dot(h_tm1, recurrent_kernel) -> 1366 z = backend.bias_add(z, bias) 1367 1368 z0, z1, z2, z3 = tf.split(z, 4, axis=1) ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/keras/backend.py in bias_add(x, bias, data_format) 5961 if len(bias_shape) == 1: 5962 if data_format == 'channels_first': -> 5963 return tf.nn.bias_add(x, bias, data_format='NCHW') 5964 return tf.nn.bias_add(x, bias, data_format='NHWC') 5965 if ndim(x) in (3, 4, 5): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 204 """Call target, and fall back on dispatchers if there is a TypeError.""" 205 try: --> 206 return target(*args, **kwargs) 207 except (TypeError, ValueError): 208 # Note: convert_to_eager_tensor currently raises a ValueError, not a ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py in bias_add(value, bias, data_format, name) 3376 else: 3377 return gen_nn_ops.bias_add( -> 3378 value, bias, data_format=data_format, name=name) 3379 3380 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py in bias_add(value, bias, data_format, name) 689 data_format = _execute.make_str(data_format, "data_format") 690 _, _, _op, _outputs = _op_def_library._apply_op_helper( --> 691 "BiasAdd", value=value, bias=bias, data_format=data_format, name=name) 692 _result = _outputs[:] 693 if _execute.must_record_gradient(): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords) 748 op = g._create_op_internal(op_type_name, inputs, dtypes=None, 749 name=scope, input_types=input_types, --> 750 attrs=attr_protos, op_def=op_def) 751 752 # `outputs` is returned as a separate return value so that the output 
~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device) 599 return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access 600 op_type, captured_inputs, dtypes, input_types, name, attrs, op_def, --> 601 compute_device) 602 603 def capture(self, tensor, name=None, shape=None): ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_op_internal(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_device) 3563 input_types=input_types, 3564 original_op=self._default_original_op, -> 3565 op_def=op_def) 3566 self._create_op_helper(ret, compute_device=compute_device) 3567 return ret ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def) 2040 op_def = self._graph._get_op_def(node_def.op) 2041 self._c_op = _create_c_op(self._graph, node_def, inputs, -> 2042 control_input_ops, op_def) 2043 name = compat.as_str(node_def.name) 2044 ~/anaconda3/envs/tfall/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def) 1881 except errors.InvalidArgumentError as e: 1882 # Convert to ValueError for backwards compatibility. -> 1883 raise ValueError(str(e)) 1884 1885 return c_op ValueError: Shape must be at least rank 3 but is rank 2 for '{{node BiasAdd}} = BiasAdd[T=DT_FLOAT, data_format="NCHW"](add, bias)' with input shapes: [?,16], [16]. ``` **Tried Solutions** * no actual solution in the answers - <https://www.reddit.com/r/tensorflow/comments/ipbse4/valueerror_shape_must_be_at_least_rank_3_but_is/> * no actual solution in the answers - <https://github.com/tensorflow/recommenders/issues/237> * no actual solution in the answers, different input code - [ValueError: Shape must be rank 2 but is rank 3 for 'MatMul'](https://stackoverflow.com/questions/50162787/valueerror-shape-must-be-rank-2-but-is-rank-3-for-matmul)
2021/06/18
[ "https://Stackoverflow.com/questions/68036975", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10270590/" ]
I continue to see this problem in 2022 when using LSTMs or GRUs in Sagemaker with conda\_tensorflow2\_p38 kernel. Here's my workaround: Early in your notebook, before defining your model, set ``` tf.keras.backend.set_image_data_format("channels_last") ``` I know it looks weird to set image data format when you aren't processing pics, but this somehow works around the dimension error. To demonstrate that this isn't just a library mismatch in the default kernel, here's something I sometimes add to the beginning of my notebooks to update to the latest library versions (currently TF 2.9.0). It did not solve the error above. ``` import sys !{sys.executable} -m pip install --upgrade pip tensorflow numpy scikit-learn pandas ```
I encountered the same issue while using AWS SageMaker. Changing LSTM to tf.compat.v1.keras.layers.CuDNNLSTM worked for me. In your case, change: ``` model.add(LSTM(4, input_shape=(1, look_back))) ``` to ``` model.add(tf.compat.v1.keras.layers.CuDNNLSTM(4, input_shape=(1, look_back))) ```
30,284,611
I have a python web app that carries out calculations on data you send to it via POST / GET parameters. The app works perfectly on my machine, but when deployed to openshift, it fails to access the parameters with an errno 32: Broken pipe. I then used this [quickstart](https://github.com/openshift-quickstart/flask-base) repo to just focus on server code and not app code. I got as far as differentiating between a POST and a GET request and stopped there. Here's the relevant python code: ``` @app.route('/', methods=['GET','POST']) def index(): result = "" if request.method == "GET": name = request.form['name'] if "name" in request.form else "" result = "We received a GET request and the value for <name> is :%s" % name elif request.method == "POST": result = "We received a POST request" else : result = "We don't know what type of request we have received" return result ``` So I just want to know how I can access the parameters.
2015/05/17
[ "https://Stackoverflow.com/questions/30284611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1677589/" ]
Don't use Flask's development server in production. Use a proper WSGI server that can handle concurrent requests, like [Gunicorn](http://gunicorn.org/ "Gunicorn"). For now try turning on the server's threaded mode and see if it works. ``` app.run(host="x.x.x.x", port=1234, threaded=True) ```
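For reference, a typical Gunicorn invocation for a Flask app looks like `gunicorn --workers 4 --bind 0.0.0.0:8000 app:app`, where the `app:app` module and object names are an assumption for illustration rather than something taken from the question.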
You can get form data from the POST request via: ``` name = request.form.get("name") ``` Refactor: ``` @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': name = request.form.get("name") result = "We received a POST request and the value for <name> is - {0}".format(name) else: result = "This is a GET request" return result ``` --- Refer to the [official Flask documentation](http://flask.pocoo.org/docs/0.10/api/#incoming-request-data) to learn more about the Request object.
4,542,730
I have an app with a kind of REST API that I'm using to send emails. However, it currently sends only text email, so I need to know how to modify it to make it send HTML. Below is the code: ``` from __future__ import with_statement #!/usr/bin/env python # import cgi import os import logging import contextlib from xml.dom import minidom from xml.dom.minidom import Document import exceptions import warnings import imghdr from google.appengine.api import images from google.appengine.api import users from google.appengine.ext import db from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext.webapp import template from google.appengine.api import mail import wsgiref.handlers # START Constants CONTENT_TYPE_HEADER = "Content-Type" CONTENT_TYPE_TEXT = "text/plain" XML_CONTENT_TYPE = "application/xml" XML_ENCODING = "utf-8" """ Allows you to specify IP addresses and associated "api_key"s to prevent others from using your app. Storage and Manipulation methods will check for this "api_key" in the POST/GET params. Retrieval methods don't use it (however you could enable them to use it, but maybe rewrite so you have a "read" key and a "write" key to prevent others from manipulating your data). Set "AUTH = False" to disable (allowing anyone use your app and CRUD your data). To generate a hash/api_key visit https://www.grc.com/passwords.htm To find your ip visit http://www.whatsmyip.org/ """ AUTH = { '000.000.000.000':'JLQ7P5SnTPq7AJvLnUysJmXSeXTrhgaJ', } # END Constants # START Exception Handling class Error(StandardError): pass class Forbidden(Error): pass logging.getLogger().setLevel(logging.DEBUG) @contextlib.contextmanager def mailExcpHandler(ctx): try: yield {} except (ValueError), exc: xml_error_response(ctx, 400 ,'app.invalid_parameters', 'The indicated parameters are not valid: ' + exc.message) except (Forbidden), exc: xml_error_response(ctx, 403 ,'app.forbidden', 'You don\'t have permission to perform this action: ' + exc.message) except (Exception), exc: xml_error_response(ctx, 500 ,'system.other', 'An unexpected error in the web service has happened: ' + exc.message) def xml_error_response(ctx, status, error_id, error_msg): ctx.error(status) doc = Document() errorcard = doc.createElement("error") errorcard.setAttribute("id", error_id) doc.appendChild(errorcard) ptext = doc.createTextNode(error_msg) errorcard.appendChild(ptext) ctx.response.headers[CONTENT_TYPE_HEADER] = XML_CONTENT_TYPE ctx.response.out.write(doc.toxml(XML_ENCODING)) # END Exception Handling # START Helper Methods def isAuth(ip = None, key = None): if AUTH == False: return True elif AUTH.has_key(ip) and key == AUTH[ip]: return True else: return False # END Helper Methods # START Request Handlers class Send(webapp.RequestHandler): def post(self): """ Sends an email based on POST params. It will queue if resources are unavailable at the time. 
Returns "Success" POST Args: to: the receipent address from: the sender address (must be a registered GAE email) subject: email subject body: email body content """ with mailExcpHandler(self): # check authorised if isAuth(self.request.remote_addr,self.request.POST.get('api_key')) == False: raise Forbidden("Invalid Credentials") # read data from request mail_to = str(self.request.POST.get('to')) mail_from = str(self.request.POST.get('from')) mail_subject = str(self.request.POST.get('subject')) mail_body = str(self.request.POST.get('body')) mail.send_mail(mail_from, mail_to, mail_subject, mail_body) self.response.headers[CONTENT_TYPE_HEADER] = CONTENT_TYPE_TEXT self.response.out.write("Success") # END Request Handlers # START Application application = webapp.WSGIApplication([ ('/send', Send) ],debug=True) def main(): run_wsgi_app(application) if __name__ == '__main__': main() # END Application ```
2010/12/27
[ "https://Stackoverflow.com/questions/4542730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/331071/" ]
Have a look at the [Email message fields](http://code.google.com/intl/it/appengine/docs/python/mail/emailmessagefields.html) of the `send_mail` function. Here is the parameter you need: > > **html** > > An HTML version of the body content, for recipients that prefer HTML email. > > > You should add the `html` input parameter like this: ``` #Your html body mail_html_body = '<h1>Hello!</h1>' # read data from request mail_to = str(self.request.POST.get('to')) mail_from = str(self.request.POST.get('from')) mail_subject = str(self.request.POST.get('subject')) mail_body = str(self.request.POST.get('body')) mail.send_mail(mail_from, mail_to, mail_subject, mail_body, html = mail_html_body ) #your html body ```
You can use the `html` field of the EmailMessage class: ``` message = mail.EmailMessage(sender=emailFrom,subject=emailSubject) message.to = emailTo message.body = emailBody message.html = emailHtml message.send() ```
73,906,061
Consider the following python pandas DataFrame: | ID | country | money | code | money\_add | other | time | | --- | --- | --- | --- | --- | --- | --- | | 832932 | Other | NaN | 00000 | NaN | [N2,N2,N4] | 0 days 01:37:00 | | 217#8# | NaN | NaN | NaN | NaN | [N1,N2,N3] | 2 days 01:01:00 | | 1329T2 | France | 12131 | 00020 | 3452 | [N1,N1] | 1 days 03:55:00 | | 124932 | France | NaN | 00016 | NaN | [N2] | 0 days 01:28:00 | | 194022 | France | NaN | 00000 | NaN | [N4,N3] | 3 days 02:35:00 | If the `code` column is not `NaN` and the `money` column is `NaN`, we update the `money` and `money_add` values from the following table, using the `code` and `cod_t` columns as a key. | cod\_t | money | money\_add | | --- | --- | --- | | 00000 | 4532 | 72323 | | 00016 | 1213 | 23822 | | 00030 | 1313 | 8393 | | 00020 | 1813 | 27328 | Example of the resulting table: | ID | country | money | code | money\_add | other | time | | --- | --- | --- | --- | --- | --- | --- | | 832932 | Other | 4532 | 00000 | 72323 | [N2,N2,N4] | 0 days 01:37:00 | | 217#8# | NaN | NaN | NaN | NaN | [N1,N2,N3] | 2 days 01:01:00 | | 1329T2 | France | 12131 | 00020 | 3452 | [N1,N1] | 1 days 03:55:00 | | 124932 | France | 1213 | 00016 | 23822 | [N2] | 0 days 01:28:00 | | 194022 | France | 4532 | 00000 | 72323 | [N4,N3] | 3 days 02:35:00 | User @jezrael gave me the following solution to the problem: ```py df1 = df1.drop_duplicates('cod_t').set_index('cod_t') df = df.set_index(df['code']) df.update(df1, overwrite=False) df = df.reset_index(drop=True).reindex(df.columns, axis=1) ``` But this code gives me an error that I don't know how to solve: ``` TypeError: The DType <class 'numpy.dtype[timedelta64]'> could not be promoted by <class 'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtype[timedelta64]'>, <class 'numpy.dtype[float64]'>) ``` ``` // First DataFrame dtypes ID object country object code object money float64 money_add float64 other object time timedelta64[ns] dtype: object // Second DataFrame dtypes cod_t object money int64 money_add int64 dtype: object ``` I would be grateful if you could help me to solve the error, or suggest an alternative method to using `update`.
2022/09/30
[ "https://Stackoverflow.com/questions/73906061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18396935/" ]
Because [`DataFrame.update`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.update.html) is not working well here, an alternative is to first use a left join for the new columns from the second DataFrame with [`DataFrame.merge`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html): ``` df2 = df.merge(df1.drop_duplicates('cod_t').rename(columns={'cod_t':'code'}), on='code', how='left', suffixes=('','_')) print (df2) ID country money code money_add other time \ 0 832932 Other NaN 0.0 NaN [N2, N2, N4] 0 days 01:37:00 1 217#8# NaN NaN NaN NaN [N1, N2, N3] 2 days 01:01:00 2 1329T2 France 12131.0 20.0 3452.0 [N1, N1] 1 days 03:55:00 3 124932 France NaN 16.0 NaN [N2] 0 days 01:28:00 4 194022 France NaN 0.0 NaN [N4, N3] 3 days 02:35:00 money_ money_add_ 0 4532.0 72323.0 1 NaN NaN 2 1813.0 27328.0 3 1213.0 23822.0 4 4532.0 72323.0 ``` Then get the column names with/without `_`: ``` cols_with_ = df2.columns[df2.columns.str.endswith('_')] cols_without_ = cols_with_.str.rstrip('_') print (cols_with_) Index(['money_', 'money_add_'], dtype='object') print (cols_without_) Index(['money', 'money_add'], dtype='object') ``` Pass to [`DataFrame.combine_first`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.combine_first.html) and finally remove the helper columns: ``` df2[cols_without_] = (df2[cols_without_].combine_first(df2[cols_with_] .rename(columns=lambda x: x.rstrip('_')))) df2 = df2.drop(cols_with_, axis=1) print (df2) ID country money code money_add other time 0 832932 Other 4532.0 0.0 72323.0 [N2, N2, N4] 0 days 01:37:00 1 217#8# NaN NaN NaN NaN [N1, N2, N3] 2 days 01:01:00 2 1329T2 France 12131.0 20.0 3452.0 [N1, N1] 1 days 03:55:00 3 124932 France 1213.0 16.0 23822.0 [N2] 0 days 01:28:00 4 194022 France 4532.0 0.0 72323.0 [N4, N3] 3 days 02:35:00 ```
This is another method different from Jezrael's, but you can try it out. You can first create a condition variable for your dataframe. ``` condition = (df.code.isin(df1.cod_t) & ~df.code.isnull() & df.money.isna()) columns = ['money', 'money_add'] ``` Next, use `df.loc` to do the update. ``` df.loc[condition, columns] = df1.loc[condition, columns] ID country money code money_add other time 0 832932 Other 4532.0 0.0 72323.0 [N2,N2,N4] 0 days 01:37:00 1 217#8# NaN NaN NaN NaN [N1,N2,N3] 2 days 01:01:00 2 1329T2 France 12131.0 20.0 3452.0 [N1,N1] 1 days 03:55:00 3 124932 France 1813.0 16.0 27328.0 [N2] 0 days 01:28:00 4 194022 France 8932.0 0.0 3204.0 [N4,N3] 3 days 02:35:00 ``` Update ------ If the two dataframes have unequal lengths: ``` df1_cond = df1.cod_t.isin(df.loc[condition].code) result = [i[1:] for row in df.loc[condition].code for i in df1.loc[df1_cond].values if row in i] df.loc[condition, columns] = result ```
43,006,368
I am trying to connect to AWS Athena using python. I am trying to use pyathenajdbc to achieve this task. The issue I am having is obtaining a connection. When I run the code below, I receive an error message stating it cannot find the AthenaDriver. (java.lang.RuntimeException: Class com.amazonaws.athena.jdbc.AthenaDriver not found). I did download this file from AWS and I have confirmed it is sitting in that directory. ``` from mdpbi.rsi.config import * from mdpbi.tools.functions import mdpLog from pkg_resources import resource_string import argparse import os import pyathenajdbc import sys SCRIPT_NAME = "Athena_Export" ATHENA_JDBC_CLASSPATH = "/opt/amazon/athenajdbc/AthenaJDBC41-1.0.0.jar" EXPORT_OUTFILE = "RSI_Export.txt" EXPORT_OUTFILE_PATH = os.path.join(WORKINGDIR, EXPORT_OUTFILE) def get_arg_parser(): """This function returns the argument parser object to be used with this script""" parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) return parser def main(): args = get_arg_parser().parse_args(sys.argv[1:]) logger = mdpLog(SCRIPT_NAME, LOGDIR) SQL = resource_string("mdpbi.rsi.athena.resources", "athena.sql") conn = pyathenajdbc.connect( s3_staging_dir="s3://athena", access_key=AWS_ACCESS_KEY_ID, secret_key=AWS_SECRET_ACCESS_KEY, region_name="us-east-1", log_path=LOGDIR, driver_path=ATHENA_JDBC_CLASSPATH ) try: with conn.cursor() as cursor: cursor.execute(SQL) logger.info(cursor.description) logger.info(cursor.fetchall()) finally: conn.close() return 0 if __name__ == '__main__': rtn = main() sys.exit(rtn) ``` > > Traceback (most recent call last): File > "/usr/lib64/python2.7/runpy.py", line 174, in \_run\_module\_as\_main > "\_\_main\_\_", fname, loader, pkg\_name) File "/usr/lib64/python2.7/runpy.py", line 72, in \_run\_code > exec code in run\_globals File "/home/ec2-user/jason\_testing/mdpbi/rsi/athena/\_\_main\_\_.py", line 53, > in > rtn = main() File "/home/ec2-user/jason\_testing/mdpbi/rsi/athena/\_\_main\_\_.py", line 39, > in main > driver\_path=athena\_jdbc\_driver\_path File "/opt/mdpbi/Python\_Envs/2.7.10/local/lib/python2.7/dist-packages/pyathenajdbc/\_\_init\_\_.py", > line 65, in connect > driver\_path, \*\*kwargs) File "/opt/mdpbi/Python\_Envs/2.7.10/local/lib/python2.7/dist-packages/pyathenajdbc/connection.py", > line 68, in \_\_init\_\_ > jpype.JClass(ATHENA\_DRIVER\_CLASS\_NAME) File "/opt/mdpbi/Python\_Envs/2.7.10/lib64/python2.7/dist-packages/jpype/\_jclass.py", > line 55, in JClass > raise \_RUNTIMEEXCEPTION.PYEXC("Class %s not found" % name) > > >
2017/03/24
[ "https://Stackoverflow.com/questions/43006368", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3389780/" ]
The JDBC driver requires Java 8, but I was running Java 7 at the time. I was able to install another version of Java on the EC2 instance. <https://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#> I also had to set the Java version in my code. With these changes, the code now runs as expected. ``` from mdpbi.rsi.config import * from mdpbi.tools.functions import mdpLog from pkg_resources import resource_string import argparse import os import pyathenajdbc import sys SCRIPT_NAME = "Athena_Export" def get_arg_parser(): """This function returns the argument parser object to be used with this script""" parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) return parser def main(): args = get_arg_parser().parse_args(sys.argv[1:]) logger = mdpLog(SCRIPT_NAME, LOGDIR) SQL = resource_string("mdpbi.rsi.athena.resources", "athena.sql") os.environ["JAVA_HOME"] = "/opt/jdk1.8.0_121" os.environ["JRE_HOME"] = "/opt/jdk1.8.0_121/jre" os.environ["PATH"] = "/opt/jdk1.8.0_121/bin:/opt/jdk1.8.0_121/jre/bin" conn = pyathenajdbc.connect( s3_staging_dir="s3://mdpbi.data.rsi.out/", access_key=AWS_ACCESS_KEY_ID, secret_key=AWS_SECRET_ACCESS_KEY, schema_name="rsi", region_name="us-east-1" ) try: with conn.cursor() as cursor: cursor.execute(SQL) logger.info(cursor.description) logger.info(cursor.fetchall()) finally: conn.close() return 0 if __name__ == '__main__': rtn = main() sys.exit(rtn) ```
Try this: ``` pyathenajdbc.ATHENA_JAR = ATHENA_JDBC_CLASSPATH ``` You won't need to specify the driver\_path argument in the connection method.
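A minimal sketch of how this could look in the question's script; the jar path is the one from the question, and the exact behaviour of the module-level `ATHENA_JAR` attribute is an assumption based on this answer:

```
import pyathenajdbc

# Point the module-level jar constant at the local driver before connecting,
# instead of passing driver_path to connect().
pyathenajdbc.ATHENA_JAR = "/opt/amazon/athenajdbc/AthenaJDBC41-1.0.0.jar"

conn = pyathenajdbc.connect(
    s3_staging_dir="s3://athena",
    region_name="us-east-1",
)
```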
59,986,413
I'm trying to use the new python dataclasses to create some mix-in classes (already as I write this I think it sounds like a rash idea), and I'm having some issues. Behold the example below: ```py from dataclasses import dataclass @dataclass class NamedObj: name: str def __post_init__(self): print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj: number: int = 0 def __post_init__(self): print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") ``` If I then try: ``` nandn = NamedAndNumbered('n_and_n') print(nandn.name) print(nandn.number) ``` I get ```py NumberedObj __post_init__ NamedAndNumbered __post_init__ n_and_n 1 ``` Suggesting it has run `__post_init__` for `NamedObj`, but not for `NumberedObj`. What I would like is to have NamedAndNumbered run `__post_init__` for both of its mix-in classes, Named and Numbered. One might think that it could be done if `NamedAndNumbered` had a `__post_init__` like this: ``` def __post_init__(self): super(NamedObj, self).__post_init__() super(NumberedObj, self).__post_init__() print("NamedAndNumbered __post_init__") ``` But this just gives me an error `AttributeError: 'super' object has no attribute '__post_init__'` when I try to call `NamedObj.__post_init__()`. At this point I'm not entirely sure if this is a bug/feature with dataclasses or something to do with my probably-flawed understanding of Python's approach to inheritance. Could anyone lend a hand?
2020/01/30
[ "https://Stackoverflow.com/questions/59986413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9112585/" ]
This: ``` def __post_init__(self): super(NamedObj, self).__post_init__() super(NumberedObj, self).__post_init__() print("NamedAndNumbered __post_init__") ``` doesn't do what you think it does. `super(cls, obj)` will return a proxy to the class **after** `cls` in `type(obj).__mro__` - so, in your case, to `object`. And the whole point of cooperative `super()` calls is to avoid having to explicitely call each of the parents. The way cooperative `super()` calls are intended to work is, well, by being "cooperative" - IOW, everyone in the mro is supposed to relay the call to the next class (actually, the `super` name is a rather sad choice, as it's not about calling "the super class", but about "calling the next class in the mro"). IOW, you want each of your "composable" dataclasses (which are not mixins - mixins only have behaviour) to relay the call, so you can compose them in any order. A first naive implementation would look like: ``` @dataclass class NamedObj: name: str def __post_init__(self): super().__post_init__() print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj: number: int = 0 def __post_init__(self): super().__post_init__() print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") ``` BUT this doesn't work, since for the last class in the mro (here `NamedObj`), the next class in the mro is the builtin `object` class, which doesn't have a `__post_init__` method. The solution is simple: just add a base class that defines this method as a noop, and make all your composable dataclasses inherit from it: ``` class Base(object): def __post_init__(self): # just intercept the __post_init__ calls so they # aren't relayed to `object` pass @dataclass class NamedObj(Base): name: str def __post_init__(self): super().__post_init__() print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj(Base): number: int = 0 def __post_init__(self): super().__post_init__() print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") ```
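As a quick check of the fixed version, instantiating the combined class should now run all three hooks in MRO order; this is a sketch of the expected behaviour, not output quoted from the answer:

```
nandn = NamedAndNumbered('n_and_n')
# Expected prints, innermost parent first (cooperative MRO order):
#   NamedObj __post_init__
#   NumberedObj __post_init__
#   NamedAndNumbered __post_init__
print(nandn.name)    # Name: n_and_n
print(nandn.number)  # 1
```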
The problem (most probably) isn't related to `dataclass`es. The problem is in Python's [method resolution](http://python-history.blogspot.com/2010/06/method-resolution-order.html). Calling a method on `super()` invokes the first method found among the parent classes in the [MRO](https://www.python.org/download/releases/2.3/mro/) chain. So to make it work, you need to call the methods of the parent classes manually: ``` @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): NamedObj.__post_init__(self) NumberedObj.__post_init__(self) print("NamedAndNumbered __post_init__") ``` Another approach (if you really like `super()`) could be to continue the MRO chain by calling `super()` in all parent classes (but it needs to have a `__post_init__` in the chain): ``` @dataclass class MixinObj: def __post_init__(self): pass @dataclass class NamedObj(MixinObj): name: str def __post_init__(self): super().__post_init__() print("NamedObj __post_init__") self.name = "Name: " + self.name @dataclass class NumberedObj(MixinObj): number: int = 0 def __post_init__(self): super().__post_init__() print("NumberedObj __post_init__") self.number += 1 @dataclass class NamedAndNumbered(NumberedObj, NamedObj): def __post_init__(self): super().__post_init__() print("NamedAndNumbered __post_init__") ``` In both approaches: ``` >>> nandn = NamedAndNumbered('n_and_n') NamedObj __post_init__ NumberedObj __post_init__ NamedAndNumbered __post_init__ >>> print(nandn.name) Name: n_and_n >>> print(nandn.number) 1 ```
38,913,502
I am trying to install a python package on my ubuntu. I am trying to install it through a setup script which I had written. The setup.py script looks like this: ``` try: from setuptools import setup except ImportError: from distutils.core import setup setup( name = 'pyduino', description = 'PyDuino project aims to make python interactive with hardware particularly arduino.', url = '###', keywords = 'python arduino', author = '###', author_email = '###', version = '0.0.0', license = 'GNU', packages = ['pyduino'], install_requires = ['pyserial'], classifiers = [ # How mature is this project? Common values are # 3 - Alpha # 4 - Beta # 5 - Production/Stable 'Development Status :: 3 - Alpha', 'Intended Audience :: Developers', 'Topic :: Software Development :: Build Tools', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', ], scripts=['pyduino/pyduino.py'], ) ``` The package installs into the /usr/local/bin directory. But when I import the modules outside /usr/local/bin, an import error occurs. I tried changing the path to /usr/local/bin and it works perfectly and the import error doesn't occur. How can I install the package so that I can import the modules from any directory? Thanks in advance...
2016/08/12
[ "https://Stackoverflow.com/questions/38913502", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5507861/" ]
Try installing your packages with pip using this: ``` pip install --install-option="--prefix=$PREFIX_PATH" package_name ``` as described here: [Install a Python package into a different directory using pip?](https://stackoverflow.com/questions/2915471/install-a-python-package-into-a-different-directory-using-pip). I'd also suggest reading up on what these are: 1. pip 2. virtualenv Good luck :) EDIT: I found the package is installed with pip like: ``` pip install --install-option="--prefix=/usr/local/bin" pyduino_mk ```
Currently, you're using a `scripts` tag to install your python code. This will put your code in `/usr/local/bin`, which is not in `PYTHONPATH`. According to [the documentation](https://docs.python.org/2/distutils/setupscript.html), you use `scripts` when you want to install executable scripts (stuff you want to call from the command line). Otherwise, you need to use `packages`. My approach would be like this: * install the `pyduino/pyduino.py` in the library with something like `packages=['pyduino']` * create a wrapper (shell or python) capable of calling your installed script and install that via `scripts=[...]` Using the `packages` tag for your module will install it in `/usr/local/lib/python...`, which is in `PYTHONPATH`. This will allow you to import your script with something like `from pyduino.pyduino import *`. For the wrapper script part: A best practice is to isolate the code to be executed if the script is triggered from the command line in something like: ``` def main(): # insert your code here pass if __name__ == '__main__': main() ``` * Assuming there is a `def main()` as above * create a directory `scripts` in your tree (at the same level with `setup.py`) * create a file `scripts/pyduino` * in `scripts/pyduino`: ``` #!/usr/bin/env python from pyduino.pyduino import main if __name__ == '__main__': main() ``` * add `scripts = ['scripts/pyduino']` to your setup.py code
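Putting the two tags together, the relevant part of setup.py would look roughly like this; it is a sketch based on the layout described above, not the asker's final file:

```
from setuptools import setup

setup(
    name='pyduino',
    version='0.0.0',
    packages=['pyduino'],          # installs the module under site-packages (on PYTHONPATH)
    scripts=['scripts/pyduino'],   # installs the thin wrapper into /usr/local/bin
    install_requires=['pyserial'],
)
```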
49,355,434
How do I navigate to another webpage using the same driver with Selenium in python? I do not want to open a new page. I want to keep on using the same driver. I thought that the following would work: ``` driver.navigate().to("https://support.tomtom.com/app/contact/") ``` But it doesn't! Navigate seems not to be a 'WebDriver' method
2018/03/19
[ "https://Stackoverflow.com/questions/49355434", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3623123/" ]
To navigate to a webpage, you just write: ``` driver.get(__url__) ``` You can do this in your program multiple times.
The line of code which you have tried: ``` driver.navigate().to("https://support.tomtom.com/app/contact/") ``` is a typical *Java*-based line of code. However, as per the current **Python API Docs** of [The WebDriver implementation](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webdriver.html#module-selenium.webdriver.remote.webdriver), the **navigate()** method is yet to be supported/implemented. Instead, you can use the **get(url)** method, which is defined as: ``` def get(self, url): """ Loads a web page in the current browser session. """ self.execute(Command.GET, {'url': url}) ```
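In practice, navigating twice with the same driver is just two `get()` calls; a minimal sketch (the browser choice is an assumption for illustration):

```
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://support.tomtom.com/app/contact/")
# ... interact with the page ...
driver.get("https://www.example.com/")  # same driver, new page
driver.quit()
```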
23,726,365
I'm using tweepy and trying to run the basic script as shown by this [video](https://www.youtube.com/watch?v=pUUxmvvl2FE). I was previously receiving 401 errors (unsynchronized time zones) but am using the provided keys. I fixed that problem and now I'm getting this result: ``` Traceback (most recent call last): File "algotest.py", line 25, in <module> twitterStream.filter(track=["North"]) File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 313, in filter File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 235, in _start File "/usr/local/lib/python2.7/dist-packages/tweepy-2.3-py2.7.egg/tweepy/streaming.py", line 151, in _run File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/sessions.py", line 335, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/sessions.py", line 438, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests-2.2.1-py2.7.egg/requests/adapters.py", line 327, in send raise ConnectionError(e) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='stream.twitter.com', port=443): Max retries exceeded with url: /1.1/statuses/filter.json?track=North&delimited=length (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known) ``` Any way around this? Is there some sort of reset option I can trigger? Thanks in advance
2014/05/18
[ "https://Stackoverflow.com/questions/23726365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1472645/" ]
Turns out the solution is simply to wait a day. Who would've thought!
I was also getting the same error while using the python-twitter module in my script, but it resolved itself when I tried again after an interval. Since there is a limit on the number of attempts within a given interval, we get this error when we exceed that maximum retry limit.
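If waiting manually is not an option, a simple exponential backoff around the reconnect is the usual pattern; this is only an illustrative sketch around the question's stream call, not tweepy's built-in handling:

```
import time

def filter_with_backoff(stream, max_attempts=5):
    # Retry the streaming connection, doubling the wait after each failure.
    delay = 60
    for attempt in range(max_attempts):
        try:
            stream.filter(track=["North"])
            return
        except Exception as exc:
            print("stream failed (%s), retrying in %ds" % (exc, delay))
            time.sleep(delay)
            delay *= 2
```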
33,340,442
I am trying to post [some data via ajax](http://jsfiddle.net/g1wvryp7/) to our backend API, but the arrays within the json data get turned into weird things by jquery...for example, the backend (python) sees the jquery ajax data as a dict of two lists ``` {'subject': ['something'], 'members[]': ['joe','bob']} ``` when it should be ``` {'subject':'something','members':['joe','bob']} ``` The HTML Form extracted from a react component: ``` <div class="the_form"> <form onSubmit={this.handleSubmit}> <input type="textarea" ref="members" placeholder="spongebob, patrick" /> <input type="submit" value="Add Thread" /> </form> </div> ``` The jquery ajax code: ``` $.ajax({ beforeSend: function(xhr, settings) { // csrf validation }, url: this.props.url, dataType: 'json', type: 'POST', data: {subject: "something", members: ["joe","bob"]}, success: function(data) { this.setState({data: data}); }.bind(this), error: function(xhr, status, err) { console.log(this.props.url, status, err.toString()); }.bind(this) }); ``` I am able, however, to make such a request appropriately with httpie (simple http command line client): ``` echo '{"subject":"something", "members":["joe","bob"]}' | http --auth test:test POST localhost:8000/api/some_page/ --verbose ``` What might I be doing wrong in the javascript request such that the inputs come into the server differently than expected?
2015/10/26
[ "https://Stackoverflow.com/questions/33340442", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1449443/" ]
That shouldn't be a problem, since you only pass the reference to that list to other objects. That means you have only one big list. But you should be aware that every object that has a reference to that list can change it.
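The same reference semantics can be shown in Python in a couple of lines; an illustrative aside, since this document's examples are otherwise in Python:

```
a = [1, 2, 3]
b = a           # b is another name for the *same* list object
b.append(4)
print(a)        # [1, 2, 3, 4] -- any holder of the reference sees the change
```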
Well, in Java, you only pass objects "by reference"... From the link in the comments: > > Let’s be a little bit more specific by what we mean here: objects are > passed by reference – meaning that a reference/memory address is > passed when an object is assigned to another – BUT (and this is what’s > important) that reference is actually passed by value. > > >
62,733,213
I'm trying to figure out how to read a file from Azure blob storage. Studying its documentation, I can see that the [download\_blob](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python#download-blob-offset-none--length-none----kwargs-) method seems to be the main way to access a blob. This method, though, seems to require downloading the whole blob into a file or some other stream. Is it possible to read a file from Azure Blob Storage line by line as a stream from the service? (And without having to have downloaded the whole thing first)
2020/07/04
[ "https://Stackoverflow.com/questions/62733213", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1255356/" ]
**Update 0710:** In the latest SDK [azure-storage-blob 12.3.2](https://pypi.org/project/azure-storage-blob/), we can also do the same thing by using `download_blob`. The screenshot of the source code of `download_blob`: [![enter image description here](https://i.stack.imgur.com/eFH3j.jpg)](https://i.stack.imgur.com/eFH3j.jpg) So just provide an `offset` and `length` parameter, like below (it works as per my test): ``` blob_client.download_blob(60,100) ``` --- **Original answer:** You cannot read the blob file line by line, but you can read it in byte ranges. For example, first read 10 bytes of the data, then you can continue to read the next 10 to 20 bytes, etc. This is only available in the older version of [python blob storage sdk 2.1.0](https://pypi.org/project/azure-storage-blob/2.1.0/). Install it like below: ``` pip install azure-storage-blob==2.1.0 ``` Here is the sample code (here I read the text, but you can change it to use the `get_blob_to_stream(container_name,blob_name,start_range=0,end_range=10)` method to read a stream): ``` from azure.storage.blob import BlockBlobService, PublicAccess accountname="xxxx" accountkey="xxxx" blob_service_client = BlockBlobService(account_name=accountname,account_key=accountkey) container_name="test2" blob_name="a5.txt" #get the length of the blob file, you can use it if you need a loop in your code to read a blob file. blob_property = blob_service_client.get_blob_properties(container_name,blob_name) print("the length of the blob is: " + str(blob_property.properties.content_length) + " bytes") print("**********") #get the first 10 bytes data b1 = blob_service_client.get_blob_to_text(container_name,blob_name,start_range=0,end_range=10) #you can use the method below to read stream #blob_service_client.get_blob_to_stream(container_name,blob_name,start_range=0,end_range=10) print(b1.content) print("*******") #get the next range of data b2=blob_service_client.get_blob_to_text(container_name,blob_name,start_range=10,end_range=50) print(b2.content) print("********") #get the next range of data b3=blob_service_client.get_blob_to_text(container_name,blob_name,start_range=50,end_range=200) print(b3.content) ```
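For completeness, the equivalent range read with the v12 SDK looks roughly like this; the connection string, container and blob names are placeholders, and only the `offset`/`length` usage is taken from the update above:

```
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="test2", blob_name="a5.txt")

# Download only bytes 0..99 of the blob instead of the whole file.
chunk = blob.download_blob(offset=0, length=100).readall()
print(chunk)
```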
The accepted answer [here](https://stackoverflow.com/questions/33091830/how-best-to-convert-from-azure-blob-csv-format-to-pandas-dataframe-while-running) may be of use to you. The documentation can be found [here](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python-previous).
23,201,351
I know this is a basic question, but I'm new to python and can't figure out how to solve it. I have a list like the next example: ``` entities = ["#1= IFCORGANIZATION($,'Autodesk Revit 2014 (ENU)',$,$,$);", "#5= IFCAPPLICATION(#1,'2014','Autodesk Revit 2014 (ENU)','Revit');"] ``` My problem is how to add the information from the list `"entities"` to a dictionary in the following format: ``` dic = {'#1= IFCORGANIZATION' : ['$','Autodesk Revit 2014 (ENU)','$','$','$'], '#5= IFCAPPLICATION' : ['#1','2014','Autodesk Revit 2014 (ENU)','Revit']} ``` I tried to do this using `"find"` but I'm getting the following error: `'list' object has no attribute 'find'`, and I don't know how to do this without the find method.
2014/04/21
[ "https://Stackoverflow.com/questions/23201351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3556883/" ]
If you want to know if a value is in a list you can use `in`, like this: ``` >>> my_list = ["one", "two", "three"] >>> "two" in my_list True >>> ``` If you need to get the position of the value in the list you must use `index`: ``` >>> my_list.index("two") 1 >>> ``` Note that the first element of the list has the 0 index.
Here you go: ``` >>> import re >>> import ast >>> entities = ["#1= IFCORGANIZATION('$','Autodesk Revit 2014 (ENU)','$','$','$');", "#5= IFCAPPLICATION('#1','2014','Autodesk Revit 2014 (ENU)','Revit');"] >>> entities = [a.strip(';') for a in entities] >>> pattern = re.compile(r'\((.*)\)') >>> dic = {} >>> for a in entities: ... s = re.search(pattern, a) ... dic[a[:a.index(s.group(0))]] = list(ast.literal_eval(s.group(0))) >>> dic {'#5= IFCAPPLICATION': ['#1', '2014', 'Autodesk Revit 2014 (ENU)', 'Revit'], '#1= IFCORGANIZATION': ['$', 'Autodesk Revit 2014 (ENU)', '$', '$', '$']} ``` This regex `r'\((.*)\)'` looks for elements between `(` and `)` and converts them to a list. It makes the substring appearing before the brackets the key and the list the value.
23,201,351
I know this is a basic question, but I'm new to python and can't figure out how to solve it. I have a list like the next example: ``` entities = ["#1= IFCORGANIZATION($,'Autodesk Revit 2014 (ENU)',$,$,$);", "#5= IFCAPPLICATION(#1,'2014','Autodesk Revit 2014 (ENU)','Revit');"] ``` My problem is how to add the information from the list `"entities"` to a dictionary in the following format: ``` dic = {'#1= IFCORGANIZATION' : ['$','Autodesk Revit 2014 (ENU)','$','$','$'], '#5= IFCAPPLICATION' : ['#1','2014','Autodesk Revit 2014 (ENU)','Revit']} ``` I tried to do this using `"find"` but I'm getting the following error: `'list' object has no attribute 'find'`, and I don't know how to do this without the find method.
2014/04/21
[ "https://Stackoverflow.com/questions/23201351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3556883/" ]
If you want to know whether a value is in a list, you can use `in`, like this:

```
>>> my_list = ["one", "two", "three"]
>>> "two" in my_list
True
>>>
```

If you need to get the position of the value in the list, use `index`:

```
>>> my_list.index("two")
1
>>>
```

Note that the first element of the list has index 0.
You could use [`str.split`](https://docs.python.org/2/library/stdtypes.html#str.split) to deal with strings. First split each element string on `'('`, with maxsplit set to 1:

```
In [48]: dic=dict(e[:-1].split('(', 1) for e in entities) #using [:-1] to drop the trailing ';'
    ...: print dic
    ...:
{'#5= IFCAPPLICATION': "#1,'2014','Autodesk Revit 2014 (ENU)','Revit')", '#1= IFCORGANIZATION': "$,'Autodesk Revit 2014 (ENU)',$,$,$)"}
```

then split each value in the dict on `','`:

```
In [55]: dic={k: dic[k][:-1].split(',') for k in dic}
    ...: print dic
{'#5= IFCAPPLICATION': ['#1', "'2014'", "'Autodesk Revit 2014 (ENU)'", "'Revit'"], '#1= IFCORGANIZATION': ['$', "'Autodesk Revit 2014 (ENU)'", '$', '$', '$']}
```

Note that the key-value pairs in a dict are unordered, which is why `'#1= IFCORGANIZATION'` does not come first.
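The two steps can also be collapsed into a single dict comprehension; a sketch in Python 3 syntax that, like the version above, keeps the quote characters around quoted values:

```python
entities = ["#1= IFCORGANIZATION($,'Autodesk Revit 2014 (ENU)',$,$,$);",
            "#5= IFCAPPLICATION(#1,'2014','Autodesk Revit 2014 (ENU)','Revit');"]

# Split once on '(' for the key, then strip the trailing ');' and split on ','.
dic = {k: v.rstrip(');').split(',') for k, v in (e.split('(', 1) for e in entities)}
print(dic)
```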
23,201,351
I know this is a basic question, but I'm new to Python and can't figure out how to solve it. I have a list like the following example:

```
entities = ["#1= IFCORGANIZATION($,'Autodesk Revit 2014 (ENU)',$,$,$);", "#5= IFCAPPLICATION(#1,'2014','Autodesk Revit 2014 (ENU)','Revit');"]
```

My problem is how to add the information from the list `"entities"` to a dictionary in the following format:

```
dic = {'#1= IFCORGANIZATION' : ['$','Autodesk Revit 2014 (ENU)','$','$','$'], '#5= IFCAPPLICATION' : ['#1','2014','Autodesk Revit 2014 (ENU)','Revit']}
```

I tried to do this using `"find"` but I'm getting the following error: `'list' object has no attribute 'find'`, and I don't know how to do this without the find method.
2014/04/21
[ "https://Stackoverflow.com/questions/23201351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3556883/" ]
You could use [`str.split`](https://docs.python.org/2/library/stdtypes.html#str.split) to deal with strings. First split each element string on `'('`, with maxsplit set to 1:

```
In [48]: dic=dict(e[:-1].split('(', 1) for e in entities) #using [:-1] to drop the trailing ';'
    ...: print dic
    ...:
{'#5= IFCAPPLICATION': "#1,'2014','Autodesk Revit 2014 (ENU)','Revit')", '#1= IFCORGANIZATION': "$,'Autodesk Revit 2014 (ENU)',$,$,$)"}
```

then split each value in the dict on `','`:

```
In [55]: dic={k: dic[k][:-1].split(',') for k in dic}
    ...: print dic
{'#5= IFCAPPLICATION': ['#1', "'2014'", "'Autodesk Revit 2014 (ENU)'", "'Revit'"], '#1= IFCORGANIZATION': ['$', "'Autodesk Revit 2014 (ENU)'", '$', '$', '$']}
```

Note that the key-value pairs in a dict are unordered, which is why `'#1= IFCORGANIZATION'` does not come first.
Here you go:

```
>>> import re
>>> import ast
>>> entities = ["#1= IFCORGANIZATION('$','Autodesk Revit 2014 (ENU)','$','$','$');", "#5= IFCAPPLICATION('#1','2014','Autodesk Revit 2014 (ENU)','Revit');"]
>>> entities = [a.strip(';') for a in entities]
>>> pattern = re.compile(r'\((.*)\)')
>>> dic = {}
>>> for a in entities:
...     s = re.search(pattern, a)
...     dic[a[:a.index(s.group(0))]] = list(ast.literal_eval(s.group(0)))
...
>>> dic
{'#5= IFCAPPLICATION': ['#1', '2014', 'Autodesk Revit 2014 (ENU)', 'Revit'], '#1= IFCORGANIZATION': ['$', 'Autodesk Revit 2014 (ENU)', '$', '$', '$']}
```

The regex `r'\((.*)\)'` captures everything between the outer parentheses, and `ast.literal_eval` turns that captured text into a tuple, which is then converted to a list. The substring before the parentheses becomes the key, and the list becomes the value. Note that this version of `entities` quotes every argument (`'$'`, `'#1'`), since `ast.literal_eval` only accepts valid Python literals.
69,637,510
I am trying to add a local KVM machine dynamically to the Ansible inventory with ansible 2.11.6.

```
ansible [core 2.11.6]
  config file = /home/ansible/ansible.cfg
  configured module search path = ['/home/ansible/library']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.0.2
  libyaml = True
```

I create the KVM successfully, start it, wait for port 22, and try to add it to the inventory with the following task in play "A":

```
- name: "{{libvirt_maschine_name}}: Add VM to in-memory inventory"
  local_action:
    module: add_host
    name: "{{libvirt_maschine_name}}"
    groups: libvirt
    ansible_ssh_private_key_file: "{{ansible_user_home}}/.ssh/{{libvirt_maschine_name}}-ssh.key"
    ansible_default_ipv4: "{{vm_ip}}"
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    ansible_host: "{{vm_ip}}"
```

When I output the content of hostvars in play "B", I see the groups and hostname as expected:

```
...
"group_names": [
    "libvirt"
],
"groups": {
    "all": [
        "ansible",
        "k8smaster"
    ],
    "libvirt": [
        "k8smaster"
    ],
    "local_ansible": [
        "ansible"
    ],
    "ungrouped": []
},
...
```

When I add

```
- debug: var=group_names
- debug: var=play_hosts
```

to my play "B", I get just the static information of my inventory.

```
TASK [debug] ****************************************************************************************************************************************************************************************************
ok: [ansible] => {
    "group_names": [
        "local_ansible"
    ]
}

TASK [debug] ****************************************************************************************************************************************************************************************************
ok: [ansible] => {
    "play_hosts": [
        "ansible"
    ]
}
```

My inventory.ini looks like

```
[all]
ansible ansible_host=localhost

[local_ansible]
ansible ansible_host=localhost

[local_ansible:vars]
ansible_ssh_private_key_file=~/.ssh/ansible.key
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ansible_user=ansible
```

Here is a minimal example:

```
---
- name: "Play A"
  hosts: all
  become: yes
  gather_facts: yes
  tasks:
    - name: "Import variables from file"
      include_vars:
        file: k8s-single-node_vars.yaml

    - name: "Do some basic stuff"
      include_role:
        name: ansible-core

    - name: "Add VM to in-memory inventory"
      add_host:
        name: "myMaschine"
        groups: myGroup
        ansible_ssh_private_key_file: "test.key"
        ansible_default_ipv4: "192.168.1.1"
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
        ansible_host: "192.168.1.1"

- name: "Play B"
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - debug: var=hostvars
    - debug: var=group_names
    - debug: var=play_hosts

    - name: test-ping
      ping:
```

Therefore, I am not able to run any task against the VM, because Ansible is completely ignoring it. A ping only works against the host "ansible". Any idea what I am doing wrong here?
2021/10/19
[ "https://Stackoverflow.com/questions/69637510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8478100/" ]
Next.js is a framework for React that helps developers manage server-side rendering (SSR). SSR has many benefits, including caching specific pages (or caching only what is public and leaving user-specific or auth-required data to load on the frontend). Because Next.js renders on the server, it sometimes uses the `ReactDOMServer.renderToString()` function in Node.js: it builds the full page as HTML and sends it to the user browsing the site. Next.js generates the page HTML to maximize the capabilities of CDNs and improve your page's SEO.

So it does not only render the React page as HTML; it also makes the API requests and `await`s their results, which lets it render the list of elements the API responded with. This lets developers take advantage of the dynamic aspects of React and run JavaScript functions within the rendering code (like `{products.length <= 0 && <EmptyStateDiv type='products' />}`), but sadly you cannot use functionality that lives only in the client's/user's browser. In practice there are three tiers: functionality built into JavaScript itself (like Array prototype methods) can be used without a second thought; functionality like `fetch` can be used cross-platform on Node.js and in the frontend/React, but only thanks to cross-platform libraries like `isomorphic-fetch`; and finally, some functionality lives only within the browser and is not native to JavaScript. That last tier especially includes methods and properties of the specific user's browser: it might be great to check something like `document.innerWidth > 1600`, but that isn't possible, since the rendering code runs before a specific client has rendered the page. Next.js builds the page on the server side, where things like `document`/`window` are not defined and where it wouldn't make sense for them to exist. (Though you can probably optimize and cache different experiences for mobile vs. desktop users by reading some of the client's headers.)

While the code runs on the server in Node.js (server-side rendering), `window` is not defined in the Node.js runtime, so it can crash before rendering. It also wouldn't make sense for `window` to be defined on the server, as `window` typically contains browser-specific properties like `clientHeight`/`clientWidth`, or lets a user do client-side redirects with `window.location.assign`, which would be impossible on the server.
If this code runs on the server as part of [pre-rendering](https://nextjs.org/docs/basic-features/pages#pre-rendering) (either server-side rendering or static rendering), there will be no `window` (and hence no `window.btoa` for base64 encoding), since there is no browser. Node.js's `Buffer` can be used instead.
70,290,737
I tried to use this command in cmd to install the module certifi:

```
pip install certifi
```

But it throws a warning like this:

```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```

How can I fix it and install certifi? (Python 3.9.6)
2021/12/09
[ "https://Stackoverflow.com/questions/70290737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17291416/" ]
There is no question in your title or description, but a mathematical resolution to your problem could be to assign numbers to your directions, for example up=1, down=-1, left=2, right=-2. Opposite directions then sum to zero, so on a keypress that changes direction you can check:

```
if not actualDirection + newDirection:
    # don't do anything, since this would be a reversal
else:
    # do your action
```
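A runnable sketch of this numeric encoding; the class and method names are hypothetical, not from the original post:

```python
# Opposite directions sum to zero, so a reversal is detected with one addition.
DIRECTIONS = {"up": 1, "down": -1, "left": 2, "right": -2}

class Snake:
    def __init__(self):
        self.direction = "up"

    def turn(self, new_direction):
        # Ignore the keypress if it would reverse the current direction.
        if DIRECTIONS[self.direction] + DIRECTIONS[new_direction] != 0:
            self.direction = new_direction

snake = Snake()
snake.turn("down")      # ignored: up + down == 0
snake.turn("left")      # accepted
print(snake.direction)  # left
```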
You can check if the new direction is different from the old one. If it is different, you update the direction; otherwise you keep the same direction:

```
def new_dir(self, new_dir):
    return new_dir if new_dir != self.direction else self.direction

def move_up(self):
    self.direction = self.new_dir("up")

def move_down(self):
    self.direction = self.new_dir("down")

def move_right(self):
    self.direction = self.new_dir("right")

def move_left(self):
    self.direction = self.new_dir("left")
```
70,290,737
I tried to use this command in cmd to install the module certifi:

```
pip install certifi
```

But it throws a warning like this:

```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```

How can I fix it and install certifi? (Python 3.9.6)
2021/12/09
[ "https://Stackoverflow.com/questions/70290737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17291416/" ]
That's how I'd do it. It avoids walking into itself and also prevents doing "fake" turns into the direction you are already following. ``` def new_dir(self, direction): if self.direction == "up" or self.direction == "down": if direction == "left": self.move_left() if direction == "right": self.move_right() if self.direction == "left" or self.direction == "right": if direction == "up": self.move_up() if direction == "down": self.move_down() def move_up(self): self.direction = "up" def move_down(self): self.direction = "down" def move_right(self): self.direction = "right" def move_left(self): self.direction = "left" ```
There is no question in your title or description, but a mathematical resolution to your problem could be to assign numbers to your directions, for example up=1, down=-1, left=2, right=-2. Opposite directions then sum to zero, so on a keypress that changes direction you can check:

```
if not actualDirection + newDirection:
    # don't do anything, since this would be a reversal
else:
    # do your action
```
70,290,737
I tried to use this command in cmd to install the module certifi:

```
pip install certifi
```

But it throws a warning like this:

```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```

How can I fix it and install certifi? (Python 3.9.6)
2021/12/09
[ "https://Stackoverflow.com/questions/70290737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17291416/" ]
```py
def move_up(self):
    if self.direction != "down":
        self.direction = "up"

def move_down(self):
    if self.direction != "up":
        self.direction = "down"

def move_right(self):
    if self.direction != "left":
        self.direction = "right"

def move_left(self):
    if self.direction != "right":
        self.direction = "left"
```
You can check if the new direction is different from the old one. If it is different, you update the direction; otherwise you keep the same direction:

```
def new_dir(self, new_dir):
    return new_dir if new_dir != self.direction else self.direction

def move_up(self):
    self.direction = self.new_dir("up")

def move_down(self):
    self.direction = self.new_dir("down")

def move_right(self):
    self.direction = self.new_dir("right")

def move_left(self):
    self.direction = self.new_dir("left")
```
70,290,737
I tried to use this command in cmd to install the module certifi:

```
pip install certifi
```

But it throws a warning like this:

```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```

How can I fix it and install certifi? (Python 3.9.6)
2021/12/09
[ "https://Stackoverflow.com/questions/70290737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17291416/" ]
That's how I'd do it. It avoids walking into itself and also prevents doing "fake" turns into the direction you are already following. ``` def new_dir(self, direction): if self.direction == "up" or self.direction == "down": if direction == "left": self.move_left() if direction == "right": self.move_right() if self.direction == "left" or self.direction == "right": if direction == "up": self.move_up() if direction == "down": self.move_down() def move_up(self): self.direction = "up" def move_down(self): self.direction = "down" def move_right(self): self.direction = "right" def move_left(self): self.direction = "left" ```
You can check if the new direction is different from the old one. If it is different, you update the direction; otherwise you keep the same direction:

```
def new_dir(self, new_dir):
    return new_dir if new_dir != self.direction else self.direction

def move_up(self):
    self.direction = self.new_dir("up")

def move_down(self):
    self.direction = self.new_dir("down")

def move_right(self):
    self.direction = self.new_dir("right")

def move_left(self):
    self.direction = self.new_dir("left")
```
70,290,737
I tried to use this command in cmd to install the module certifi:

```
pip install certifi
```

But it throws a warning like this:

```
WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages)
```

How can I fix it and install certifi? (Python 3.9.6)
2021/12/09
[ "https://Stackoverflow.com/questions/70290737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17291416/" ]
That's how I'd do it. It avoids walking into itself and also prevents doing "fake" turns into the direction you are already following. ``` def new_dir(self, direction): if self.direction == "up" or self.direction == "down": if direction == "left": self.move_left() if direction == "right": self.move_right() if self.direction == "left" or self.direction == "right": if direction == "up": self.move_up() if direction == "down": self.move_down() def move_up(self): self.direction = "up" def move_down(self): self.direction = "down" def move_right(self): self.direction = "right" def move_left(self): self.direction = "left" ```
```py
def move_up(self):
    if self.direction != "down":
        self.direction = "up"

def move_down(self):
    if self.direction != "up":
        self.direction = "down"

def move_right(self):
    if self.direction != "left":
        self.direction = "right"

def move_left(self):
    if self.direction != "right":
        self.direction = "left"
```
32,277,153
I'm using wxpython to code this simple form. A notebook with a scroll bar and a few text controls is what I have used. I can see the widgets which are viewable on screen, but the ones which need to be scrolled down are not visible. In my code below I can see up to "Enter the Logs" and the appropriate text control for that field, but the review fields are missing, along with the submit and cancel buttons.

```
import wx
import wx.lib.filebrowsebutton as filebrowse

class Frame ( wx.Frame ):

    def __init__( self, parent ):
        wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = u"Test", pos = wx.DefaultPosition, size = wx.Size( 600,300 ), style = wx.DEFAULT_FRAME_STYLE|wx.TAB_TRAVERSAL )
        self.SetSizeHintsSz( wx.DefaultSize, wx.DefaultSize )
        sizer = wx.BoxSizer( wx.VERTICAL )
        self.notebook = wx.Notebook( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, 0 )
        self.login = wx.Panel( self.notebook, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL )
        self.notebook.AddPage( self.login, u"Login", False )
        self.scroll = wx.ScrolledWindow( self.notebook, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.HSCROLL|wx.VSCROLL )
        vbox = wx.BoxSizer(wx.VERTICAL)

        # Sizer for widgets inside tabs
        inside_sizer_h1 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h2 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h3 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h4 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h5 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h6 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h7 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h8 = wx.BoxSizer(wx.HORIZONTAL)
        inside_sizer_h9 = wx.BoxSizer(wx.HORIZONTAL)

        #Test Approve Label
        self.test_app_label = wx.StaticText(self.scroll, -1 , label="Test Approved By :")
        inside_sizer_h1.Add(self.test_app_label, 1, wx.ALL,5)
        #Test Approve Combo
        self.tes_app_combo = wx.ComboBox(self.scroll, -1, value='None', choices= ['None', 'approver1', 'approver2', 'approver3', 'approver4'] )
        inside_sizer_h1.Add(self.tes_app_combo, 1, wx.ALL, 5 )
        #Workspace Label
        self.wrksp_label = wx.StaticText(self.scroll, -1 , label="Workspace :")
        inside_sizer_h2.Add(self.wrksp_label, 1, wx.ALL,5)
        #Workspace file selector
        self.select_wrksp_dir = filebrowse.DirBrowseButton(self.scroll, -1,labelText = "", toolTip = 'Select tip of your workspace')
        inside_sizer_h2.Add(self.select_wrksp_dir, 1, wx.ALL|wx.EXPAND, 5 )
        # Issuelist label
        self.ar_list_label = wx.StaticText(self.scroll, -1 , label="Issue List :")
        inside_sizer_h3.Add(self.ar_list_label, 1, wx.ALL,5)
        # Issue Text box
        self.ar_list_text = wx.TextCtrl(self.scroll, -1, value=u"Enter The issue, one per line", style=wx.TE_MULTILINE)
        inside_sizer_h3.Add(self.ar_list_text, 1, wx.ALL, 5 )
        # Summary of change Title
        self.change_summary_label = wx.StaticText(self.scroll, -1 , label=u"Summary of change :")
        inside_sizer_h4.Add(self.change_summary_label, 1, wx.ALL, 5)
        # Summary of change Text Box
        self.change_summary_text = wx.TextCtrl(self.scroll, -1, value=u"What componet has changed?",style=wx.TE_MULTILINE)
        inside_sizer_h4.Add(self.change_summary_text, 1, wx.ALL, 5 )
        # Changed File List Title
        self.change_file_list_label = wx.StaticText(self.scroll, -1 , label=u"Changed File List :")
        inside_sizer_h5.Add(self.change_file_list_label,1, wx.ALL, 5)
        # Changed File List Box
        self.change_summary_text = wx.TextCtrl(self.scroll, -1, u' enter list of changed files',style=wx.TE_MULTILINE)
        inside_sizer_h5.Add(self.change_summary_text,1, wx.ALL, 5)
        # GUI Testing done label
        self.testing_done_label = wx.StaticText(self.scroll, -1 , label=u"What tests have you done? :")
        inside_sizer_h6.Add(self.testing_done_label,1, wx.ALL, 5)
        #FlexGUi Checkbox
        self.gui_check_list = wx.CheckListBox(self.scroll, -1, choices=['GUI Builds Successfully', 'GUI Automation Tests', 'CLI Automation Tests'])
        inside_sizer_h6.Add(self.gui_check_list,1, wx.ALL, 5)
        # GUI Automation test logs label
        self.gui_auto_log_label = wx.StaticText(self.scroll, -1 , label=u"Enter the logs :")
        inside_sizer_h7.Add(self.gui_auto_log_label,1, wx.ALL, 5)
        #GUI Automation test box
        self.gui_auto_log = wx.TextCtrl(self.scroll, -1, u'Copy and paste the logs.',style=wx.TE_MULTILINE)
        inside_sizer_h7.Add(self.gui_auto_log,1, wx.ALL, 5)
        # Review URL Text
        self.review_url_label = wx.StaticText(self.scroll, -1 , label=u"Code review URL :")
        inside_sizer_h8.Add(self.review_url_label,1, wx.ALL, 5)
        #Code Review Textbox
        self.review_url_tbox = wx.TextCtrl(self.scroll, -1, value=u"Enter the code review URL",style=wx.TE_MULTILINE)
        inside_sizer_h8.Add(self.review_url_tbox,1, wx.ALL, 5)
        #Submit button
        self.sub_button = wx.Button(self.scroll, label = 'Submit')
        inside_sizer_h9.Add(self.sub_button, wx.ALL, 5)
        #Cancel button
        self.canc_button = wx.Button(self.scroll, label = 'Cancel')
        inside_sizer_h9.Add(self.canc_button,1, wx.ALL, 5)

        vbox.Add(inside_sizer_h1, 0 , wx.TOP|wx.EXPAND, 40 )
        vbox.Add(inside_sizer_h2, 0 , wx.ALL|wx.EXPAND, 5 )
        vbox.Add(inside_sizer_h3, 0 , wx.ALL|wx.EXPAND, 5 )
        vbox.Add(inside_sizer_h4, 0 , wx.ALL|wx.EXPAND, 10)
        vbox.Add(inside_sizer_h5, 0 , wx.ALL|wx.EXPAND, 10)
        vbox.Add(inside_sizer_h6, 0 , wx.ALL|wx.EXPAND, 10)
        vbox.Add(inside_sizer_h7, 0 , wx.ALL|wx.EXPAND, 10)
        vbox.Add(inside_sizer_h8, 0 , wx.ALL|wx.EXPAND, 10)
        vbox.Add(inside_sizer_h9, 0 , wx.ALL|wx.EXPAND, 10)

        self.Maximize()
        self.scroll.Size = self.GetSize()
        print self.GetSize()
        self.scroll.SetScrollbars(20,25,45,50)
        self.SetSizer( vbox )
        self.SetSizerAndFit(vbox)
        self.Layout()
        self.notebook.AddPage( self.scroll, u"Delivery", True )
        sizer.Add( self.notebook, 1, wx.EXPAND |wx.ALIGN_RIGHT|wx.ALL, 0 )
        self.SetSizer( sizer )
        self.Layout()
        self.Centre( wx.BOTH )
        self.Show()

if __name__ == "__main__":
    app = wx.App()
    Frame(None)
    app.MainLoop()
```
2015/08/28
[ "https://Stackoverflow.com/questions/32277153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4672258/" ]
Your Java application is going to run in a Linux container, so you can use any Linux or Java method of setting the timezone. The easy ones that come to mind:

```
cf set-env <app-name> TZ 'America/Los_Angeles'
cf restage <app-name>
```

or

```
cf set-env <app-name> JAVA_OPTS '-Duser.timezone=Europe/Sofia'
cf restage <app-name>
```

The first should be generic to any application that respects the `TZ` environment variable. The second is specific to Java. If you don't want to use `cf set-env`, you could alternatively set the environment variables via your `manifest.yml` file.
To expand on @DanielMikusa's great answer, here is a sample manifest:

```
applications:
- path: .
  buildpack: nodejs_buildpack
  memory: 128M
  instances: 1
  name: sampleCronJobService
  health-check-type: process
  disk_quota: 1024M
  env:
    TZ: Etc/Greenwich
    CF_STAGING_TIMEOUT: 15
    CF_STARTUP_TIMEOUT: 15
    TIMEOUT: 180
    SuperSecret: passwordToTheWorld
```

Why I know it works:

```
var date = new Date();
var utc_hour = date.getUTCHours();
var local_hour = date.getHours();
console.log('(Timezone) Local Hour: ' + local_hour + ' UTC Hour: ' + utc_hour);
```

This prints: `(Timezone) Local Hour: 23 UTC Hour: 23`. If I set the manifest to have `TZ: America/Los_Angeles`, it prints: `(Timezone) Local Hour: 16 UTC Hour: 23`.
50,238,512
I have installed virtualenv on my system using <http://www.pythonforbeginners.com/basics/how-to-use-python-virtualenv>. According to these [guidelines](http://blog.niandrei.com/2016/03/01/install-tensorflow-on-ubuntu-with-virtualenv/#comment-21), the initial step is:

```
$ sudo apt-get install python-pip python-dev python-virtualenv
```

However, I do not want to touch my parent environment. The only reason I believe virtualenv might be of some help in my case is that I have some weird errors that point to Python version inconsistencies.

So my requirements are:

* virtualenv with e.g. python 3.5
* tensorflow
* no influence on my parent environment
* ability to disable virtualenv with no side effects

Is it doable, and how?
2018/05/08
[ "https://Stackoverflow.com/questions/50238512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9332343/" ]
Have you noticed that in the screenshot you're referencing version 2.5 of the OpenXml assembly, but the exception message references the newer 2.7.2? That could be your issue: you've referenced 2.5, but ClosedXML expects 2.7.2, and when it doesn't find it, it throws an error. I would get 2.7.2, update your reference, and then see if that works.
You have to go to NuGet and update the DocumentFormat.OpenXml reference to the latest one, which is 2.7, and the issue is fixed.
65,871,734
**Update: I literally tried 12 suggested solutions but nothing worked at all.** Is my question missing any details? The suggested answer doesn't solve the problem.

In Python I wrote:

```
print(s.cookies.get_dict())
```

where s is my session, and the output is:

```
{'lubl': 'https%3A%2F%2Fopenworld.com%2Fconfirm', 'rishum': 'SHjshd2398-'}
```

Now my question is: how can I edit the rishum cookie so that 'test' is appended to it (or, to make things simple, so that it is replaced by 'test')? For example, I want:

```
'rishum': 'SHjshd2398-test'
```

---

**Note: as someone suggested, I tried the following, but it didn't work:**

```
print(s.cookies.get_dict())
s.cookies.get_dict()['rishum'] = 'test'
print(s.cookies.get_dict())
```

The output before and after is:

```
{'lubl': 'confirm', 'rishum': 'SUqsadkjn239s8n-', 'PHPSESSID': 'nfdskjfn3k42342', 'authchallenge': 'asjkdnjnkj34'}
{'rishum': 'SUqsadkjn239s8n-', 'lubl': 'confirm', 'PHPSESSID': 'nfdskjfn3k42342', 'authchallenge': 'asjkdnjnkj34'}
```

Note the order has changed.
2021/01/24
[ "https://Stackoverflow.com/questions/65871734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
What you are looking for is [`pandas.DataFrame.applymap`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html), which applies a function element-wise: ``` df.applymap(lambda x: -1 if x < low else (1 if x > high else 0)) ``` The method [`pandas.DataFrame.apply`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html) applies a function along a given axis (default is column-wise).
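A quick usage sketch with made-up data and thresholds (`low`, `high`, and the DataFrame are placeholders, not from the original question):

```python
import pandas as pd

low, high = 2, 8  # hypothetical thresholds
df = pd.DataFrame({"a": [1, 5, 9], "b": [3, 10, 1]})

# Element-wise: -1 below `low`, 1 above `high`, 0 in between.
result = df.applymap(lambda x: -1 if x < low else (1 if x > high else 0))
print(result)
#    a  b
# 0 -1  0
# 1  0  1
# 2  1 -1
```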
You are sending the lambda the full dataframe, but you need to send it a column (and assign the result back, since `apply` does not modify in place):

```
for col in df.columns:
    df[col] = df[col].apply(lambda x: -1 if x < low else (1 if x > high else 0))
```
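As a side note, a vectorized alternative with `numpy.select` avoids the Python-level lambda entirely; a sketch under the same assumptions (numeric columns, hypothetical `low`/`high` thresholds):

```python
import numpy as np
import pandas as pd

low, high = 2, 8  # hypothetical thresholds
df = pd.DataFrame({"a": [1, 5, 9], "b": [3, 10, 1]})

# Conditions are evaluated in order; the default covers the middle band.
coded = pd.DataFrame(
    np.select([df < low, df > high], [-1, 1], default=0),
    index=df.index, columns=df.columns,
)
print(coded)
```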
56,105,090
I'm trying to upload a file using the built-in wagtaildocs application in my Wagtail application. My Ubuntu 16.04 server was set up with the Digital Ocean tutorial methods for Nginx | Gunicorn | Postgres.

Some initial clarifications:

1. In my Nginx config I've set `client_max_body_size 10000M;`
2. In my production settings I have the following lines: `MAX_UPLOAD_SIZE = "5242880000"` and `WAGTAILIMAGES_MAX_UPLOAD_SIZE = 5000 * 1024 * 1024`
3. My file type is a `.zip`
4. This is a production test at this point. I've only implemented a basic Wagtail application without any additional modules.

So as long as my file size is below 10 GB, I should be fine from a configuration standpoint, unless I'm missing something or am blind to a typo. I've already tried adjusting all the configuration values, even to unreasonably large values. I've tried using other file extensions, and it doesn't change my error. I assume this has to do with a TCP or SSL connection being closed during the session. I've never encountered this problem before, so I'd appreciate some help.

Here is my error message:

```
Internal Server Error: /admin/documents/multiple/add/
Traceback (most recent call last):
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
psycopg2.DatabaseError: SSL SYSCALL error: Operation timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
    response = get_response(request)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
    response = view_func(request, *args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/urls/__init__.py", line 102, in wrapper
    return view_func(request, *args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/decorators.py", line 34, in decorated_view
    return view_func(request, *args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/admin/utils.py", line 151, in wrapped_view_func
    return view_func(request, *args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/views/decorators/vary.py", line 20, in inner_func
    response = func(*args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/wagtail/documents/views/multiple.py", line 60, in add
    doc.save()
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 741, in save
    force_update=force_update, update_fields=update_fields)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 779, in save_base
    force_update, using, update_fields,
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 870, in _save_table
    result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/base.py", line 908, in _do_insert
    using=using, raw=raw)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/query.py", line 1186, in _insert
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1335, in execute_sql
    cursor.execute(sql, params)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 99, in execute
    return super().execute(sql, params)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/Users/wgarlock/Git/wagtaildev/lib/python3.7/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.DatabaseError: SSL SYSCALL error: Operation timed out
```

Here are my settings

```
### base.py ###
import os

PROJECT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = os.path.dirname(PROJECT_DIR)

SECRET_KEY = os.getenv('SECRET_KEY_WAGTAILDEV')

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/

# Application definition

INSTALLED_APPS = [
    'home',
    'search',
    'wagtail.contrib.forms',
    'wagtail.contrib.redirects',
    'wagtail.embeds',
    'wagtail.sites',
    'wagtail.users',
    'wagtail.snippets',
    'wagtail.documents',
    'wagtail.images',
    'wagtail.search',
    'wagtail.admin',
    'wagtail.core',
    'modelcluster',
    'taggit',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'storages',
]

MIDDLEWARE = [
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'wagtail.core.middleware.SiteMiddleware',
    'wagtail.contrib.redirects.middleware.RedirectMiddleware',
]

ROOT_URLCONF = 'wagtaildev.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(PROJECT_DIR, 'templates'),
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'wagtaildev.wsgi.application'

# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'HOST': os.getenv('DATABASE_HOST_WAGTAILDEV'),
        'USER': os.getenv('DATABASE_USER_WAGTAILDEV'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD_WAGTAILDEV'),
        'NAME': os.getenv('DATABASE_NAME_WAGTAILDEV'),
        'PORT': '5432',
    }
}

# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators

AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/

LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/

STATICFILES_FINDERS = [
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
]

STATICFILES_DIRS = [
    os.path.join(PROJECT_DIR, 'static'),
]

# ManifestStaticFilesStorage is recommended in production, to prevent outdated
# Javascript / CSS assets being served from cache (e.g. after a Wagtail upgrade).
# See https://docs.djangoproject.com/en/2.2/ref/contrib/staticfiles/#manifeststaticfilesstorage
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'

STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static/'

MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'

# Wagtail settings
WAGTAIL_SITE_NAME = "wagtaildev"

# Base URL to use when referring to full URLs within the Wagtail admin backend -
# e.g. in notification emails. Don't include '/admin' or a trailing slash
BASE_URL = 'http://example.com'

### production.py ###
from .base import *

DEBUG = True

ALLOWED_HOSTS = ['wagtaildev.wesgarlock.com', '127.0.0.1','134.209.230.125']

from wagtaildev.aws.conf import *

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

MAX_UPLOAD_SIZE = "5242880000"
WAGTAILIMAGES_MAX_UPLOAD_SIZE = 5000 * 1024 * 1024
FILE_UPLOAD_TEMP_DIR = str(os.path.join(BASE_DIR, 'tmp'))
```

Here are my Nginx settings

```
server {
    listen 80;
    server_name wagtaildev.wesgarlock.com;
    client_max_body_size 10000M;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/wesgarlock/run/wagtaildev.sock;
    }
}
```
2019/05/13
[ "https://Stackoverflow.com/questions/56105090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11490399/" ]
I was never able to solve this problem directly, but I did come up with a hack to get around it. I'm not a Wagtail or Django expert so I'm sure there is a proper solution to this answer, but anyway here's what I did. If you have any recommendations on improvement feel free to leave a comment. As a note this is really documentation to remind me what I did as well. There are many redundant lines of code at this point (05-25-19) because I Frankenstein'ed a lot of code together. I'll edit it down overtime. Here are the tutorials I Frankenstein'ed together to create this solution. 1. <https://www.codingforentrepreneurs.com/blog/large-file-uploads-with-amazon-s3-django/> 2. <http://docs.wagtail.io/en/v2.1.1/advanced_topics/documents/custom_document_model.html> 3. <https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html> 4. <https://medium.com/faun/summary-667d0fdbcdae> 5. <http://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-browser-credentials-federated-id.html> 6. <https://kite.com/python/examples/454/threading-wait-for-a-thread-to-finish> 7. <http://docs.celeryproject.org/en/latest/userguide/daemonizing.html#usage-systemd> There may be a few others, but these were the principles. Okay here we go. I create an app called "files" and then a Custom Document models a models.py file. You need to specify WAGTAILDOCS\_DOCUMENT\_MODEL = 'files.LargeDocument' in your settings file. The only reason I did this was to track the behavior I was changing more explicitly. This custom Document model simply extended the standard Document Model in Wagtail. ``` #models.py from django.db import models from wagtail.documents.models import AbstractDocument from wagtail.admin.edit_handlers import FieldPanel # Create your models here. class LargeDocument(AbstractDocument): admin_form_fields = ( 'file', ) panels = [ FieldPanel('file', classname='fn'), ] ``` Next you'll need to create a wagtail\_hook.py file with following content. ``` #wagtail_hook.py from wagtail.contrib.modeladmin.options import ( ModelAdmin, modeladmin_register) from .models import LargeDocument from .views import LargeDocumentAdminView class LargeDocumentAdmin(ModelAdmin): model = LargeDocument menu_label = 'Large Documents' # ditch this to use verbose_name_plural from model menu_icon = 'pilcrow' # change as required menu_order = 200 # will put in 3rd place (000 being 1st, 100 2nd) add_to_settings_menu = False # or True to add your model to the Settings sub-menu exclude_from_explorer = False # or True to exclude pages of this type from Wagtail's explorer view create_template_name ='large_document_index.html' # Now you just need to register your customised ModelAdmin class with Wagtail modeladmin_register(LargeDocumentAdmin) ``` This allows you to do 2 things: 1. Create a new menu item for uploading Large Documents while maintaining your standard document menu item with its standard functionality. 2. Specify a custom html file for handling large uploads. 
Here is the html ``` {% extends "wagtailadmin/base.html" %} {% load staticfiles cache %} {% load static wagtailuserbar %} {% load compress %} {% load underscore_hyphan_to_space %} {% load url_vars %} {% load pagination_value %} {% load static %} {% load i18n %} {% block titletag %}{{ view.page_title }}{% endblock %} {% block content %} {% include "wagtailadmin/shared/header.html" with title=view.page_title icon=view.header_icon %} <!-- Google Signin Button --> <div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"> </div> <!-- Select the file to upload --> <div class="input-group mb-3"> <link rel="stylesheet" href="{% static 'css/input.css'%}"/> <div class="custom-file"> <input type="file" class="custom-file-input" id="file" name="file"> <label id="file_label" class="custom-file-label" style="width:auto!important;" for="inputGroupFile02" aria-describedby="inputGroupFileAddon02">Choose file</label> </div> <div class="input-group-append"> <span class="input-group-text" id="file_submission_button">Upload</span> </div> <div id="start_progress"></div> </div> <div class="progress-upload"> <div class="progress-upload-bar" role="progressbar" style="width: 100%;" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100"></div> </div> {% endblock %} {% block extra_js %} {{ block.super }} {{ form.media.js }} <script src="https://apis.google.com/js/platform.js" async defer></script> <script src="https://sdk.amazonaws.com/js/aws-sdk-2.148.0.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <script src="{% static 'js/awsupload.js' %}"></script> {% endblock %} {% block extra_css %} {{ block.super }} {{ form.media.css }} <meta name="google-signin-client_id" content="847336061839-9h651ek1dv7u1i0t4edsk8pd20d0lkf3.apps.googleusercontent.com"> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous"> {% endblock %} ``` I then created some objects in views.py ``` #views.py from django.shortcuts import render # Create your views here. import base64 import hashlib import hmac import os import time from rest_framework import permissions, status, authentication from rest_framework.response import Response from rest_framework.views import APIView from .config_aws import ( AWS_UPLOAD_BUCKET, AWS_UPLOAD_REGION, AWS_UPLOAD_ACCESS_KEY_ID, AWS_UPLOAD_SECRET_KEY ) from .models import LargeDocument import datetime from wagtail.contrib.modeladmin.views import WMABaseView from django.db.models.fields.files import FieldFile from django.core.files import File import urllib.request from django.core.mail import send_mail from .tasks import file_creator class FilePolicyAPI(APIView): """ This view is to get the AWS Upload Policy for our s3 bucket. What we do here is first create a LargeDocument object instance in our Django backend. This is to include the LargeDocument instance in the path we will use within our bucket as you'll see below. """ permission_classes = [permissions.IsAuthenticated] authentication_classes = [authentication.SessionAuthentication] def post(self, request, *args, **kwargs): """ The initial post request includes the filename and auth credientails. In our case, we'll use Session Authentication but any auth should work. 
""" filename_req = request.data.get('filename') if not filename_req: return Response({"message": "A filename is required"}, status=status.HTTP_400_BAD_REQUEST) policy_expires = int(time.time()+5000) user = request.user username_str = str(request.user.username) """ Below we create the Django object. We'll use this in our upload path to AWS. Example: To-be-uploaded file's name: Some Random File.mp4 Eventual Path on S3: <bucket>/username/2312/2312.mp4 """ doc_obj = LargeDocument.objects.create(uploaded_by_user=user, ) doc_obj_id = doc_obj.id doc_obj.title=filename_req upload_start_path = "{location}".format( location = "LargeDocuments/", ) file_extension = os.path.splitext(filename_req) filename_final = "{title}".format( title= filename_req, ) """ Eventual file_upload_path includes the renamed file to the Django-stored LargeDocument instance ID. Renaming the file is done to prevent issues with user generated formatted names. """ final_upload_path = "{upload_start_path}/{filename_final}".format( upload_start_path=upload_start_path, filename_final=filename_final, ) if filename_req and file_extension: """ Save the eventual path to the Django-stored LargeDocument instance """ policy_document_context = { "expire": policy_expires, "bucket_name": AWS_UPLOAD_BUCKET, "key_name": "", "acl_name": "public-read", "content_name": "", "content_length": 524288000, "upload_start_path": upload_start_path, } policy_document = """ {"expiration": "2020-01-01T00:00:00Z", "conditions": [ {"bucket": "%(bucket_name)s"}, ["starts-with", "$key", "%(upload_start_path)s"], {"acl": "public-read"}, ["starts-with", "$Content-Type", "%(content_name)s"], ["starts-with", "$filename", ""], ["content-length-range", 0, %(content_length)d] ] } """ % policy_document_context aws_secret = str.encode(AWS_UPLOAD_SECRET_KEY) policy_document_str_encoded = str.encode(policy_document.replace(" ", "")) url = 'https://thearchmedia.s3.amazonaws.com/' policy = base64.b64encode(policy_document_str_encoded) signature = base64.b64encode(hmac.new(aws_secret, policy, hashlib.sha1).digest()) doc_obj.file_hash = signature doc_obj.path = final_upload_path doc_obj.save() data = { "policy": policy, "signature": signature, "key": AWS_UPLOAD_ACCESS_KEY_ID, "file_bucket_path": upload_start_path, "file_id": doc_obj_id, "filename": filename_final, "url": url, "username": username_str, } return Response(data, status=status.HTTP_200_OK) class FileUploadCompleteHandler(APIView): permission_classes = [permissions.IsAuthenticated] authentication_classes = [authentication.SessionAuthentication] def post(self, request, *args, **kwargs): file_id = request.POST.get('file') size = request.POST.get('fileSize') data = {} type_ = request.POST.get('fileType') if file_id: obj = LargeDocument.objects.get(id=int(file_id)) obj.size = int(size) obj.uploaded = True obj.type = type_ obj.file_hash obj.save() data['id'] = obj.id data['saved'] = True data['url']=obj.url return Response(data, status=status.HTTP_200_OK) class ModelFileCompletion(APIView): permission_classes = [permissions.IsAuthenticated] authentication_classes = [authentication.SessionAuthentication] def post(self, request, *args, **kwargs): file_id = request.POST.get('file') url = request.POST.get('aws_url') data = {} if file_id: obj = LargeDocument.objects.get(id=int(file_id)) file_creator.delay(obj.pk) data['test'] = 'process started' return Response(data, status=status.HTTP_200_OK) def LargeDocumentAdminView(request): context = super(WMABaseView, self).get_context(request) render(request, 
'modeladmin/files/index.html', context) ``` This views goes around the standard file handling system. I didn't want to abandon the standard file handling system or write a new one. This is the reason why I call this hack and a non ideal solution. ``` // javascript upload file "awsupload.js" var id_token; //token we get upon Authentication with Web Identiy Provider function onSignIn(googleUser) { var profile = googleUser.getBasicProfile(); // The ID token you need to pass to your backend: id_token = googleUser.getAuthResponse().id_token; } $(document).ready(function(){ // setup session cookie data. This is Django-related function getCookie(name) { var cookieValue = null; if (document.cookie && document.cookie !== '') { var cookies = document.cookie.split(';'); for (var i = 0; i < cookies.length; i++) { var cookie = jQuery.trim(cookies[i]); // Does this cookie string begin with the name we want? if (cookie.substring(0, name.length + 1) === (name + '=')) { cookieValue = decodeURIComponent(cookie.substring(name.length + 1)); break; } } } return cookieValue; } var csrftoken = getCookie('csrftoken'); function csrfSafeMethod(method) { // these HTTP methods do not require CSRF protection return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method)); } $.ajaxSetup({ beforeSend: function(xhr, settings) { if (!csrfSafeMethod(settings.type) && !this.crossDomain) { xhr.setRequestHeader("X-CSRFToken", csrftoken); } } }); // end session cookie data setup. // declare an empty array for potential uploaded files var fileItemList = [] $(document).on('click','#file_submission_button', function(event){ var selectedFiles = $('#file').prop('files'); formItem = $(this).parent() $.each(selectedFiles, function(index, item){ uploadFile(item) }) $(this).val(''); $('.progress-upload-bar').attr('aria-valuenow',progress); $('.progress-upload-bar').attr('width',progress.toString()+'%'); $('.progress-upload-bar').attr('style',"width:"+progress.toString()+'%'); $('.progress-upload-bar').text(progress.toString()+'%'); }) $(document).on('change','#file', function(event){ var selectedFiles = $('#file').prop('files'); $('#file_label').text(selectedFiles[0].name) }) function constructFormPolicyData(policyData, fileItem) { var contentType = fileItem.type != '' ? 
fileItem.type : 'application/octet-stream' var url = policyData.url var filename = policyData.filename var repsonseUser = policyData.user // var keyPath = 'www/' + repsonseUser + '/' + filename var keyPath = policyData.file_bucket_path var fd = new FormData() fd.append('key', keyPath + filename); fd.append('acl','private'); fd.append('Content-Type', contentType); fd.append("AWSAccessKeyId", policyData.key) fd.append('Policy', policyData.policy); fd.append('filename', filename); fd.append('Signature', policyData.signature); fd.append('file', fileItem); return fd } function fileUploadComplete(fileItem, policyData){ data = { uploaded: true, fileSize: fileItem.size, file: policyData.file_id, } $.ajax({ method:"POST", data: data, url: "/api/files/complete/", success: function(data){ displayItems(fileItemList) }, error: function(jqXHR, textStatus, errorThrown){ alert("An error occured, please refresh the page.") } }) } function modelComplete(policyData, aws_url){ data = { file: policyData.file_id, aws_url: aws_url } $.ajax({ method:"POST", data: data, url: "/api/files/modelcomplete/", success: console.log('model complete success') , error: function(jqXHR, textStatus, errorThrown){ alert("An error occured, please refresh the page.") } }) } function displayItems(fileItemList){ var itemList = $('.item-loading-queue') itemList.html("") $.each(fileItemList, function(index, obj){ var item = obj.file var id_ = obj.id var order_ = obj.order var html_ = "<div class=\"progress\">" + "<div class=\"progress-bar\" role=\"progressbar\" style='width:" + item.progress + "%' aria-valuenow='" + item.progress + "' aria-valuemin=\"0\" aria-valuemax=\"100\"></div></div>" itemList.append("<div>" + order_ + ") " + item.name + "<a href='#' class='srvup-item-upload float-right' data-id='" + id_ + ")'>X</a> <br/>" + html_ + "</div><hr/>") }) } function uploadFile(fileItem){ var policyData; var newLoadingItem; // get AWS upload policy for each file uploaded through the POST method // Remember we're creating an instance in the backend so using POST is // needed. $.ajax({ method:"POST", data: { filename: fileItem.name }, url: "/api/files/policy/", success: function(data){ policyData = data }, error: function(data){ alert("An error occured, please try again later") } }).done(function(){ // construct the needed data using the policy for AWS var file = fileItem; AWS.config.credentials = new AWS.WebIdentityCredentials({ RoleArn: 'arn:aws:iam::120974195102:role/thearchmedia-google-role', ProviderId: null, // this is null for Google WebIdentityToken: id_token // Access token from identity provider }); var bucket = 'thearchmedia' var key = 'LargeDocuments/'+file.name var aws_url = 'https://'+bucket+'.s3.amazonaws.com/'+ key var s3bucket = new AWS.S3({params: {Bucket: bucket}}); var params = {Key: key , ContentType: file.type, Body: file, ACL:'public-read', }; s3bucket.upload(params, function (err, data) { $('#results').html(err ? 'ERROR!' 
: 'UPLOADED :' + data.Location); }).on( 'httpUploadProgress', function(evt) { progress = parseInt((evt.loaded * 100) / evt.total) $('.progress-upload-bar').attr('aria-valuenow',progress) $('.progress-upload-bar').attr('width',progress.toString()+'%') $('.progress-upload-bar').attr('style',"width:"+progress.toString()+'%') $('.progress-upload-bar').text(progress.toString()+'%') }).send( function(err, data) { alert("File uploaded successfully.") fileUploadComplete(fileItem, policyData) modelComplete(policyData, aws_url) }); }) } }) ``` Explanation of .js and .view.py interaction First an Ajax call with file information in the head creates the Document object, but since the file never touches the server a "File" object is not created in the Document object. This "File" object contained the functionality I needed so I needed to do more. Next my javascript file uploads the file to my s3 bucket using the AWS Javascript SDK. The s3bucket.upload() function from the SDK is robust enough to upload files up to 5GB, but with a few other modifications not included it can upload up to 5TB (aws limit). After the file is uploaded to the s3 bucket, my final API call occurs. The final API call triggers a Celery task that downloads the file to a temporary directory on my remote server. Once the file exists on my remote server the File objects is created and saved to document model. The task.py file that handles the download of the file from S3 bucket to the remote server and then creates & saves the File object to the document file. ``` #task.py from .models import LargeDocument from celery import shared_task import urllib.request from django.core.mail import send_mail from django.core.files import File import threading @shared_task def file_creator(pk_num): obj = LargeDocument.objects.get(pk=pk_num) tmp_loc = 'tmp/'+ obj.title def downloadit(): urllib.request.urlretrieve('https://thearchmedia.s3.amazonaws.com/LargeDocuments/' + obj.title, tmp_loc) def after_dwn(): dwn_thread.join() #waits till thread1 has completed executing #next chunk of code after download, goes here send_mail( obj.title + ' has finished to downloading to the server', obj.title + 'Downloaded to server', 'info@thearchmedia.com', ['wes@wesgarlock.com'], fail_silently=False, ) reopen = open(tmp_loc, 'rb') django_file = File(reopen) obj.file = django_file obj.save() send_mail( obj.title + ' has finished to downloading to the server', 'File Model Created for' + obj.title, 'info@thearchmedia.com', ['wes@wesgarlock.com'], fail_silently=False, ) dwn_thread = threading.Thread(target=downloadit) dwn_thread.start() metadata_thread = threading.Thread(target=after_dwn) metadata_thread.start() ``` This is process needs to run in Celery because downloading large files takes time and I didn't want to wait around with a browser open. Also inside of this task.py is a python thread() which forces the process to wait until the file is successfully downloaded to the remote server. If you are new to Celery here is the start of their documentation (<http://docs.celeryproject.org/en/master/getting-started/introduction.html>) Also I've added some email notifications for confirmation that the processes completed. Final note I created a /tmp directory in my project and setup a daily delete of all files to give it the tmp functionality. ``` crontab -e find ~/thearchmedia/tmp -mtime +1 -delete ```
I would suspect that the exception `psycopg2.DatabaseError SSL SYSCALL error: Operation timed out` will happen if the droplet is running out of memory. Try to create a swap partition or extend your memory. [Creating a swap partition](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04)
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
Because the shell doesn't pass aliases on to child processes (except when you use $() or ``).

```
$ alias sayHello='/bin/echo "Hello world!"'
```

Now an alias is known in this shell process, which is fine but only works in this one shell process.

```
$ sayHello
Hello world!
```

Since you said "sayHello" in the same shell, it worked.

```
$ nohup sayHello
```

Here, the program "nohup" is started as a child process. Therefore, it will not receive the aliases. It then starts the child process "sayHello" - which isn't found.

For your specific problem, it's best to make the new "perl" and "python" look like the normal ones as much as possible. I'd suggest setting the search path. In your `~/.bash_profile` add

```
export PATH="/my/shiny/interpreters/bin:${PATH}"
```

Then re-login. Since this is an environment variable, it *will* be passed to all child processes, be they shells or not - it should now work in most cases.
If you look at the [Aliases](http://www.gnu.org/software/bash/manual/html_node/Aliases.html) section of the Bash manual, it says > > The first word of each simple command, if unquoted, is checked to see > if it has an alias. > > > Unfortunately, it doesn't seem like `bash` has anything like `zsh`'s [global aliases](http://zsh.sourceforge.net/Doc/Release/Shell-Builtin-Commands.html#index-alias), which are expanded in any position.
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
Because the shell doesn't pass aliases on to child processes (except when you use `$()` or backticks).

```
$ alias sayHello='/bin/echo "Hello world!"'
```

Now the alias is known in this shell process, which is fine, but it only works in this one shell process.

```
$ sayHello
Hello world!
```

Since you ran "sayHello" in the same shell, it worked.

```
$ nohup sayHello
```

Here, the program "nohup" is started as a child process. Therefore, it does not receive the aliases. It then tries to start its own child process "sayHello" - which isn't found.

For your specific problem, it's best to make the new "perl" and "python" look like the normal ones as much as possible. I'd suggest setting the search path. In your `~/.bash_profile` add

```
export PATH="/my/shiny/interpreters/bin:${PATH}"
```

Then re-login. Since this is an environment variable, it *will* be passed to all the child processes, be they shells or not - so it should now work in most cases.
For bash: try `` nohup `your_alias` `` - that is, put your alias within backquotes. It works for me.
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
Because the shell doesn't pass aliases on to child processes (except when you use `$()` or backticks).

```
$ alias sayHello='/bin/echo "Hello world!"'
```

Now the alias is known in this shell process, which is fine, but it only works in this one shell process.

```
$ sayHello
Hello world!
```

Since you ran "sayHello" in the same shell, it worked.

```
$ nohup sayHello
```

Here, the program "nohup" is started as a child process. Therefore, it does not receive the aliases. It then tries to start its own child process "sayHello" - which isn't found.

For your specific problem, it's best to make the new "perl" and "python" look like the normal ones as much as possible. I'd suggest setting the search path. In your `~/.bash_profile` add

```
export PATH="/my/shiny/interpreters/bin:${PATH}"
```

Then re-login. Since this is an environment variable, it *will* be passed to all the child processes, be they shells or not - so it should now work in most cases.
With bash, you can invoke a subshell interactively using the `-i` option. This will source your `.bashrc` as well as enable the `expand_aliases` shell option. Granted, this will only work if your *alias* is defined in your `.bashrc` which is the convention. **Bash manpage:** > > If the `-i` option is present, the shell is **interactive**. > > > **expand\_aliases**: If set, aliases are expanded as described above under ALIASES. This option is *enabled by default* for interactive shells. > > > When an interactive shell that is not a login shell is started, bash reads and executes commands from `/etc/bash.bashrc` and `~/.bashrc`, if these files exist. > > > --- ``` $ nohup bash -ci 'sayHello' ```
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
For bash: try `` nohup `your_alias` `` - that is, put your alias within backquotes. It works for me.
If you look at the [Aliases](http://www.gnu.org/software/bash/manual/html_node/Aliases.html) section of the Bash manual, it says > > The first word of each simple command, if unquoted, is checked to see > if it has an alias. > > > Unfortunately, it doesn't seem like `bash` has anything like `zsh`'s [global aliases](http://zsh.sourceforge.net/Doc/Release/Shell-Builtin-Commands.html#index-alias), which are expanded in any position.
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
With bash, you can invoke a subshell interactively using the `-i` option. This will source your `.bashrc` as well as enable the `expand_aliases` shell option. Granted, this will only work if your *alias* is defined in your `.bashrc` which is the convention. **Bash manpage:** > > If the `-i` option is present, the shell is **interactive**. > > > **expand\_aliases**: If set, aliases are expanded as described above under ALIASES. This option is *enabled by default* for interactive shells. > > > When an interactive shell that is not a login shell is started, bash reads and executes commands from `/etc/bash.bashrc` and `~/.bashrc`, if these files exist. > > > --- ``` $ nohup bash -ci 'sayHello' ```
If you look at the [Aliases](http://www.gnu.org/software/bash/manual/html_node/Aliases.html) section of the Bash manual, it says > > The first word of each simple command, if unquoted, is checked to see > if it has an alias. > > > Unfortunately, it doesn't seem like `bash` has anything like `zsh`'s [global aliases](http://zsh.sourceforge.net/Doc/Release/Shell-Builtin-Commands.html#index-alias), which are expanded in any position.
9,301,531
Why doesn't the following work? ``` $ alias sayHello='/bin/echo "Hello world!"' $ sayHello Hello world! $ nohup sayHello nohup: appending output to `nohup.out' nohup: cannot run command `sayHello': No such file or directory ``` (the reason I ask this question is because I've aliased my `perl` and `python` to different perl/python binaries which were optimized for my own purposes; however, nohup gives me troubles if I don't supply full path to my perl/python binaries)
2012/02/15
[ "https://Stackoverflow.com/questions/9301531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/884447/" ]
For bash: try `` nohup `your_alias` `` - that is, put your alias within backquotes. It works for me.
With bash, you can invoke a subshell interactively using the `-i` option. This will source your `.bashrc` as well as enable the `expand_aliases` shell option. Granted, this will only work if your *alias* is defined in your `.bashrc` which is the convention. **Bash manpage:** > > If the `-i` option is present, the shell is **interactive**. > > > **expand\_aliases**: If set, aliases are expanded as described above under ALIASES. This option is *enabled by default* for interactive shells. > > > When an interactive shell that is not a login shell is started, bash reads and executes commands from `/etc/bash.bashrc` and `~/.bashrc`, if these files exist. > > > --- ``` $ nohup bash -ci 'sayHello' ```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
Both commands are executed in different subshells. Setting a variable in the first `system` call does not affect the second `system` call. You need to put the two commands in one string (combining them with `;`).

```
>>> import os
>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
123
0
```

**NOTE** You need to use `"$GREPDB"` instead of `'$GREPDB'`. Otherwise it is interpreted literally instead of being expanded.

If you can use `subprocess`:

```
>>> import subprocess
>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,
...                 env={'GREPDB': 'echo 123'})
123
0
```
The solution below still initially invokes a shell, but it switches to bash for the command you are trying to execute: ``` os.system('/bin/bash -c "echo hello world"') ```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
Both commands are executed in different subshells. Setting a variable in the first `system` call does not affect the second `system` call. You need to put the two commands in one string (combining them with `;`).

```
>>> import os
>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
123
0
```

**NOTE** You need to use `"$GREPDB"` instead of `'$GREPDB'`. Otherwise it is interpreted literally instead of being expanded.

If you can use `subprocess`:

```
>>> import subprocess
>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,
...                 env={'GREPDB': 'echo 123'})
123
0
```
I use this:

```
subprocess.call(["bash", "-c", cmd])
```

(OK, ignore this - I hadn't noticed that `subprocess` isn't an option here.)
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
Both commands are executed in different subshells. Setting a variable in the first `system` call does not affect the second `system` call. You need to put the two commands in one string (combining them with `;`).

```
>>> import os
>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
123
0
```

**NOTE** You need to use `"$GREPDB"` instead of `'$GREPDB'`. Otherwise it is interpreted literally instead of being expanded.

If you can use `subprocess`:

```
>>> import subprocess
>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,
...                 env={'GREPDB': 'echo 123'})
123
0
```
I searched for this for days and finally found code that really works:

```
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -ne "' + code + '"')
```

Output:

```
abcde
```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
Both commands are executed in different subshells. Setting a variable in the first `system` call does not affect the second `system` call. You need to put the two commands in one string (combining them with `;`).

```
>>> import os
>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')
123
0
```

**NOTE** You need to use `"$GREPDB"` instead of `'$GREPDB'`. Otherwise it is interpreted literally instead of being expanded.

If you can use `subprocess`:

```
>>> import subprocess
>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,
...                 env={'GREPDB': 'echo 123'})
123
0
```
``` subprocess.Popen(cmd, shell=True, executable='/bin/bash') ```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
The solution below still initially invokes a shell, but it switches to bash for the command you are trying to execute: ``` os.system('/bin/bash -c "echo hello world"') ```
I use this:

```
subprocess.call(["bash", "-c", cmd])
```

(OK, ignore this - I hadn't noticed that `subprocess` isn't an option here.)
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
The solution below still initially invokes a shell, but it switches to bash for the command you are trying to execute: ``` os.system('/bin/bash -c "echo hello world"') ```
I searched for this for days and finally found code that really works:

```
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -ne "' + code + '"')
```

Output:

```
abcde
```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
The solution below still initially invokes a shell, but it switches to bash for the command you are trying to execute: ``` os.system('/bin/bash -c "echo hello world"') ```
``` subprocess.Popen(cmd, shell=True, executable='/bin/bash') ```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
I use this:

```
subprocess.call(["bash", "-c", cmd])
```

(OK, ignore this - I hadn't noticed that `subprocess` isn't an option here.)
I searched for this for days and finally found code that really works:

```
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -ne "' + code + '"')
```

Output:

```
abcde
```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
I use this:

```
subprocess.call(["bash", "-c", cmd])
```

(OK, ignore this - I hadn't noticed that `subprocess` isn't an option here.)
``` subprocess.Popen(cmd, shell=True, executable='/bin/bash') ```
21,822,054
I've tried what's told in [How to force /bin/bash interpreter for oneliners](https://stackoverflow.com/questions/20906073/how-to-force-bin-bash-interpreter-for-oneliners) By doing ``` os.system('GREPDB="my command"') os.system('/bin/bash -c \'$GREPDB\'') ``` However no luck, unfortunately I need to run this command with bash and subp isn't an option in this environment, I'm limited to python 2.4. Any suggestions to get me in the right direction?
2014/02/17
[ "https://Stackoverflow.com/questions/21822054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2453153/" ]
``` subprocess.Popen(cmd, shell=True, executable='/bin/bash') ```
I searched for this for days and finally found code that really works:

```
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -ne "' + code + '"')
```

Output:

```
abcde
```
14,402,654
extreme python/sql beginner here. I've looked around for some help with this but wasn't able to find exactly what I need- would really appreciate any assistance. As the title indicates, I have a very large text file that I want to parse into a sql database preferably using python. The text file is set up as so: ``` #Parent field 1.1 child 1.1 child 1.1 continued # Parent field 1.2 child 1.2 # Parent field 1.3 child 1.3 text child 1.3 text more child 1.3 text ... # Parent field 1.88 child 1.88 #Parent field 2.1 child 2.1 etc... ``` Some key points about the list: * the first field (i.e. 1.1, 2.1) has no space after the # * the length of each child row has variable character lengths and line breaks but there is always an empty line before the next parent * there are 88 fields for each parent * there are hundreds of parent fields Now, I'd like each parent field (1.1, 1.2, 1.3 --> .88) to be a column and the rows populated by subsequent numbers (2.1, 3.1 -->100s) Could someone help me set up a python script and give me some direction of how to begin parsing? Let me know if I haven't explained the task properly and I'll promptly provide more details. Thanks so much! Ben EDIT: I just realized that the # of columns is NOT constant 88, it is variable
2013/01/18
[ "https://Stackoverflow.com/questions/14402654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1756574/" ]
A few points:

1. From the description it seems you want your data denormalized into one table. That is generally not a good idea. Split your data into two tables: PARENT and CHILDREN. PARENT should contain an ID, and CHILDREN should have at least two columns: PARENT\_ID and CHILD\_VALUE (or something like it), with PARENT\_ID being the ID of the parent, whether linked explicitly as a foreign-key DB construct or not (depending on the database). Then, while parsing, INSERT the relevant records into the CHILDREN table with VALUES("1.1", "1.1childA"), VALUES("1.1", "1.1childB") and so on.
2. Parsing should be trivial: iterate line by line; on a "parent" line, change parent\_id and INSERT into the PARENT table, and read the child rows as you go, INSERTing those into the CHILDREN table. You could also do it in two passes. Something like this:

```
#!/usr/bin/python
parent = ''
child = ''
for line in open('input.txt'):
    if line.find('#Parent') > -1 or line.find('# Parent') > -1:
        parent = field_extract(line)  # function where you extract the parent value
        parent_id = ...               # write it down or generate it
        # INSERT into PARENT
    elif line:
        child = field_extract(line)
        # INSERT into CHILDREN with parent_id and child values
```

Although... I shudder when I see something so primitive. I'd urge you to learn the Pyparsing module - it is absolutely great for this kind of work.
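A runnable sketch of the two-table idea above, using sqlite3 and hypothetical table/column names (swap in your real DB driver):

```py
import re
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE parent (id INTEGER PRIMARY KEY, field TEXT)')
cur.execute('CREATE TABLE children (parent_id INTEGER, child_value TEXT)')

parent_id = None
with open('input.txt') as f:
    for line in f:
        line = line.strip()
        # Matches both "#Parent field 1.1" and "# Parent field 1.2"
        m = re.match(r'#\s*Parent field\s+(\S+)', line)
        if m:
            cur.execute('INSERT INTO parent (field) VALUES (?)', (m.group(1),))
            parent_id = cur.lastrowid
        elif line and parent_id is not None:
            cur.execute('INSERT INTO children (parent_id, child_value) VALUES (?, ?)',
                        (parent_id, line))
conn.commit()
```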
You should look into **file handling** in Python. The `open()` and `.readlines()` methods and lists will help you **a lot**. For example:

```
f = open("NAMEOFTXTFILE.TXT","r")  # r for read, w for write, a for append.
cell = f.readlines()  # Reads the contents into a list, one line per element
f.seek(0)             # Moves the file cursor back to the start of the document
print cell[2]         # Prints the third line of the file
```

From there, you can use `cell[2]` in your SQL statements.
36,064,495
Currently I need to make some distance calculations. For this I am trying the following in my ipython-notebook (version 4.0.4):

```
from geopy.distance import vincenty
ig_gruendau = (50.195883, 9.115557)
delphi = (49.99908, 19.84481)
print(vincenty(ig_gruendau, delphi).miles)
```

Unfortunately I receive the following error when running the code above:

```
ImportError: No module named 'geopy'
```

Since I am pretty new to Python, I wonder how I can install this module (without admin rights), or what other simple options I have for these calculations?

Thanks, ML
2016/03/17
[ "https://Stackoverflow.com/questions/36064495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5446609/" ]
You need to install the missing module in your Python installation. So you have to run the command:

```
pip install geopy
```

in your terminal. If you don't have pip, you'll have to install it using:

```
easy_install pip
```

and if both commands fail with `Permission denied`, then you'll have to either launch the command as root:

```
sudo easy_install pip
sudo pip install geopy
```

or, for pip, install it only for your user:

```
pip install geopy --user
```

And for future reference, whenever you get that kind of error:

```
ImportError: No module named 'XXXXX'
```

you can search for it on PyPI using pip:

```
% pip search XXXXX
```

and in your case:

```
% pip search geopy
tornado-geopy (0.1.0)  - tornado-geopy is an asynchronous version of the awesome geopy library.
geopy.jp (0.1.0)       - Geocoding library for Python.
geopy.jp-2.7 (0.1.0)   - Geocoding library for Python.
geopy (1.11.0)         - Python Geocoding Toolbox
```

HTH
Even if the `pip install` command succeeds, you may still have to use:

```
conda install -c conda-forge geopy
```

This pulls the package from the conda-forge channel on the Anaconda server, so that it gets installed into the Anaconda environment.
34,013,185
Let's say that I have this list in Python:

```
A = ["(a,1)", "(b,2)", "(c,3)", "(d,4)"]
```

How can I print it out in the following format:

```
(a,1), (b,2), (c,3), (d,4) 
```

in one line, preferably without using a for loop?

Thanks in advance.
2015/12/01
[ "https://Stackoverflow.com/questions/34013185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5482492/" ]
When A is a list of str: ``` print(', '.join(A)) ``` Or more general: ``` print(', '.join(map(str, A))) ```
In your case the code below will work:

```
print(', '.join(A)) 
```
34,013,185
Let's say that I have this list in Python:

```
A = ["(a,1)", "(b,2)", "(c,3)", "(d,4)"]
```

How can I print it out in the following format:

```
(a,1), (b,2), (c,3), (d,4) 
```

in one line, preferably without using a for loop?

Thanks in advance.
2015/12/01
[ "https://Stackoverflow.com/questions/34013185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5482492/" ]
When A is a list of str: ``` print(', '.join(A)) ``` Or more general: ``` print(', '.join(map(str, A))) ```
You can also use a regular expression:

```
>>> A = ["(a,1)", "(b,2)", "(c,3)", "(d,4)"]
>>> import re
>>> re.sub(r'[\[\]\"]', r"", ",".join(A))
'(a,1),(b,2),(c,3),(d,4)'
```
49,514,684
I'm relatively new to using sklearn and python for data analysis and am trying to run some linear regression on a dataset that I loaded from a `.csv` file. I have loaded my data into `train_test_split` without any issues, but when I try to fit my training data I receive an error `ValueError: Expected 2D array, got 1D array instead: ... Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.`. Error at `model = lm.fit(X_train, y_train)` Because of my freshness with working with these packages, I'm trying to determine if this is the result of not setting my imported csv to a pandas data frame before running the regression or if this has to do with something else. My CSV is in the format of: ``` Month,Date,Day of Week,Growth,Sunlight,Plants 7,7/1/17,Saturday,44,611,26 7,7/2/17,Sunday,30,507,14 7,7/5/17,Wednesday,55,994,25 7,7/6/17,Thursday,50,1014,23 7,7/7/17,Friday,78,850,49 7,7/8/17,Saturday,81,551,50 7,7/9/17,Sunday,59,506,29 ``` Here is how I set up the regression: ``` import numpy as np import pandas as pd from sklearn import linear_model from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt organic = pd.read_csv("linear-regression.csv") organic.columns Index(['Month', 'Date', 'Day of Week', 'Growth', 'Sunlight', 'Plants'], dtype='object') # Set the depedent (Growth) and independent (Sunlight) y = organic['Growth'] X = organic['Sunlight'] # Test train split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) print (X_train.shape, X_test.shape) print (y_train.shape, y_test.shape) (192,) (49,) (192,) (49,) lm = linear_model.LinearRegression() model = lm.fit(X_train, y_train) # Error pointing to an array with values from Sunlight [611, 507, 994, ...] ```
2018/03/27
[ "https://Stackoverflow.com/questions/49514684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061892/" ]
You are only using one feature, so it tells you what to do within the error: > > Reshape your data either using array.reshape(-1, 1) if your data has a single feature. > > > The data always has to be 2D in scikit-learn. (Don't forget the typo in `X = organic['Sunglight']`)
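A tiny demonstration of the reshape the error message asks for, using a throwaway NumPy array:

```py
import numpy as np

x = np.array([611, 507, 994])   # shape (3,) -- 1D, which sklearn rejects
X = x.reshape(-1, 1)            # shape (3, 1) -- one feature per row
print(X.shape)                  # (3, 1)
```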
Once you load the data into `train_test_split(X, y, test_size=0.2)`, it returns the Pandas Series `X_train` and `X_test` with dimensions `(192, )` and `(49, )`. As mentioned in the previous answers, sklearn expects matrices of shape `[n_samples, n_features]` as the `X_train` and `X_test` data. You can simply convert the Pandas Series `X_train` and `X_test` to Pandas DataFrames to change their dimensions to `(192, 1)` and `(49, 1)`.

```
lm = linear_model.LinearRegression() model = lm.fit(X_train.to_frame(), y_train) 
```
49,514,684
I'm relatively new to using sklearn and python for data analysis and am trying to run some linear regression on a dataset that I loaded from a `.csv` file. I have loaded my data into `train_test_split` without any issues, but when I try to fit my training data I receive an error `ValueError: Expected 2D array, got 1D array instead: ... Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.`. Error at `model = lm.fit(X_train, y_train)` Because of my freshness with working with these packages, I'm trying to determine if this is the result of not setting my imported csv to a pandas data frame before running the regression or if this has to do with something else. My CSV is in the format of: ``` Month,Date,Day of Week,Growth,Sunlight,Plants 7,7/1/17,Saturday,44,611,26 7,7/2/17,Sunday,30,507,14 7,7/5/17,Wednesday,55,994,25 7,7/6/17,Thursday,50,1014,23 7,7/7/17,Friday,78,850,49 7,7/8/17,Saturday,81,551,50 7,7/9/17,Sunday,59,506,29 ``` Here is how I set up the regression: ``` import numpy as np import pandas as pd from sklearn import linear_model from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt organic = pd.read_csv("linear-regression.csv") organic.columns Index(['Month', 'Date', 'Day of Week', 'Growth', 'Sunlight', 'Plants'], dtype='object') # Set the depedent (Growth) and independent (Sunlight) y = organic['Growth'] X = organic['Sunlight'] # Test train split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) print (X_train.shape, X_test.shape) print (y_train.shape, y_test.shape) (192,) (49,) (192,) (49,) lm = linear_model.LinearRegression() model = lm.fit(X_train, y_train) # Error pointing to an array with values from Sunlight [611, 507, 994, ...] ```
2018/03/27
[ "https://Stackoverflow.com/questions/49514684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061892/" ]
You just need to adjust your last lines to

```
lm = linear_model.LinearRegression()
model = lm.fit(X_train.values.reshape(-1,1), y_train)
```

and the model will fit. The reason for this is that the linear model from sklearn expects

> 
> X : numpy array or sparse matrix of shape [n\_samples,n\_features]
> 
> 

so our training data must be of shape [7,1] for the seven-row sample in this particular case.
Once you load the data into `train_test_split(X, y, test_size=0.2)`, it returns the Pandas Series `X_train` and `X_test` with dimensions `(192, )` and `(49, )`. As mentioned in the previous answers, sklearn expects matrices of shape `[n_samples, n_features]` as the `X_train` and `X_test` data. You can simply convert the Pandas Series `X_train` and `X_test` to Pandas DataFrames to change their dimensions to `(192, 1)` and `(49, 1)`.

```
lm = linear_model.LinearRegression() model = lm.fit(X_train.to_frame(), y_train) 
```
72,484,522
I am making a little math game, similar to [zeta mac](https://arithmetic.zetamac.com/game?key=a7220a92). Everything seems to be working well. Ideally I would like this console output to erase incorrect answers entered by the user, without reprinting the math problem again for them to solve. Is something like this possible? For example, I may prompt the user to answer "57 + 37 = " in the console, then if they type 24 (console would look like this "57 + 37 = 24", I would like the 24 to be erased, and for the "57 + 37 = " to remain, allowing the user to guess again, without the same equation having to be printed again on a line below. Here is the source code (sorry if its messy, I just started learning python): ``` import random import time def play(seconds): start_time = time.time() score = 0 while True: current_time = time.time() elapsed_time = current_time - start_time a = random.randint(2, 100) b = random.randint(2, 100) d = random.randint(2, 12) asmd = random.choice([1, 2, 3, 4]) if (asmd == 1): solve = a + b answer = input("%d + %d = " % (a, b)) elif (asmd == 2): if (a > b): solve = a - b answer = input("%d - %d = " % (a, b)) else: solve = b - a answer = input("%d - %d = " % (b, a)) elif (asmd == 3): solve = a * d answer = input("%d * %d = " % (a, d)) else: solve = d c = a * d answer = input("%d / %d = " % (c, a)) while (solve != int(answer)): answer = input("= ") score += 1 if elapsed_time > seconds: print("Time\'s up! Your score was %d." % (score)) break play(10) ```
2022/06/03
[ "https://Stackoverflow.com/questions/72484522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19244670/" ]
Keep the sleep duration in a variable and decrease it on every iteration of the loop:

```
def blink_green2():
    red1.on()
    sleep_duration = 0.5
    for i in range(5):
        green2.toggle()
        time.sleep(sleep_duration)
        green2.toggle()
        time.sleep(sleep_duration)
        sleep_duration -= 0.01
```
Gradually increasing the speed of blinking means that you need to decrease the sleep duration between the blinks. `range()` does not accept float arguments, so decrease the delay using the loop counter instead. Something like this:

```
def blink_green2():
    red1.on()
    for i in range(5):
        green2.toggle()
        time.sleep(0.5 - 0.1 * i)  # delay shrinks by 0.1 s each iteration
```
72,484,522
I am making a little math game, similar to [zeta mac](https://arithmetic.zetamac.com/game?key=a7220a92). Everything seems to be working well. Ideally I would like this console output to erase incorrect answers entered by the user, without reprinting the math problem again for them to solve. Is something like this possible? For example, I may prompt the user to answer "57 + 37 = " in the console, then if they type 24 (console would look like this "57 + 37 = 24", I would like the 24 to be erased, and for the "57 + 37 = " to remain, allowing the user to guess again, without the same equation having to be printed again on a line below. Here is the source code (sorry if its messy, I just started learning python): ``` import random import time def play(seconds): start_time = time.time() score = 0 while True: current_time = time.time() elapsed_time = current_time - start_time a = random.randint(2, 100) b = random.randint(2, 100) d = random.randint(2, 12) asmd = random.choice([1, 2, 3, 4]) if (asmd == 1): solve = a + b answer = input("%d + %d = " % (a, b)) elif (asmd == 2): if (a > b): solve = a - b answer = input("%d - %d = " % (a, b)) else: solve = b - a answer = input("%d - %d = " % (b, a)) elif (asmd == 3): solve = a * d answer = input("%d * %d = " % (a, d)) else: solve = d c = a * d answer = input("%d / %d = " % (c, a)) while (solve != int(answer)): answer = input("= ") score += 1 if elapsed_time > seconds: print("Time\'s up! Your score was %d." % (score)) break play(10) ```
2022/06/03
[ "https://Stackoverflow.com/questions/72484522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19244670/" ]
Keep the sleep duration in a variable and decrease it on every iteration of the loop:

```
def blink_green2():
    red1.on()
    sleep_duration = 0.5
    for i in range(5):
        green2.toggle()
        time.sleep(sleep_duration)
        green2.toggle()
        time.sleep(sleep_duration)
        sleep_duration -= 0.01
```
If you want the LED to blink faster, you can decrease the amount of time the task sleeps. As the loop progresses, the value of **i** increases, so subtract it from the base delay. Try this:

```py
def blink_green2():
    red1.on()
    for i in range(5):
        green2.toggle()
        time.sleep(0.5 - (i / 25))
        green2.toggle()
        time.sleep(0.5 - (i / 25))
```

Do note that you can increase the range number. You can also change what **i** is divided by to make the LED blink at a different rate. Call the function the same way as before, like this:

```py
while True:
    blink_green2()
```
72,484,522
I am making a little math game, similar to [zeta mac](https://arithmetic.zetamac.com/game?key=a7220a92). Everything seems to be working well. Ideally I would like this console output to erase incorrect answers entered by the user, without reprinting the math problem again for them to solve. Is something like this possible? For example, I may prompt the user to answer "57 + 37 = " in the console, then if they type 24 (console would look like this "57 + 37 = 24", I would like the 24 to be erased, and for the "57 + 37 = " to remain, allowing the user to guess again, without the same equation having to be printed again on a line below. Here is the source code (sorry if its messy, I just started learning python): ``` import random import time def play(seconds): start_time = time.time() score = 0 while True: current_time = time.time() elapsed_time = current_time - start_time a = random.randint(2, 100) b = random.randint(2, 100) d = random.randint(2, 12) asmd = random.choice([1, 2, 3, 4]) if (asmd == 1): solve = a + b answer = input("%d + %d = " % (a, b)) elif (asmd == 2): if (a > b): solve = a - b answer = input("%d - %d = " % (a, b)) else: solve = b - a answer = input("%d - %d = " % (b, a)) elif (asmd == 3): solve = a * d answer = input("%d * %d = " % (a, d)) else: solve = d c = a * d answer = input("%d / %d = " % (c, a)) while (solve != int(answer)): answer = input("= ") score += 1 if elapsed_time > seconds: print("Time\'s up! Your score was %d." % (score)) break play(10) ```
2022/06/03
[ "https://Stackoverflow.com/questions/72484522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19244670/" ]
Keep the sleep duration in a variable and decrease it on every iteration of the loop:

```
def blink_green2():
    red1.on()
    sleep_duration = 0.5
    for i in range(5):
        green2.toggle()
        time.sleep(sleep_duration)
        green2.toggle()
        time.sleep(sleep_duration)
        sleep_duration -= 0.01
```
Try this:

```py
def blink_green():
    green2.on()
    delay = 0.5
    for i in range(5):
        green2.toggle()
        time.sleep(delay)
        green2.toggle()
        time.sleep(delay)
        delay -= .1

while True:
    blink_green()
```

> 
> at an increasing rate
> 
> 

It starts with a 0.5 s delay and speeds up toward a 0 s delay before each restart.
72,565,884
I have the following code that handles sockets in Python:

```py
import socket

HOST = "127.0.0.1"
PORT = 4999
TIMEOUT = 1

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    # s.listen()
    conn, addr = s.accept()
    conn.settimeout(TIMEOUT)
    with conn:
        print(f"Connected by {addr}")
        while True:
            received = conn.recv(1024)
            print(received)
```

I open a terminal and run this Python script using `python3 server.py`. It works, but when the first connection is closed, my script ends. I would like to be able to handle multiple connections. In other words, when the first connection closes, I would like to wait for the second, third connection, etc. I tried to do this with `threading`. See below:

```py
import socket
import threading

HOST = "127.0.0.1"
PORT = 4999
TIMEOUT = 1

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    # s.listen()
    conn, addr = s.accept()
    conn.settimeout(TIMEOUT)
    threading.Lock()
    with conn:
        print(f"Connected by {addr}")
        while True:
            received = conn.recv(1024)
            print(received)
```

but the situation is the same. What am I doing wrong, please?
2022/06/09
[ "https://Stackoverflow.com/questions/72565884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14667788/" ]
You don't need threads to handle serial connections, and your example "with threads" didn't use threads at all. It just created (and didn't save) a `threading.Lock` object. Use a `while` loop to accept the next connection: ```py import socket HOST = '' PORT = 4999 with socket.socket() as s: s.bind((HOST, PORT)) s.listen() while True: conn, addr = s.accept() with conn: print(f"Connected by {addr}") while True: received = conn.recv(1024) if not received: break # connection was closed print(received) print(f"Disconnected by {addr}") ``` Manual 2-client demo: ```py >>> from socket import * >>> s=socket() >>> s.connect(('localhost',4999)) >>> s.sendall(b'hello') >>> s.close() >>> s=socket() >>> s.connect(('localhost',4999)) >>> s.sendall(b'goodbye') >>> s.close() ``` Server output: ```none Connected by ('127.0.0.1', 19639) b'hello' Disconnected by ('127.0.0.1', 19639) Connected by ('127.0.0.1', 19640) b'goodbye' Disconnected by ('127.0.0.1', 19640) ``` If you need parallel connections, start a thread for each connection: ```py import socket import threading HOST = '' # typically listen on any interface on the server. "127.0.0.1" will only accept connections on the *same* computer. PORT = 4999 def handler(conn,addr): with conn: print(f"Connected by {addr}") while True: received = conn.recv(1024) if not received: break # connection was closed print(f'{addr}: {received}') print(f"Disconnected by {addr}") with socket.socket() as s: s.bind((HOST, PORT)) s.listen() while True: conn, addr = s.accept() threading.Thread(target=handler, args=(conn,addr)).start() ``` Manual demo of 2 clients connecting at the same time: ```py >>> from socket import * >>> s=socket() >>> s.connect(('localhost',4999)) >>> s2=socket() >>> s2.connect(('localhost',4999)) >>> s.sendall(b'hello') >>> s2.sendall(b'goodbye') >>> s.close() >>> s2.close() ``` Server output: ```none Connected by ('127.0.0.1', 19650) Connected by ('127.0.0.1', 19651) ('127.0.0.1', 19650): b'hello' ('127.0.0.1', 19651): b'goodbye' Disconnected by ('127.0.0.1', 19650) Disconnected by ('127.0.0.1', 19651) ```
Something like this: the main thread receives data and appends it to a list, while a separate thread monitors the list and processes whatever arrives.

```
def receive_tcp_packets(self):
    # self.host, self.port, self.packet_size and self.recv_data are
    # assumed attributes of the surrounding server class
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server_socket:
        server_socket.bind((self.host, self.port))
        server_socket.listen()
        server_conn, address = server_socket.accept()  # Wait for connection
        with server_conn:
            print("Connected by {}".format(str(address)))
            # Wait to receive data
            while True:
                data = server_conn.recv(self.packet_size)
                if not data:
                    break
                # Just keep appending the data
                self.recv_data.append(data)

def run_server(self):
    while True:
        self.receive_tcp_packets()


# Run the thread that processes incoming data
thread = threading.Thread(target=check_for_data, args=(xxx,))
thread.start()

# Run the main function that waits for incoming data
server.run_server()
```
65,047,449
I am trying to fetch data from a MySQL database using python connector. I want to fetch records matching the `ID`. The ID has an integer data type. This is the code: ```py custID = int(input("Customer ID: ")) executeStr = "SELECT * FROM testrec WHERE ID=%d" cursor.execute(executeStr, custID) custData = cursor.fetchall() if bool(custData): pass else: print("Wrong Id") ``` But the code is raising an error which says: ``` mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%d' at line 1 ``` Any ideas why the integer placeholder `%d` is producing this?
2020/11/28
[ "https://Stackoverflow.com/questions/65047449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14631135/" ]
The string only accepts `%s` or `%(name)s`, not `%d`.

```
executeStr = "SELECT * FROM testrec WHERE ID=%s" 
```

and the variables are supposed to be in a tuple, although that varies by implementation depending on what version you are using; you might need to use:

```
cursor.execute(executeStr, (custID,)) 
```

More information here: <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html>

The [current version](https://github.com/mysql/mysql-connector-python/tree/403535dbdd45d894c780da23ba96b1a6c81e3866) supports variables of types: `int`, `long`, `float`, `string`, `unicode`, `bytes`, `bytearray`, `bool`, `none`, `datetime.datetime`, `datetime.date`, `datetime.time`, `time.struct_time`, `datetime.timedelta`, and `decimal.Decimal`
You should try `%s`, not `%d`. Even though the MySQL column type is int, the placeholder in the query string must be `%s`. Try this:

```
custID = int(input("Customer ID: "))
executeStr = 'SELECT * FROM testrec WHERE ID=%s' % (custID,)
cursor.execute(executeStr)
custData = cursor.fetchall()
if bool(custData):
    pass
else:
    print("Wrong Id")
```
65,047,449
I am trying to fetch data from a MySQL database using python connector. I want to fetch records matching the `ID`. The ID has an integer data type. This is the code: ```py custID = int(input("Customer ID: ")) executeStr = "SELECT * FROM testrec WHERE ID=%d" cursor.execute(executeStr, custID) custData = cursor.fetchall() if bool(custData): pass else: print("Wrong Id") ``` But the code is raising an error which says: ``` mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%d' at line 1 ``` Any ideas why the integer placeholder `%d` is producing this?
2020/11/28
[ "https://Stackoverflow.com/questions/65047449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14631135/" ]
The string only accepts `%s` or `%(name)s`, not `%d`.

```
executeStr = "SELECT * FROM testrec WHERE ID=%s" 
```

and the variables are supposed to be in a tuple, although that varies by implementation depending on what version you are using; you might need to use:

```
cursor.execute(executeStr, (custID,)) 
```

More information here: <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html>

The [current version](https://github.com/mysql/mysql-connector-python/tree/403535dbdd45d894c780da23ba96b1a6c81e3866) supports variables of types: `int`, `long`, `float`, `string`, `unicode`, `bytes`, `bytearray`, `bool`, `none`, `datetime.datetime`, `datetime.date`, `datetime.time`, `time.struct_time`, `datetime.timedelta`, and `decimal.Decimal`
Instead of this:

```py
custID = int(input("Customer ID: "))
executeStr = "SELECT * FROM testrec WHERE ID=%d"
```

You should write:

```py
custID = int(input("Customer ID: "))
executeStr = "SELECT * FROM testrec WHERE ID='" + str(custID) + "';"
```
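As the other answer notes, the parameterized form avoids building SQL by string concatenation entirely (and with it, quoting bugs and injection risks) - a short sketch:

```py
# The connector quotes/escapes the value itself; note the one-element tuple
cursor.execute("SELECT * FROM testrec WHERE ID = %s", (custID,))
custData = cursor.fetchall()
```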
50,807,953
I am trying to write a code that finds repeated characters in a word using python 3.x. For instance, ``` "abcde" -> 0 # no characters repeats more than once "aabbcde" -> 2 # 'a' and 'b' "aabBcde" -> 2 # 'a' occurs twice and 'b' twice (bandB) "indivisibility" -> 1 # 'i' occurs six times ``` Here is the code I have so far, ``` def duplicate_count(text): my_list=list(text) final=[] for i in my_list: if i in final: return(len(final[i])) else: return(0) ``` I want to grow in my python skills so I want to understand what's going wrong with it and why. Any ideas?
2018/06/12
[ "https://Stackoverflow.com/questions/50807953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7852656/" ]
I don't understand what your code is supposed to do: you never put anything in `final` and you `return` immediately. The below solution keeps two `set`s as it iterates over the input. One is every character we've seen, the other is every character that occurs more than once. We use `set`s because they have very fast membership checking, and because when you try to add an item to a `set` that already contains that item, nothing happens: ``` def duplicate_count(text): seen = set() more_than_one = set() for letter in text: if letter in seen: more_than_one.add(letter) else: seen.add(letter) return len(more_than_one) ```
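A quick sanity check of the function above against the examples from the question (note it is case-sensitive, so the `aabBcde` example would need a `text.lower()` first):

```py
print(duplicate_count("abcde"))           # 0
print(duplicate_count("aabbcde"))         # 2
print(duplicate_count("indivisibility"))  # 1
```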
> > I want to grow in my python skills so I want to understand what's going wrong with it and why. > > > Any ideas? > > > Well, for one thing, since you never actually put anything in "final" you will never have a case for which "i" is in "final"... Here's a function that will do what you want: ``` def do_stuff(test_str): my_list=list(test_str) my_dict={} count=0 for i in my_list: if i in my_dict: my_dict[i]+=1 else: my_dict[i]=1 for k in my_dict: if my_dict[k]>1: count+=1 return count ``` This function can be improved (by you).
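For comparison, a short sketch of the same count using `collections.Counter`; it lower-cases the input so that 'b' and 'B' count as the same character, matching the question's `aabBcde` example:

```py
from collections import Counter

def duplicate_count(text):
    # Tally each character (case-folded), then count those seen more than once
    return sum(1 for n in Counter(text.lower()).values() if n > 1)

print(duplicate_count("aabBcde"))  # 2
```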
50,721,735
I have both versions of Python. I have also installed Jupyter Notebook separately, and when I open Jupyter Notebook and go to the New section it shows Python 2. I want to use Python 3 for newer packages. So how can I upgrade the Python version?
2018/06/06
[ "https://Stackoverflow.com/questions/50721735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8722059/" ]
Creating an instance that uses a service account requires you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the [iam.serviceAccountUser](https://cloud.google.com/compute/docs/access/iam#the_serviceaccountuser_role) role to your service account (either on the entire project or on the specific service account you want to be able to create instances with).
### Find out who you are first * if you are using Web UI: what **email** address did you use to login? * if you are using local `gcloud` or `terraform`: find the json file that contains your credentials for gcloud (often named similarly to `myproject*.json`) and see if it contains the **email**: `grep client_email myproject*.json` ### GCP IAM change 1. Go to <https://console.cloud.google.com> 2. Go to IAM 3. Find your **email** address 4. Member -> Edit -> Add Another Role -> type in the role name `Service Account User` -> Add (You can narrow it down with a Condition, but lets keep it simple for a while).
50,721,735
I have both versions of Python. I have also installed Jupyter Notebook separately, and when I open Jupyter Notebook and go to the New section it shows Python 2. I want to use Python 3 for newer packages. So how can I upgrade the Python version?
2018/06/06
[ "https://Stackoverflow.com/questions/50721735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8722059/" ]
Creating an instance that uses a service account requires you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the [iam.serviceAccountUser](https://cloud.google.com/compute/docs/access/iam#the_serviceaccountuser_role) role to your service account (either on the entire project or on the specific service account you want to be able to create instances with).
Make sure that NAME\_OF\_SERVICE\_ACCOUNT is a service account from the current project. If you change the project ID and don't change NAME\_OF\_SERVICE\_ACCOUNT, you will encounter this error. This can be checked in the Google Console -> IAM & Admin -> IAM. Look for the service account name ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct. Each project has different numbers in this service name.
50,721,735
I have both versions of Python. I have also installed Jupyter Notebook separately, and when I open Jupyter Notebook and go to the New section it shows Python 2. I want to use Python 3 for newer packages. So how can I upgrade the Python version?
2018/06/06
[ "https://Stackoverflow.com/questions/50721735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8722059/" ]
### Find out who you are first * if you are using Web UI: what **email** address did you use to login? * if you are using local `gcloud` or `terraform`: find the json file that contains your credentials for gcloud (often named similarly to `myproject*.json`) and see if it contains the **email**: `grep client_email myproject*.json` ### GCP IAM change 1. Go to <https://console.cloud.google.com> 2. Go to IAM 3. Find your **email** address 4. Member -> Edit -> Add Another Role -> type in the role name `Service Account User` -> Add (You can narrow it down with a Condition, but lets keep it simple for a while).
Make sure that NAME\_OF\_SERVICE\_ACCOUNT is a service account from the current project. If you change the project ID and don't change NAME\_OF\_SERVICE\_ACCOUNT, you will encounter this error. This can be checked in the Google Console -> IAM & Admin -> IAM. Look for the service account name ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct. Each project has different numbers in this service name.
22,883,337
Hey all, I am new to Python programming and I have noticed some code which is really confusing me.

```
import collections

s = 'mississippi'
d = collections.defaultdict(int)
for k in s:
    d[k] += 1

d.items()
```

The thing I need to know is the use of `d[k]` here. I know `k` is the value in the string `s`, but I didn't understand what `d[k]` returns. In `defaultdict(int)` a new value is created if the dictionary has no value for the key. Please help me; any help would be appreciated. Thanks.
2014/04/05
[ "https://Stackoverflow.com/questions/22883337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3501595/" ]
Dictionaries in Python are "mapping" types. (This applies to both regular `dict` dictionaries and the more specialized variations like `defaultdict`.) A mapping takes a key and "maps" it to a value. The syntax `d[k]` is used to look up the key `k` in the dictionary `d`. Depending on where it appears in your code, it can have slightly different semantics (either returning the existing value for the key or setting a new one). In your example, you're using `d[k] += 1`, which increments the value under key `k` in the dictionary. Since integers are immutable, it actually breaks out into `d[k] = d[k] + 1`. The right side `d[k]` does a look up of the value in the dictionary. Then it adds one and, using the `d[k]` on the left side, assigns the result into the dictionary as a new value. `defaultdict` changes things a bit in that keys that don't yet exist in the dictionary are treated as if they did exist. The argument to its constructor is a "factory" object which will be called to create the new values when an unknown key is requested.
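A minimal demonstration of the factory behavior described above:

```py
from collections import defaultdict

d = defaultdict(int)   # int() returns 0, so unknown keys start at 0
d['m'] += 1            # the lookup creates d['m'] = 0, then assigns 0 + 1
print(d['z'])          # 0 -- even just reading a missing key inserts it
print(dict(d))         # {'m': 1, 'z': 0}
```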
Here you go:

```
d[key]
    Return the item of d with key key. Raises a KeyError if key is not in the map.
```

Straight from the [Python docs](https://docs.python.org/2/library/stdtypes.html), under mapping types. Go to <https://docs.python.org/> and bookmark it. It will become your best friend.
45,471,477
I need the sacred package for a new code base I downloaded. It requires sacred. <https://pypi.python.org/pypi/sacred>

```
conda install sacred
```

fails with

```
PackageNotFoundError: Package missing in current osx-64 channels:
  - sacred
```

The instructions on the package site only explain how to install with pip. What do you do in this case?
2017/08/02
[ "https://Stackoverflow.com/questions/45471477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1058511/" ]
That package is not available as a conda package at all. You can search for packages on anaconda.org: <https://anaconda.org/search?q=sacred> You can see the type of package in the 4th column. Other Python packages may be available as conda packages, for instance, NumPy: <https://anaconda.org/search?q=numpy> As you can see, the conda package numpy is available from a number of different channels (the channel is the name before the slash). If you wanted to install a package from a different channel, you can add the option to the install/create command with the `-c`/`--channel` option, or you can add the channel to your configuration `conda config --add channels channel-name`. If no conda package exists for a Python package, you can either install via pip (if available) or [build your own conda package](https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/building-conda-packages.html). This isn't usually too difficult to do for pure Python packages, especially if one can use `skeleton` to build a recipe from a package on PyPI.
The same issue happened to me before. If your system's default Python environment is Conda, you can download the files from <https://pypi.python.org/pypi/sacred#downloads> and install manually with:

```
pip install C:/Desktop/some-file.whl
```
60,699,328
I'm getting the following error when trying to install pychalk: ``` pip install pychalk --user ``` ``` FileNotFoundError: [Errno 2] No such file or directory: 'c:\\users\\jokzc\\appdata\\roaming\\python\\python38\\site-packages\\MarkupSafe-1.1.1.dist-info\\METADATA' ```
2020/03/16
[ "https://Stackoverflow.com/questions/60699328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12868860/" ]
You are using Python 3.8. > > ...python\\python38\\site-packages\... > > > The [last release for pychalk](https://pypi.org/project/pychalk/#history) was 2018, before the [releases for Python3.8](https://www.python.org/dev/peps/pep-0569/#schedule) came out. The package is probably not updated yet to support Python 3.8. You will have to downgrade your Python version. From the repo's README on [**Testing**](https://github.com/anthonyalmarza/chalk#testing), it seems they only tested on 2.7, 3.5, 3.6. I tried it and it seems it can also install successfully on Python 3.7. So... * Use Python3.7 * [Report the issue](https://github.com/anthonyalmarza/chalk/issues) to the package authors so that they can support Python 3.8
You just need to delete the opencv-python folder from the path (**C:\Users\rahul\AppData\Local\Programs\Python\Python37\Lib\site-packages\opencv\_python-3.4.2.17.dist-info**) and then install that library again. You can see the pictures below: the first shows the error messages, and the second one shows no errors.

[first image](https://i.stack.imgur.com/PDhV3.png)
[second image](https://i.stack.imgur.com/SZe18.png)