| qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17 to 26k chars) | response_k (string, 26 to 26k chars) |
|---|---|---|---|---|---|
48,783,650
|
I have a Python list `l`. The first few elements of the list look like this:
```
[751883787]
[751026090]
[752575831]
[751031278]
[751032392]
[751027358]
[751052118]
```
I want to convert this list to a pandas.core.series.Series with 2 leading zeros. My final outcome will look like this:
```
00751883787
00751026090
00752575831
00751031278
00751032392
00751027358
00751052118
```
I'm working with Python 3.x in a Windows environment. Can you suggest how to do this?
Also, my list contains around 2,000,000 elements.
|
2018/02/14
|
[
"https://Stackoverflow.com/questions/48783650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9300211/"
] |
You can try:
```
import pandas as pd

lst = [121, 123, 125, 145]
series = '00' + pd.Series(lst).astype(str)
print(series)
```
output:
```
0 00121
1 00123
2 00125
3 00145
dtype: object
```
|
Both the given answers are useful ... below is a summarised version:
```
import pandas as pd
mylist = [751883787,751026090,752575831,751031278]
mysers = pd.Series(mylist).astype(str).str.zfill(11)
print (mysers)
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
dtype: object
```
Another way around is to cast the dtype of the series to str using astype and use the vectorised str.zfill to pad with 00, though using a lambda, as below, may be easier to read:
```
import pandas as pd
mylist = pd.DataFrame([751883787,751026090,752575831,751031278], columns=['coln'])
result = mylist.coln.apply(lambda x: str(int(x)).zfill(11))
print(result)
```
Below is the result:
```
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
Name: coln, dtype: object
```
|
48,783,650
|
I have a Python list `l`. The first few elements of the list look like this:
```
[751883787]
[751026090]
[752575831]
[751031278]
[751032392]
[751027358]
[751052118]
```
I want to convert this list to a pandas.core.series.Series with 2 leading zeros. My final outcome will look like this:
```
00751883787
00751026090
00752575831
00751031278
00751032392
00751027358
00751052118
```
I'm working with Python 3.x in a Windows environment. Can you suggest how to do this?
Also, my list contains around 2,000,000 elements.
|
2018/02/14
|
[
"https://Stackoverflow.com/questions/48783650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9300211/"
] |
This is one way.
```
from itertools import chain; concat = chain.from_iterable
import pandas as pd
lst = [[751883787],
[751026090],
[752575831],
[751031278]]
pd.DataFrame({'a': pd.Series([str(i).zfill(11) for i in concat(lst)])})
a
0 00751883787
1 00751026090
2 00752575831
3 00751031278
```
Some benchmarking, relevant since your dataframe is large:
```
from itertools import chain; concat = chain.from_iterable
import pandas as pd
lst = [[751883787],
[751026090],
[752575831],
[751031278],
[751032392],
[751027358],
[751052118]]*300000
%timeit pd.DataFrame(lst, columns=['a'])['a'].astype(str).str.zfill(11)
# 1 loop, best of 3: 7.88 s per loop
%timeit pd.DataFrame({'a': pd.Series([str(i).zfill(11) for i in concat(lst)])})
# 1 loop, best of 3: 2.06 s per loop
```
|
Both the given answers are useful ... below is a summarised version:
```
import pandas as pd
mylist = [751883787,751026090,752575831,751031278]
mysers = pd.Series(mylist).astype(str).str.zfill(11)
print (mysers)
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
dtype: object
```
Another way around is to cast the dtype of the series to str using astype and use the vectorised str.zfill to pad with 00, though using a lambda, as below, may be easier to read:
```
import pandas as pd
mylist = pd.DataFrame([751883787,751026090,752575831,751031278], columns=['coln'])
result = mylist.coln.apply(lambda x: str(int(x)).zfill(11))
print(result)
```
Below is the result:
```
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
Name: coln, dtype: object
```
|
48,783,650
|
I have a Python list `l`. The first few elements of the list look like this:
```
[751883787]
[751026090]
[752575831]
[751031278]
[751032392]
[751027358]
[751052118]
```
I want to convert this list to a pandas.core.series.Series with 2 leading zeros. My final outcome will look like this:
```
00751883787
00751026090
00752575831
00751031278
00751032392
00751027358
00751052118
```
I'm working with Python 3.x in a Windows environment. Can you suggest how to do this?
Also, my list contains around 2,000,000 elements.
|
2018/02/14
|
[
"https://Stackoverflow.com/questions/48783650",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9300211/"
] |
If you have nested `list`s, first use the `DataFrame` constructor with a column name, then cast to `string` and finally pad with `0` using [`Series.str.zfill`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.zfill.html):
```
lst = [[751883787],
[751026090],
[752575831],
[751031278],
[751032392],
[751027358],
[751052118]]
s = pd.DataFrame(lst, columns=['a'])['a'].astype(str).str.zfill(11)
print (s)
0 00751883787
1 00751026090
2 00752575831
3 00751031278
4 00751032392
5 00751027358
6 00751052118
Name: a, dtype: object
```
---
If there is one `list` only:
```
lst = [751883787,
751026090,
752575831,
751031278,
751032392,
751027358,
751052118]
s = pd.Series(lst).astype(str).str.zfill(11)
print (s)
0 00751883787
1 00751026090
2 00752575831
3 00751031278
4 00751032392
5 00751027358
6 00751052118
dtype: object
```
|
Both the given answers are useful ... below is a summarised version:
```
import pandas as pd
mylist = [751883787,751026090,752575831,751031278]
mysers = pd.Series(mylist).astype(str).str.zfill(11)
print (mysers)
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
dtype: object
```
Another way around is to cast the dtype of the series to str using astype and use the vectorised str.zfill to pad with 00, though using a lambda, as below, may be easier to read:
```
import pandas as pd
mylist = pd.DataFrame([751883787,751026090,752575831,751031278], columns=['coln'])
result = mylist.coln.apply(lambda x: str(int(x)).zfill(11))
print(result)
```
Below is the result:
```
./test
0 00751883787
1 00751026090
2 00752575831
3 00751031278
Name: coln, dtype: object
```
|
42,345,745
|
I am using python-social-auth, but when I run makemigrations and migrate, the "social\_auth-\*" tables are not created.
My settings.py looks like this:
```
INSTALLED_APPS += (
'social.apps.django_app.default',
)
AUTHENTICATION_BACKENDS += (
'social.backends.facebook.FacebookOAuth2',
'social.backends.google.GoogleOAuth2',
'social.backends.twitter.TwitterOAuth',
)
SOCIAL_AUTH_USER_MODEL = AUTH_USER_MODEL
# Remove this if there are problems with authentication
SOCIAL_AUTH_PIPELINE = (
'social.pipeline.social_auth.social_details',
'social.pipeline.social_auth.social_uid',
'social.pipeline.social_auth.auth_allowed',
'social.pipeline.social_auth.social_user',
'social.pipeline.user.get_username',
'social.pipeline.social_auth.associate_by_email', # <--- enable this one to match users by email address
'social.pipeline.user.create_user',
'social.pipeline.social_auth.associate_user',
'social.pipeline.social_auth.load_extra_data',
'social.pipeline.user.user_details',
)
from sharadar.soc_auth_config import *
```
The same setup works on another machine without any problem. On this machine I receive:
```
Operations to perform:
Apply all migrations: admin, auth, contenttypes, easy_thumbnailsguardian, main, myauth, sessions, social_auth
Running migrations:
Applying myauth.0002_auto_20170220_1408... OK
```
social\_auth is included here.
But on a new computer I always receive:
```
Exception Value:
relation "social_auth_usersocialauth" does not exist
LINE 1: ...er"."bio", "myauth_shruser"."email_verified" FROM "social_au...
```
This happens when using Google auth in my running Django app.
social\_auth is not included when I run migrate:
```
Operations to perform:
Apply all migrations: admin, auth, contenttypes, easy_thumbnails, guardian, myauth, sessions
Running migrations:
No migrations to apply.
```
Any help is appreciated.
Kind regards
Michael
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42345745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7007547/"
] |
I had to migrate to social-auth-core as described in this document:
[Migrating from python-social-auth to split social](https://github.com/omab/python-social-auth/blob/master/MIGRATING_TO_SOCIAL.md)
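For reference, the migration mainly amounts to renaming the old `social.*` module paths to the split packages. A rough sketch of the updated settings, based on that guide (the exact names depend on your installed `social-auth-app-django`/`social-core` versions, so verify them against your project):
```
INSTALLED_APPS += (
    'social_django',  # replaces 'social.apps.django_app.default'
)
AUTHENTICATION_BACKENDS += (
    'social_core.backends.facebook.FacebookOAuth2',  # was 'social.backends.facebook.FacebookOAuth2'
    'social_core.backends.google.GoogleOAuth2',
    'social_core.backends.twitter.TwitterOAuth',
)
SOCIAL_AUTH_PIPELINE = (
    # every 'social.pipeline.*' entry becomes 'social_core.pipeline.*'
    'social_core.pipeline.social_auth.social_details',
    'social_core.pipeline.social_auth.social_uid',
    'social_core.pipeline.social_auth.auth_allowed',
    'social_core.pipeline.social_auth.social_user',
    'social_core.pipeline.user.get_username',
    'social_core.pipeline.social_auth.associate_by_email',
    'social_core.pipeline.user.create_user',
    'social_core.pipeline.social_auth.associate_user',
    'social_core.pipeline.social_auth.load_extra_data',
    'social_core.pipeline.user.user_details',
)
```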
Then everything works fine. But after these problems I am thinking about switching to all-auth.
Thanks for any help.
|
Strange... when entering the admin interface I receive the exception:
```
No installed app with label 'social_django'.
```
But later in the error report I have:
```
INSTALLED_APPS
['django.contrib.admin',
'django.contrib.auth',
.....
'myauth.apps.MyauthConfig',
'main.apps.MainConfig',
'social.apps.django_app.default']
```
pip3 search for python-social-auth gives:
```
python-social-auth (0.3.6)
INSTALLED: 0.3.6 (latest)
```
I don't know what is going on here.
Regards
Michael
|
40,461,824
|
I have an Adafruit Feather Huzzah ESP8266 and want to load a Lua script onto it.
The script is from [this Adafruit tutorial](https://learn.adafruit.com/manually-bridging-mqtt-mosquitto-to-adafruit-io/programming-the-esp8266) and I changed only the WiFi and MQTT connection settings.
I followed the instructions at
<https://github.com/4refr0nt/luatool#run>
and used the following command:
```
python ./luatool.py --port /dev/tty.SLAB_USBtoUART --src LightSensor-master/init.lua --dest init.lua --verbose
```
I get the following error:
```
Upload starting
Stage 1. Deleting old file from flash memory
->file.open("init.lua", "w")Traceback (most recent call last):
File "./luatool.py", line 272, in <module>
transport.writeln("file.open(\"" + args.dest + "\", \"w\")\r")
File "./luatool.py", line 111, in writeln
self.performcheck(data)
File "./luatool.py", line 61, in performcheck
raise Exception('No proper answer from MCU')
Exception: No proper answer from MCU
```
What is the error here, and what am I doing wrong?
I tried flashing the NodeMCU dev version to the Feather. This didn't change the problem. I also read some advice to stabilize the power supply and added a battery to the Feather, also without success.
|
2016/11/07
|
[
"https://Stackoverflow.com/questions/40461824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2427707/"
] |
Adding a `--delay 0.6` option to the `luatool.py` call solved the problem for me:
```
python ./luatool.py --delay 0.6 --port /dev/tty.SLAB_USBtoUART --src LightSensor-master/init.lua --dest init.lua --verbose
```
I found this solution because I read some advice that the python script might try to talk to the Feather faster than the Feather can answer.
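For illustration only, here is a minimal pyserial sketch of that idea: pacing writes to the board's Lua prompt so it has time to respond. It is not part of luatool, and the port name, baud rate and test commands are assumptions to adapt to your setup:
```
import time
import serial  # pip install pyserial

PORT = "/dev/tty.SLAB_USBtoUART"  # assumed: same port as in the question
BAUD = 115200                     # assumed: a common NodeMCU baud rate
DELAY = 0.6                       # pause between writes, like --delay 0.6

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    for cmd in [b"print(node.heap())\r\n", b"print(node.chipid())\r\n"]:
        ser.write(cmd)            # send one command to the Lua prompt
        time.sleep(DELAY)         # give the MCU time to answer
        print(ser.read(ser.in_waiting or 1).decode(errors="replace"))
```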
|
I had the same problem. I detached the cable, attached it again and ran the command:
```
sudo python esp8266/luatool.py --delay 0.6 --port /dev/ttyUSB0 --src init.lua --dest init.lua --restart --verbose
```
The first time it failed, but executing the same command again worked for me.
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
Debian-based images need only `pip` to install the packages, because prebuilt `.whl` files are available for them:
```
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
```
WHL format was developed as a quicker and more reliable method of installing Python software than re-building from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation.
The `pandas` and `numpy` wheel packages are not supported on Alpine-based images: the prebuilt `manylinux` wheels are linked against glibc, while Alpine uses musl. That's why, when we install them with `pip` during the image build, they are always compiled from source on Alpine:
```
Downloading pandas-0.22.0.tar.gz (11.3MB)
Downloading numpy-1.14.1.zip (4.9MB)
```
and we can see the following inside the container during the image build:
```
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/sh -c pip install pandas
7 root 0:04 {pip} /usr/local/bin/python /usr/local/bin/pip install pandas
21 root 0:07 /usr/local/bin/python -c import setuptools, tokenize;__file__='/tmp/pip-build-en29h0ak/pandas/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n
496 root 0:00 sh
660 root 0:00 /bin/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/pri
661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/private -Inump
662 root 0:00 /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1 -quiet -I build/src.linux-x86_64-3.6/numpy/core/src/private -I numpy/core/include -I build/src.linux-x86_64-3.6/numpy/core/includ
663 root 0:00 ps aux
```
If we modify the `Dockerfile` a little:
```
FROM python:3.6.4-alpine3.7
RUN apk add --no-cache g++ wget
RUN wget https://pypi.python.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
```
we get the following error:
```
Step 4/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
---> Running in 0faea63e2bda
pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
The command '/bin/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1
```
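To see why the wheel is rejected, you can list the tags the interpreter actually accepts (newer pip versions print the same list via `pip debug --verbose`); on Alpine the `manylinux` tags are simply absent because of musl. A minimal sketch, assuming the `packaging` library is installed:
```
# Check whether the manylinux1 pandas wheel matches any tag this interpreter accepts.
from packaging.tags import sys_tags

accepted = {str(tag) for tag in sys_tags()}
wheel_tag = "cp36-cp36m-manylinux1_x86_64"  # tag taken from the wheel file name

print(wheel_tag in accepted)  # True on the Debian image above, False on Alpine/musl
print(sorted(t for t in accepted if "manylinux" in t or "musllinux" in t)[:5])
```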
Unfortunately, the only way to install `pandas` on an Alpine image is to wait until the build finishes.
Of course, if you want to use an Alpine image with `pandas` in CI, for example, the best way is to compile it once, push it to a registry and use it as a base image for your needs.
**EDIT:**
If you want to use the Alpine image with `pandas` you can pull my [nickgryg/alpine-pandas](https://hub.docker.com/r/nickgryg/alpine-pandas/) docker image. It is a Python image with pre-compiled `pandas` on the Alpine platform. It should save you time.
|
In this case Alpine is not the best solution; switch from Alpine to slim. Change this:
`FROM python:3.8.3-alpine`
to this:
`FROM python:3.8.3-slim`
In my case it was resolved with this small change.
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
**ANSWER: AS OF 3/9/2020, FOR PYTHON 3, IT STILL DOESN'T!**
Here is a complete working Dockerfile:
```
FROM python:3.7-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
```
The build is very sensitive to the exact python and alpine version numbers - getting these wrong seems to provoke Max Levy's error `so:libpython3.7m.so.1.0 (missing)` - but the above does now work for me.
My updated Dockerfile is available at <https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b>
---
[Earlier Update:]
**ANSWER: IT DOESN'T!**
In any Alpine Dockerfile you can simply do\*
```
RUN apk add py2-numpy@community py2-scipy@community py-pandas@edge
```
This is because `numpy`, `scipy` and now `pandas` are all available prebuilt on `alpine`:
[https://pkgs.alpinelinux.org/packages?name=\*numpy](https://pkgs.alpinelinux.org/packages?name=*numpy)
[https://pkgs.alpinelinux.org/packages?name=\*scipy&branch=edge](https://pkgs.alpinelinux.org/packages?name=*scipy&branch=edge)
[https://pkgs.alpinelinux.org/packages?name=\*pandas&branch=edge](https://pkgs.alpinelinux.org/packages?name=*pandas&branch=edge)
**One way to avoid rebuilding every time, or using a Docker layer, is to use a prebuilt, native Alpine Linux/`.apk` package, e.g.**
<https://github.com/sgerrand/alpine-pkg-py-pandas>
<https://github.com/nbgallery/apks>
You can build these `.apk`s once and use them wherever in your Dockerfile you like :)
This also saves you having to bake everything else into the Docker image before the fact - i.e. the flexibility to pre-build any Docker image you like.
PS I have put a Dockerfile stub at <https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b> that shows roughly how to build the image. These include the important steps (\*):
```
RUN echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --update --no-cache libgfortran
```
|
Alpine takes a lot of time to install pandas and the image size is also huge. I tried the python:3.8-slim-buster version of the Python base image. The image build was very fast and the image size was less than half that of the Alpine Python Docker image.
<https://github.com/dguyhasnoname/k8s-cluster-checker/blob/master/Dockerfile>
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
`pandas` is considered a community supported package, so the answers pointing to `edge/testing` are not going to work as Alpine does not officially support pandas as a core package (it still works, it's just not supported by the core Alpine developers).
Try this Dockerfile:
```
FROM python:3.8-alpine
RUN echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories \
&& apk add py3-pandas@community
ENV PYTHONPATH="/usr/lib/python3.8/site-packages"
```
This works for the vanilla Alpine image too, using `FROM alpine:3.12`.
---
Update: thanks to @cegprakash for raising the question about how to work with this setup when you also have a `requirements.txt` file that must be satisfied inside the container.
I added one line to the Dockerfile snippet to export the `PYTHONPATH` variable into the container runtime. If you do this, it won't matter whether `pandas` or `numpy` are included in the requirements file or not (provided they are pegged to the same version that was installed via `apk`).
The reason this is needed is that `apk` installs the `py3-pandas@community` package under `/usr/lib`, but that location is not on the default `PYTHONPATH` that `pip` checks before installing new packages. If we don't include this step to add it, `pip` and `python` will not find the package, and `pip` will try to download and install it under `/usr/local`, which is what we're trying to avoid.
And given that we *really* want to make sure that `pip` doesn't try to install `pandas`, I would suggest to **not** include `pandas` or `numpy` in the `requirements.txt` file if you've already installed them with `apk` using the above method. It's just a little extra insurance that things will go as intended.
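As a quick sanity check inside the container (a minimal sketch, assuming the paths from the snippet above), you can confirm that Python picks up the `apk`-installed copy from `/usr/lib` rather than a pip copy under `/usr/local`:
```
# Verify which pandas is imported and that the apk install location is on the path.
import sys
import pandas

print(pandas.__version__)  # should match the version installed via apk
print(pandas.__file__)     # expect a path under /usr/lib/python3.8/site-packages
print([p for p in sys.path if p.endswith("site-packages")])
```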
|
Alpine takes a lot of time to install pandas and the image size is also huge. I tried the python:3.8-slim-buster version of the Python base image. The image build was very fast and the image size was less than half that of the Alpine Python Docker image.
<https://github.com/dguyhasnoname/k8s-cluster-checker/blob/master/Dockerfile>
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
Debian-based images need only `pip` to install the packages, because prebuilt `.whl` files are available for them:
```
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
```
WHL format was developed as a quicker and more reliable method of installing Python software than re-building from source code every time. WHL files only have to be moved to the correct location on the target system to be installed, whereas a source distribution requires a build step before installation.
The `pandas` and `numpy` wheel packages are not supported on Alpine-based images: the prebuilt `manylinux` wheels are linked against glibc, while Alpine uses musl. That's why, when we install them with `pip` during the image build, they are always compiled from source on Alpine:
```
Downloading pandas-0.22.0.tar.gz (11.3MB)
Downloading numpy-1.14.1.zip (4.9MB)
```
and we can see the following inside the container during the image build:
```
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/sh -c pip install pandas
7 root 0:04 {pip} /usr/local/bin/python /usr/local/bin/pip install pandas
21 root 0:07 /usr/local/bin/python -c import setuptools, tokenize;__file__='/tmp/pip-build-en29h0ak/pandas/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n
496 root 0:00 sh
660 root 0:00 /bin/sh -c gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/pri
661 root 0:00 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -Ibuild/src.linux-x86_64-3.6/numpy/core/src/private -Inump
662 root 0:00 /usr/libexec/gcc/x86_64-alpine-linux-musl/6.4.0/cc1 -quiet -I build/src.linux-x86_64-3.6/numpy/core/src/private -I numpy/core/include -I build/src.linux-x86_64-3.6/numpy/core/includ
663 root 0:00 ps aux
```
If we modify the `Dockerfile` a little:
```
FROM python:3.6.4-alpine3.7
RUN apk add --no-cache g++ wget
RUN wget https://pypi.python.org/packages/da/c6/0936bc5814b429fddb5d6252566fe73a3e40372e6ceaf87de3dec1326f28/pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
```
we get the following error:
```
Step 4/4 : RUN pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl
---> Running in 0faea63e2bda
pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl is not a supported wheel on this platform.
The command '/bin/sh -c pip install pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl' returned a non-zero code: 1
```
Unfortunately, the only way to install `pandas` on an Alpine image is to wait until the build finishes.
Of course, if you want to use an Alpine image with `pandas` in CI, for example, the best way is to compile it once, push it to a registry and use it as a base image for your needs.
**EDIT:**
If you want to use the Alpine image with `pandas` you can pull my [nickgryg/alpine-pandas](https://hub.docker.com/r/nickgryg/alpine-pandas/) docker image. It is a Python image with pre-compiled `pandas` on the Alpine platform. It should save you time.
|
This worked for me:
```
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
```
Most of the code here is from the answer of [jtlz2](https://stackoverflow.com/a/50443531/5381704) from this same thread and [Faylixe](https://stackoverflow.com/a/57485724/5381704) from another thread.
It turns out that prebuilt versions of numpy and pandas are available as the Alpine packages `py3-numpy` and `py3-pandas`, but they don't get installed on the path from which Python reads imports by default. Therefore you need to add the `ENV`. Also be mindful of the Alpine version.
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
This worked for me:
```
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
```
Most of the code here is from the answer of [jtlz2](https://stackoverflow.com/a/50443531/5381704) from this same thread and [Faylixe](https://stackoverflow.com/a/57485724/5381704) from another thread.
It turns out that prebuilt versions of numpy and pandas are available as the Alpine packages `py3-numpy` and `py3-pandas`, but they don't get installed on the path from which Python reads imports by default. Therefore you need to add the `ENV`. Also be mindful of the Alpine version.
|
The following Dockerfile worked for me to install pandas, among other dependencies as listed below.
### python:3.10-alpine Dockerfile
```
# syntax=docker/dockerfile:1
FROM python:3.10-alpine as base
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc g++ libc-dev linux-headers postgresql-dev build-base \
&& apk add libffi-dev
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
```
### pyproject.toml dependencies
```
python = "^3.10"
Django = "^3.2.9"
djangorestframework = "^3.12.4"
PyYAML = ">=5.3.0,<6.0.0"
Markdown = "^3.3.6"
uritemplate = "^4.1.1"
install = "^1.3.5"
drf-spectacular = "^0.21.0"
django-extensions = "^3.1.5"
django-filter = "^21.1"
django-cors-headers = "^3.10.1"
httpx = "^0.22.0"
channels = "^3.0.4"
daphne = "^3.0.2"
whitenoise = "^6.2.0"
djoser = "^2.1.0"
channels-redis = "^3.4.0"
pika = "^1.2.1"
backoff = "^2.1.2"
psycopg2-binary = "^2.9.3"
pandas = "^1.5.0"
```
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py build take around 70x longer than on the Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
I have solved the installation with some additional changes:
### Requirements
* Migrate from `python3.8-alpine` to `python3.10-alpine`:
```bash
docker pull python:3.10-alpine
```
>
> #### Important!
>
>
> I had to migrate because when I installed `py3-pandas`, the package was installed for `python3.10`, not for the version I was actually using (`python3.8`).
>
>
> To figure out where the libraries of a package were installed, you can check that with the following command:
>
>
>
> ```bash
> apk info -L py3-pandas
>
> ```
>
>
* Do not install the **`backports.zoneinfo`** package on **`python3.9`** or later (I had to add a condition in **requirements.txt** so that the package is only installed for versions lower than **`3.9`**).
```
backports.zoneinfo==0.2.1;python_version<"3.9"
```
### Installation
After the previous changes, I proceeded to install `pandas` as follows:
* Add 3 additional repositories to **`/etc/apk/repositories`** (**the repositories can vary based on the version of your distribution**), reference [here](https://wiki.alpinelinux.org/wiki/Package_management#Repository_pinning):
```bash
for x in $(echo "main community testing"); \
do echo "https://dl-cdn.alpinelinux.org/alpine/edge/${x}" >> /etc/apk/repositories; \
done
```
* Validate the content of the file **`/etc/apk/repositories`**:
```bash
$ cat /etc/apk/repositories
https://dl-cdn.alpinelinux.org/alpine/v3.16/main
https://dl-cdn.alpinelinux.org/alpine/v3.16/community
https://dl-cdn.alpinelinux.org/alpine/edge/main
https://dl-cdn.alpinelinux.org/alpine/edge/community
https://dl-cdn.alpinelinux.org/alpine/edge/testing
```
* Install **`pandas`** (`numpy` is installed automatically as a dependency of `pandas`):
```bash
sudo apk update && sudo apk add py3-pandas
```
* Set the environment variable **`PYTHONPATH`**:
```bash
export PYTHONPATH=/usr/lib/python3.10/site-packages/
```
* Validate that the packages can be imported (in my case I tested it with django):
```bash
python manage.py shell
```
```py
import pandas as pd
import numpy as np
technologies = ['Spark','Pandas','Java','Python', 'PHP']
fee = [25000,20000,15000,15000,18000]
duration = ['50 Days','35 Days',np.nan,'30 Days', '30 Days']
discount = [2000,1000,800,500,800]
columns=['Courses','Fee','Duration','Discount']
df = pd.DataFrame(list(zip(technologies,fee,duration,discount)), columns=columns)
print(df)
```
|
Alpine takes a lot of time to install pandas and the resulting image size is also huge. I tried the python:3.8-slim-buster variant of the Python base image instead: the image build was very fast and the image size was less than half of the Alpine-based Python Docker image.
<https://github.com/dguyhasnoname/k8s-cluster-checker/blob/master/Dockerfile>
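For a quick sanity check that pandas and numpy really work inside the resulting image, you could run a tiny script in the container. This is a hypothetical check, not part of the linked repo:
```
# Minimal sanity check; assumes pandas and numpy were installed from binary
# wheels via pip, which the slim (Debian-based) image allows.
import numpy as np
import pandas as pd

print("numpy:", np.__version__)
print("pandas:", pd.__version__)
print(pd.DataFrame({"a": np.arange(3)}))
```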
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on a Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
In this case Alpine may not be the best solution; change Alpine for slim:
FROM python:3.8.3-alpine
========================
Change it to this:
`FROM python:3.8.3-slim`
In my case it was resolved with this small change.
|
The following Dockerfile worked for me to install pandas, among other dependencies as listed below.
### python:3.10-alpine Dockerfile
```
# syntax=docker/dockerfile:1
FROM python:3.10-alpine as base
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc g++ libc-dev linux-headers postgresql-dev build-base \
&& apk add libffi-dev
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt
```
### pyproject.toml dependencies
```
python = "^3.10"
Django = "^3.2.9"
djangorestframework = "^3.12.4"
PyYAML = ">=5.3.0,<6.0.0"
Markdown = "^3.3.6"
uritemplate = "^4.1.1"
install = "^1.3.5"
drf-spectacular = "^0.21.0"
django-extensions = "^3.1.5"
django-filter = "^21.1"
django-cors-headers = "^3.10.1"
httpx = "^0.22.0"
channels = "^3.0.4"
daphne = "^3.0.2"
whitenoise = "^6.2.0"
djoser = "^2.1.0"
channels-redis = "^3.4.0"
pika = "^1.2.1"
backoff = "^2.1.2"
psycopg2-binary = "^2.9.3"
pandas = "^1.5.0"
```
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on a Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
Real honest advice here: switch to a Debian-based image and all your problems will be gone.
Alpine doesn't work well for Python applications.
Here is an example of my `dockerfile`:
```
FROM python:3.7.6-buster
RUN pip install pandas==1.0.0
RUN pip install sklearn
RUN pip install Django==3.0.2
RUN pip install cx_Oracle==7.3.0
RUN pip install excel
RUN pip install djangorestframework==3.11.0
```
The `python:3.7.6-buster` is more appropriate in this case, in addition, you don't need any extra dependency in the OS.
See this useful and recent article: <https://pythonspeed.com/articles/alpine-docker-python/>:
>
> Don’t use Alpine Linux for Python images
> Unless you want massively slower build times, larger images, more work, and the potential for obscure bugs, you’ll want to avoid Alpine Linux as a base image. For some recommendations on what you should use, see my article on choosing a good base image.
>
>
>
|
This worked for me:
```
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
```
Most of the code here is from the answer of [jtlz2](https://stackoverflow.com/a/50443531/5381704) from this same thread and [Faylixe](https://stackoverflow.com/a/57485724/5381704) from another thread.
It turns out that prebuilt numpy and pandas are available from the Alpine repositories (`py3-numpy`, `py3-pandas`), but they don't get installed on the path from which Python reads imports by default. Therefore you need to add the `ENV`. Also be mindful about the Alpine version.
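As an alternative to baking `ENV PYTHONPATH=...` into the image, you can check or extend the import path at runtime. A minimal sketch, assuming the same `/usr/lib/python3.8/site-packages` location used above (adjust the path to your Python version):
```
import sys

# Site-packages directory used by the Alpine py3-* packages in this answer.
# This path is an assumption: adjust it to your Python version.
ALPINE_SITE_PACKAGES = "/usr/lib/python3.8/site-packages"

if ALPINE_SITE_PACKAGES not in sys.path:
    sys.path.append(ALPINE_SITE_PACKAGES)

import numpy as np
import pandas as pd
print(np.__version__, pd.__version__)
```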
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on a Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
**ATTENTION**
Look at the @jtlz2 answer with the **latest update**
**OUTDATED**
The py3-pandas & py3-numpy packages moved to the testing Alpine repository, so you can install them by adding these lines to your Dockerfile:
```
RUN echo "http://dl-8.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories \
&& apk update \
&& apk add py3-numpy py3-pandas
```
>
> **Hope it helps someone!**
>
>
> Alpine packages links:
>
> - [py3-pandas](https://pkgs.alpinelinux.org/packages?name=*pandas&branch=edge)
>
> - [py3-numpy](https://pkgs.alpinelinux.org/packages?name=*numpy&branch=edge)
>
>
> Alpine repositories [docs info](https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management#Repository_pinning).
>
>
>
|
This worked for me:
```
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
```
Most of the code here is from the answer of [jtlz2](https://stackoverflow.com/a/50443531/5381704) from this same thread and [Faylixe](https://stackoverflow.com/a/57485724/5381704) from another thread.
It turns out that prebuilt numpy and pandas are available from the Alpine repositories (`py3-numpy`, `py3-pandas`), but they don't get installed on the path from which Python reads imports by default. Therefore you need to add the `ENV`. Also be mindful about the Alpine version.
|
49,037,742
|
I've noticed that installing Pandas and Numpy (its dependency) in a Docker container using the base OS Alpine vs. CentOS or Debian takes much longer. I created a little test below to demonstrate the time difference. Aside from the few seconds Alpine takes to update and download the build dependencies to install Pandas and Numpy, why does the setup.py take around 70x more time than on a Debian install?
Is there any way to speed up the install using Alpine as the base image or is there another base image of comparable size to Alpine that is better to use for packages like Pandas and Numpy?
**Dockerfile.debian**
```
FROM python:3.6.4-slim-jessie
RUN pip install pandas
```
**Build Debian image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t debian-pandas -f Dockerfile.debian . --no-cache
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM python:3.6.4-slim-jessie
---> 43431c5410f3
Step 2/2 : RUN pip install pandas
---> Running in 2e4c030f8051
Collecting pandas
Downloading pandas-0.22.0-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Installing collected packages: numpy, pytz, six, python-dateutil, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 2e4c030f8051
---> a71e1c314897
Successfully built a71e1c314897
Successfully tagged debian-pandas:latest
docker build -t debian-pandas -f Dockerfile.debian . --no-cache 0.07s user 0.06s system 0% cpu 13.605 total
```
**Dockerfile.alpine**
```
FROM python:3.6.4-alpine3.7
RUN apk --update add --no-cache g++
RUN pip install pandas
```
**Build Alpine image with Pandas & Numpy:**
```
[PandasDockerTest] time docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache
Sending build context to Docker daemon 16.9kB
Step 1/3 : FROM python:3.6.4-alpine3.7
---> 4b00a94b6f26
Step 2/3 : RUN apk --update add --no-cache g++
---> Running in 4b0c32551e3f
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
(1/17) Upgrading musl (1.1.18-r2 -> 1.1.18-r3)
(2/17) Installing libgcc (6.4.0-r5)
(3/17) Installing libstdc++ (6.4.0-r5)
(4/17) Installing binutils-libs (2.28-r3)
(5/17) Installing binutils (2.28-r3)
(6/17) Installing gmp (6.1.2-r1)
(7/17) Installing isl (0.18-r0)
(8/17) Installing libgomp (6.4.0-r5)
(9/17) Installing libatomic (6.4.0-r5)
(10/17) Installing pkgconf (1.3.10-r0)
(11/17) Installing mpfr3 (3.1.5-r1)
(12/17) Installing mpc1 (1.0.3-r1)
(13/17) Installing gcc (6.4.0-r5)
(14/17) Installing musl-dev (1.1.18-r3)
(15/17) Installing libc-dev (0.7.1-r0)
(16/17) Installing g++ (6.4.0-r5)
(17/17) Upgrading musl-utils (1.1.18-r2 -> 1.1.18-r3)
Executing busybox-1.27.2-r7.trigger
OK: 184 MiB in 50 packages
Removing intermediate container 4b0c32551e3f
---> be26c3bf4e42
Step 3/3 : RUN pip install pandas
---> Running in 36f6024e5e2d
Collecting pandas
Downloading pandas-0.22.0.tar.gz (11.3MB)
Collecting python-dateutil>=2 (from pandas)
Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
Collecting pytz>=2011k (from pandas)
Downloading pytz-2018.3-py2.py3-none-any.whl (509kB)
Collecting numpy>=1.9.0 (from pandas)
Downloading numpy-1.14.1.zip (4.9MB)
Collecting six>=1.5 (from python-dateutil>=2->pandas)
Downloading six-1.11.0-py2.py3-none-any.whl
Building wheels for collected packages: pandas, numpy
Running setup.py bdist_wheel for pandas: started
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: still running...
Running setup.py bdist_wheel for pandas: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/e8/ed/46/0596b51014f3cc49259e52dff9824e1c6fe352048a2656fc92
Running setup.py bdist_wheel for numpy: started
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: still running...
Running setup.py bdist_wheel for numpy: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/9d/cd/e1/4d418b16ea662e512349ef193ed9d9ff473af715110798c984
Successfully built pandas numpy
Installing collected packages: six, python-dateutil, pytz, numpy, pandas
Successfully installed numpy-1.14.1 pandas-0.22.0 python-dateutil-2.6.1 pytz-2018.3 six-1.11.0
Removing intermediate container 36f6024e5e2d
---> a93c59e6a106
Successfully built a93c59e6a106
Successfully tagged alpine-pandas:latest
docker build -t alpine-pandas -f Dockerfile.alpine . --no-cache 0.54s user 0.33s system 0% cpu 16:08.47 total
```
|
2018/02/28
|
[
"https://Stackoverflow.com/questions/49037742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3089468/"
] |
**ANSWER: AS OF 3/9/2020, FOR PYTHON 3, IT STILL DOESN'T!**
Here is a complete working Dockerfile:
```
FROM python:3.7-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
```
The build is very sensitive to the exact python and alpine version numbers - getting these wrong seems to provoke Max Levy's error `so:libpython3.7m.so.1.0 (missing)` - but the above does now work for me.
My updated Dockerfile is available at <https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b>
---
[Earlier Update:]
**ANSWER: IT DOESN'T!**
In any Alpine Dockerfile you can simply do\*
```
RUN apk add py2-numpy@community py2-scipy@community py-pandas@edge
```
This is because `numpy`, `scipy` and now `pandas` are all available prebuilt on `alpine`:
[https://pkgs.alpinelinux.org/packages?name=\*numpy](https://pkgs.alpinelinux.org/packages?name=*numpy)
[https://pkgs.alpinelinux.org/packages?name=\*scipy&branch=edge](https://pkgs.alpinelinux.org/packages?name=*scipy&branch=edge)
[https://pkgs.alpinelinux.org/packages?name=\*pandas&branch=edge](https://pkgs.alpinelinux.org/packages?name=*pandas&branch=edge)
**One way to avoid rebuilding every time, or using a Docker layer, is to use a prebuilt, native Alpine Linux/`.apk` package, e.g.**
<https://github.com/sgerrand/alpine-pkg-py-pandas>
<https://github.com/nbgallery/apks>
You can build these `.apk`s once and use them wherever in your Dockerfile you like :)
This also saves you having to bake everything else into the Docker image before the fact - i.e. the flexibility to pre-build any Docker image you like.
PS I have put a Dockerfile stub at <https://gist.github.com/jtlz2/b0f4bc07ce2ff04bc193337f2327c13b> that shows roughly how to build the image. It includes the important steps (\*):
```
RUN echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --update --no-cache libgfortran
```
|
This worked for me:
```
FROM python:3.8-alpine
RUN echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update --no-cache py3-numpy py3-pandas@testing
ENV PYTHONPATH=/usr/lib/python3.8/site-packages
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5003
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
```
Most of the code here is from the answer of [jtlz2](https://stackoverflow.com/a/50443531/5381704) from this same thread and [Faylixe](https://stackoverflow.com/a/57485724/5381704) from another thread.
It turns out that prebuilt numpy and pandas are available from the Alpine repositories (`py3-numpy`, `py3-pandas`), but they don't get installed on the path from which Python reads imports by default. Therefore you need to add the `ENV`. Also be mindful about the Alpine version.
|
72,095,609
|
Given a 2D list and a 1D list, I have to compute their dot product, but without using `.dot`.
For example, I want to make these lists
```
matrix_A = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]]
vector_x = [0, 1, 2, 3]
```
to this output
```
result_list = [ 14 38 62 86 110 134 158 182]
```
How can I do it using only lists (not using `NumPy array` and `.dot`) in Python?
|
2022/05/03
|
[
"https://Stackoverflow.com/questions/72095609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19021583/"
] |
You could use a list comprehension with nested for loops.
```py
matrix_A = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]]
vector_x = [0, 1, 2, 3]
result_list = [sum(a*b for a,b in zip(row, vector_x)) for row in matrix_A]
print(result_list)
```
Output:
```
[14, 38, 62, 86, 110, 134, 158, 182]
```
Edit: Removed the square brackets in the list comprehension following @fshabashev's comment.
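If the comprehension feels too dense, the same computation can be written with plain nested loops. A minimal sketch using only lists:
```py
matrix_A = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15],
            [16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]]
vector_x = [0, 1, 2, 3]

result_list = []
for row in matrix_A:
    # Multiply matching elements of the row and the vector, then sum them.
    total = 0
    for a, b in zip(row, vector_x):
        total += a * b
    result_list.append(total)

print(result_list)  # [14, 38, 62, 86, 110, 134, 158, 182]
```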
|
If you do not mind using numpy, this is a solution:
```
import numpy as np
matrix_A = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]]
vector_x = [0, 1, 2, 3]
res = np.sum(np.array(matrix_A) * np.array(vector_x), axis=1)
print(res)
```
|
41,249,099
|
I am able to create a DirContext using the credentials provided, so it seems that I am connecting to the LDAP server and verifying credentials, but later on we do a .search on the context that we get from these credentials, and that is where it fails. I have included my Spring Security configuration, along with code that shows how I verified the credentials are working and the code which seems to be failing.
spring-security configuration
```
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
xmlns:beans="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/security
http://www.springframework.org/schema/security/spring-security-3.1.xsd">
<http pattern="/ui/login" security="none"></http>
<http pattern="/styles" security="none"/>
<http use-expressions="true">
<intercept-url pattern="/views/*" access="isAuthenticated()" />
<intercept-url pattern="/database/upload" access="isAuthenticated()" />
<intercept-url pattern="/database/save" access="isAuthenticated()" />
<intercept-url pattern="/database/list" access="isAuthenticated()" />
<intercept-url pattern="/database/delete" access="isAuthenticated()" />
<intercept-url pattern="/project/*" access="isAuthenticated()" />
<intercept-url pattern="/file/*" access="isAuthenticated()" />
<intercept-url pattern="/amazon/*" access="isAuthenticated()" />
<intercept-url pattern="/python/*" access="isAuthenticated()" />
<intercept-url pattern="/r/*" access="isAuthenticated()" />
<intercept-url pattern="/project/*" access="isAuthenticated()" />
<intercept-url pattern="/image/*" access="isAuthenticated()" />
<intercept-url pattern="/shell/*" access="isAuthenticated()" />
<intercept-url pattern="/register" access="hasRole('ROLE_ADMIN')" />
<intercept-url pattern="/user/save" access="hasRole('ROLE_ADMIN')" />
<intercept-url pattern="/user/userAdministrator" access="hasRole('ROLE_ADMIN')" />
<intercept-url pattern="/user/list" access="isAuthenticated()" />
<intercept-url pattern="/user/archive" access="isAuthenticated()" />
<form-login login-page="/login" default-target-url="/views/main"
authentication-failure-url="/loginfailed"/>
<logout logout-success-url="/login" />
</http>
<beans:bean id="ldapAuthProvider"
class="org.springframework.security.ldap.authentication.ad.ActiveDirectoryLdapAuthenticationProvider">
<beans:constructor-arg value="simplead.blazingdb.com" />
<beans:constructor-arg value="ldap://simplead.blazingdb.com/" />
</beans:bean>
<authentication-manager alias="authenticationManager" erase-credentials="true">
<authentication-provider ref="ldapAuthProvider">
</authentication-provider>
</authentication-manager>
</beans:beans>
```
from ActiveDirectoryLdapAuthenticationProvider
```
@Override
protected DirContextOperations doAuthentication(UsernamePasswordAuthenticationToken auth) {
String username = auth.getName();
String password = (String)auth.getCredentials();
DirContext ctx = bindAsUser(username, password);
try {
return searchForUser(ctx, username);
} catch (NamingException e) {
logger.error("Failed to locate directory entry for authenticated user: " + username, e);
throw badCredentials();
} finally {
LdapUtils.closeContext(ctx);
}
}
```
This returns just fine as long as I pass in the correct credentials, and fails if I send the wrong credentials, so I know that we are making it this far.
The issue comes inside of SpringSecurityLdapTemplate
```
public static DirContextOperations searchForSingleEntryInternal(DirContext ctx, SearchControls searchControls,
String base, String filter, Object[] params) throws NamingException {
final DistinguishedName ctxBaseDn = new DistinguishedName(ctx.getNameInNamespace());
final DistinguishedName searchBaseDn = new DistinguishedName(base);
final NamingEnumeration<SearchResult> resultsEnum = ctx.search(searchBaseDn, filter, params, searchControls);
if (logger.isDebugEnabled()) {
logger.debug("Searching for entry under DN '" + ctxBaseDn
+ "', base = '" + searchBaseDn + "', filter = '" + filter + "'");
}
Set<DirContextOperations> results = new HashSet<DirContextOperations>();
try {
while (resultsEnum.hasMore()) {
SearchResult searchResult = resultsEnum.next();
// Work out the DN of the matched entry
DistinguishedName dn = new DistinguishedName(new CompositeName(searchResult.getName()));
if (base.length() > 0) {
dn.prepend(searchBaseDn);
}
if (logger.isDebugEnabled()) {
logger.debug("Found DN: " + dn);
}
results.add(new DirContextAdapter(searchResult.getAttributes(), dn, ctxBaseDn));
}
} catch (PartialResultException e) {
LdapUtils.closeEnumeration(resultsEnum);
logger.info("Ignoring PartialResultException");
}
if (results.size() == 0) {
throw new IncorrectResultSizeDataAccessException(1, 0);
}
if (results.size() > 1) {
throw new IncorrectResultSizeDataAccessException(1, results.size());
}
return results.iterator().next();
}
```
Specifically, the following line is where I think I am seeing issues. We get a result of size 0 when it is expecting 1, so it throws an error and the whole thing fails.
```
final NamingEnumeration<SearchResult> resultsEnum = ctx.search(searchBaseDn, filter, params, searchControls);
```
Whenever we try to call resultsEnum.hasMore() we catch a PartialResultException.
I am trying to figure out why this is the case. I am using Amazon Simple directory service (the one that is backed by Samba not the MSFT version). I am very new to LDAP and Active Directory so if my question is poorly formed please let me know what information I need to add.
|
2016/12/20
|
[
"https://Stackoverflow.com/questions/41249099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1431499/"
] |
The method searchForUser calls SpringSecurityLdapTemplate.searchForSingleEntryInternal, passing it an array of objects. The first object of the array corresponds to username@domain; the second one is the username itself. So when you search for (&(objectClass=user)(sAMAccountName={0})) in Active Directory, you are passing username@domain as the value for the {0} parameter of the search. You just have to pass the search filter like this:
(&(objectClass=user)(sAMAccountName={1}))
**EDIT:**
I'm assuming that you have passed the searchFilter to the ActiveDirectoryLdapAuthenticationProvider object. If you haven't, you have to.
|
The issue was pretty straightforward once I used Apache Directory Studio to try and run the ldap queries coming out of Spring Security Active Directory defaults. They assume you have an attribute called userPrincipalName which is a combination of the sAMAccountName and the domain.
In the end I had to set the searchFilter to use sAMAccountName, and I had to make my own version of ActiveDirectoryLdapAuthenticationProvider that only looks for users inside the domain being used but compares only sAMAccountName. I only changed searchForUser, but since this is a final class I had to copy the whole thing over. I HATE having to do this, but I need to keep moving and these are not configurable options in Spring Security 3.2.9.
```
package org.springframework.security.ldap.authentication.ad;
import org.springframework.dao.IncorrectResultSizeDataAccessException;
import org.springframework.ldap.core.DirContextOperations;
import org.springframework.ldap.core.DistinguishedName;
import org.springframework.ldap.core.support.DefaultDirObjectFactory;
import org.springframework.ldap.support.LdapUtils;
import org.springframework.security.authentication.AccountExpiredException;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.CredentialsExpiredException;
import org.springframework.security.authentication.DisabledException;
import org.springframework.security.authentication.LockedException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.ldap.SpringSecurityLdapTemplate;
import org.springframework.security.ldap.authentication.AbstractLdapAuthenticationProvider;
import org.springframework.security.ldap.authentication.ad.ActiveDirectoryAuthenticationException;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.OperationNotSupportedException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.ldap.InitialLdapContext;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public final class BlazingActiveDirectory extends AbstractLdapAuthenticationProvider {
private static final Pattern SUB_ERROR_CODE = Pattern.compile(".*data\\s([0-9a-f]{3,4}).*");
// Error codes
private static final int USERNAME_NOT_FOUND = 0x525;
private static final int INVALID_PASSWORD = 0x52e;
private static final int NOT_PERMITTED = 0x530;
private static final int PASSWORD_EXPIRED = 0x532;
private static final int ACCOUNT_DISABLED = 0x533;
private static final int ACCOUNT_EXPIRED = 0x701;
private static final int PASSWORD_NEEDS_RESET = 0x773;
private static final int ACCOUNT_LOCKED = 0x775;
private final String domain;
private final String rootDn;
private final String url;
private boolean convertSubErrorCodesToExceptions;
private String searchFilter = "(&(objectClass=user)(userPrincipalName={0}))";
// Only used to allow tests to substitute a mock LdapContext
ContextFactory contextFactory = new ContextFactory();
/**
* @param domain the domain name (may be null or empty)
* @param url an LDAP url (or multiple URLs)
*/
public BlazingActiveDirectory(String domain, String url) {
Assert.isTrue(StringUtils.hasText(url), "Url cannot be empty");
this.domain = StringUtils.hasText(domain) ? domain.toLowerCase() : null;
this.url = url;
rootDn = this.domain == null ? null : rootDnFromDomain(this.domain);
}
@Override
protected DirContextOperations doAuthentication(UsernamePasswordAuthenticationToken auth) {
String username = auth.getName();
String password = (String) auth.getCredentials();
DirContext ctx = bindAsUser(username, password);
try {
return searchForUser(ctx, username);
} catch (NamingException e) {
logger.error("Failed to locate directory entry for authenticated user: " + username, e);
throw badCredentials(e);
} finally {
LdapUtils.closeContext(ctx);
}
}
/**
* Creates the user authority list from the values of the {@code memberOf} attribute obtained from the user's
* Active Directory entry.
*/
@Override
protected Collection<? extends GrantedAuthority> loadUserAuthorities(DirContextOperations userData, String username, String password) {
String[] groups = userData.getStringAttributes("memberOf");
if (groups == null) {
logger.debug("No values for 'memberOf' attribute.");
return AuthorityUtils.NO_AUTHORITIES;
}
if (logger.isDebugEnabled()) {
logger.debug("'memberOf' attribute values: " + Arrays.asList(groups));
}
ArrayList<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>(groups.length);
for (String group : groups) {
authorities.add(new SimpleGrantedAuthority(new DistinguishedName(group).removeLast().getValue()));
}
return authorities;
}
private DirContext bindAsUser(String username, String password) {
// TODO. add DNS lookup based on domain
final String bindUrl = url;
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.SECURITY_AUTHENTICATION, "simple");
String bindPrincipal = createBindPrincipal(username);
env.put(Context.SECURITY_PRINCIPAL, bindPrincipal);
env.put(Context.PROVIDER_URL, bindUrl);
env.put(Context.SECURITY_CREDENTIALS, password);
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.OBJECT_FACTORIES, DefaultDirObjectFactory.class.getName());
try {
return contextFactory.createContext(env);
} catch (NamingException e) {
if ((e instanceof AuthenticationException) || (e instanceof OperationNotSupportedException)) {
handleBindException(bindPrincipal, e);
throw badCredentials(e);
} else {
throw LdapUtils.convertLdapException(e);
}
}
}
private void handleBindException(String bindPrincipal, NamingException exception) {
if (logger.isDebugEnabled()) {
logger.debug("Authentication for " + bindPrincipal + " failed:" + exception);
}
int subErrorCode = parseSubErrorCode(exception.getMessage());
if (subErrorCode <= 0) {
logger.debug("Failed to locate AD-specific sub-error code in message");
return;
}
logger.info("Active Directory authentication failed: " + subCodeToLogMessage(subErrorCode));
if (convertSubErrorCodesToExceptions) {
raiseExceptionForErrorCode(subErrorCode, exception);
}
}
private int parseSubErrorCode(String message) {
Matcher m = SUB_ERROR_CODE.matcher(message);
if (m.matches()) {
return Integer.parseInt(m.group(1), 16);
}
return -1;
}
private void raiseExceptionForErrorCode(int code, NamingException exception) {
String hexString = Integer.toHexString(code);
Throwable cause = new ActiveDirectoryAuthenticationException(hexString, exception.getMessage(), exception);
switch (code) {
case PASSWORD_EXPIRED:
throw new CredentialsExpiredException(messages.getMessage("LdapAuthenticationProvider.credentialsExpired",
"User credentials have expired"), cause);
case ACCOUNT_DISABLED:
throw new DisabledException(messages.getMessage("LdapAuthenticationProvider.disabled",
"User is disabled"), cause);
case ACCOUNT_EXPIRED:
throw new AccountExpiredException(messages.getMessage("LdapAuthenticationProvider.expired",
"User account has expired"), cause);
case ACCOUNT_LOCKED:
throw new LockedException(messages.getMessage("LdapAuthenticationProvider.locked",
"User account is locked"), cause);
default:
throw badCredentials(cause);
}
}
private String subCodeToLogMessage(int code) {
switch (code) {
case USERNAME_NOT_FOUND:
return "User was not found in directory";
case INVALID_PASSWORD:
return "Supplied password was invalid";
case NOT_PERMITTED:
return "User not permitted to logon at this time";
case PASSWORD_EXPIRED:
return "Password has expired";
case ACCOUNT_DISABLED:
return "Account is disabled";
case ACCOUNT_EXPIRED:
return "Account expired";
case PASSWORD_NEEDS_RESET:
return "User must reset password";
case ACCOUNT_LOCKED:
return "Account locked";
}
return "Unknown (error code " + Integer.toHexString(code) +")";
}
private BadCredentialsException badCredentials() {
return new BadCredentialsException(messages.getMessage(
"LdapAuthenticationProvider.badCredentials", "Bad credentials"));
}
private BadCredentialsException badCredentials(Throwable cause) {
return (BadCredentialsException) badCredentials().initCause(cause);
}
private DirContextOperations searchForUser(DirContext context, String username) throws NamingException {
SearchControls searchControls = new SearchControls();
searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
String bindPrincipal = createBindPrincipal(username);
String searchRoot = rootDn != null ? rootDn : searchRootFromPrincipal(bindPrincipal);
try {
String verifyName = username;
if(username.indexOf("@") != -1){
verifyName = username.substring(0,username.indexOf("@"));
}
return SpringSecurityLdapTemplate.searchForSingleEntryInternal(context, searchControls,
searchRoot, searchFilter, new Object[]{verifyName});
} catch (IncorrectResultSizeDataAccessException incorrectResults) {
// Search should never return multiple results if properly configured - just rethrow
if (incorrectResults.getActualSize() != 0) {
throw incorrectResults;
}
// If we found no results, then the username/password did not match
UsernameNotFoundException userNameNotFoundException = new UsernameNotFoundException("User " + username
+ " not found in directory.", incorrectResults);
throw badCredentials(userNameNotFoundException);
}
}
private String searchRootFromPrincipal(String bindPrincipal) {
int atChar = bindPrincipal.lastIndexOf('@');
if (atChar < 0) {
logger.debug("User principal '" + bindPrincipal + "' does not contain the domain, and no domain has been configured");
throw badCredentials();
}
return rootDnFromDomain(bindPrincipal.substring(atChar + 1, bindPrincipal.length()));
}
private String rootDnFromDomain(String domain) {
String[] tokens = StringUtils.tokenizeToStringArray(domain, ".");
StringBuilder root = new StringBuilder();
for (String token : tokens) {
if (root.length() > 0) {
root.append(',');
}
root.append("dc=").append(token);
}
return root.toString();
}
String createBindPrincipal(String username) {
if (domain == null || username.toLowerCase().endsWith(domain)) {
return username;
}
return username + "@" + domain;
}
/**
* By default, a failed authentication (LDAP error 49) will result in a {@code BadCredentialsException}.
* <p>
* If this property is set to {@code true}, the exception message from a failed bind attempt will be parsed
* for the AD-specific error code and a {@link CredentialsExpiredException}, {@link DisabledException},
* {@link AccountExpiredException} or {@link LockedException} will be thrown for the corresponding codes. All
* other codes will result in the default {@code BadCredentialsException}.
*
* @param convertSubErrorCodesToExceptions {@code true} to raise an exception based on the AD error code.
*/
public void setConvertSubErrorCodesToExceptions(boolean convertSubErrorCodesToExceptions) {
this.convertSubErrorCodesToExceptions = convertSubErrorCodesToExceptions;
}
/**
* The LDAP filter string to search for the user being authenticated.
* Occurrences of {0} are replaced with the {@code username@domain}.
* <p>
* Defaults to: {@code (&(objectClass=user)(userPrincipalName={0}))}
* </p>
*
* @param searchFilter the filter string
*
* @since 3.2.6
*/
public void setSearchFilter(String searchFilter) {
Assert.hasText(searchFilter,"searchFilter must have text");
this.searchFilter = searchFilter;
}
static class ContextFactory {
DirContext createContext(Hashtable<?,?> env) throws NamingException {
return new InitialLdapContext(env, null);
}
}
}
```
|
49,641,899
|
I am not familiar with how to export lists to a `csv` file in Python. Here is my code for one list:
```
import csv
X = ([1,2,3],[7,8,9])
Y = ([4,5,6],[3,4,5])
for x in range(0,2,1):
csvfile = "C:/Temp/aaa.csv"
with open(csvfile, "w") as output:
writer = csv.writer(output, lineterminator='\n')
for val in x[0]:
writer.writerow([val])
```
And I want to the result:
[](https://i.stack.imgur.com/kqiIN.png)
Then how should I modify the code (the main problem is how to write to a different column)?
|
2018/04/04
|
[
"https://Stackoverflow.com/questions/49641899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6703592/"
] |
To output multiple columns you can use [`zip()`](https://docs.python.org/3/library/functions.html#zip) like:
### Code:
```
import csv
x0 = [1, 2, 3]
y0 = [4, 5, 6]
x2 = [7, 8, 9]
y2 = [3, 4, 5]
csvfile = "aaa.csv"
with open(csvfile, "w") as output:
writer = csv.writer(output, lineterminator='\n')
writer.writerow(['x=0', None, None, 'x=2'])
writer.writerow(['x', 'y', None, 'x', 'y'])
for val in zip(x0, y0, [None] * len(x0), x2, y2):
writer.writerow(val)
```
### Results:
```
x=0,,,x=2
x,y,,x,y
1,4,,7,3
2,5,,8,4
3,6,,9,5
```
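If the two column groups can have different lengths, `itertools.zip_longest` pads the shorter ones instead of truncating. A small variation on the code above (the extra row is only there to show the padding):
```
import csv
from itertools import zip_longest

x0 = [1, 2, 3]
y0 = [4, 5, 6]
x2 = [7, 8, 9, 10]   # one extra value to demonstrate the padding
y2 = [3, 4, 5, 6]

with open("aaa.csv", "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerow(['x=0', None, None, 'x=2'])
    writer.writerow(['x', 'y', None, 'x', 'y'])
    for val in zip_longest(x0, y0, [None] * len(x0), x2, y2, fillvalue=None):
        writer.writerow(val)
```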
|
You could try something like this, copying the first two columns of an existing file into a new one:
```
import csv
with open('file.csv') as fin, open('out.csv', 'w', newline='') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    for r in reader:
        writer.writerow([r[0], r[1]])
```
If you need further help, leave a comment.
|
49,641,899
|
I am not familiar with how to export lists to a `csv` file in Python. Here is my code for one list:
```
import csv
X = ([1,2,3],[7,8,9])
Y = ([4,5,6],[3,4,5])
for x in range(0,2,1):
csvfile = "C:/Temp/aaa.csv"
with open(csvfile, "w") as output:
writer = csv.writer(output, lineterminator='\n')
for val in x[0]:
writer.writerow([val])
```
And I want to the result:
[](https://i.stack.imgur.com/kqiIN.png)
Then how should I modify the code (the main problem is how to write to a different column)?
|
2018/04/04
|
[
"https://Stackoverflow.com/questions/49641899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6703592/"
] |
To output multiple columns you can use [`zip()`](https://docs.python.org/3/library/functions.html#zip) like:
### Code:
```
import csv
x0 = [1, 2, 3]
y0 = [4, 5, 6]
x2 = [7, 8, 9]
y2 = [3, 4, 5]
csvfile = "aaa.csv"
with open(csvfile, "w") as output:
writer = csv.writer(output, lineterminator='\n')
writer.writerow(['x=0', None, None, 'x=2'])
writer.writerow(['x', 'y', None, 'x', 'y'])
for val in zip(x0, y0, [None] * len(x0), x2, y2):
writer.writerow(val)
```
### Results:
```
x=0,,,x=2
x,y,,x,y
1,4,,7,3
2,5,,8,4
3,6,,9,5
```
|
When dealing with CSV files you should really just use pandas. Put your header and data into a DataFrame, and then use the .to\_csv method on that DataFrame. CSV can get tricky when you have strings that contain commas, etc...
<https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html>
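A minimal sketch of that approach, using the same two pairs of lists from the question and a hypothetical output file `aaa.csv`:
```
import pandas as pd

x0, y0 = [1, 2, 3], [4, 5, 6]
x2, y2 = [7, 8, 9], [3, 4, 5]

# Each list becomes one column of the DataFrame; to_csv handles the quoting.
df = pd.DataFrame({"x0": x0, "y0": y0, "x2": x2, "y2": y2})
df.to_csv("aaa.csv", index=False)
```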
|
52,149,479
|
I am preprocessing a time series dataset, changing its shape from 2 dimensions (datapoints, features) into 3 dimensions (datapoints, time\_window, features).
In this context the time window (sometimes also called look back) indicates the number of previous time steps/datapoints that are used as input variables to predict the next time period. In other words, the time window is how much past data the machine learning algorithm takes into consideration for a single prediction of the future.
The issue with this approach (or at least with my implementation) is that it is quite inefficient in terms of memory usage, since it duplicates data across the windows, causing the input to become very heavy.
This is the function that I have been using so far to reshape the input data into a 3 dimensional structure.
```
from sys import getsizeof
def time_framer(data_to_frame, window_size=1):
"""It transforms a 2d dataset into 3d based on a specific size;
original function can be found at:
https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/
"""
n_datapoints = data_to_frame.shape[0] - window_size
framed_data = np.empty(
shape=(n_datapoints, window_size, data_to_frame.shape[1],)).astype(np.float32)
for index in range(n_datapoints):
framed_data[index] = data_to_frame[index:(index + window_size)]
print(framed_data.shape)
# it prints the size of the output in MB
print(framed_data.nbytes / 10 ** 6)
print(getsizeof(framed_data) / 10 ** 6)
# quick and dirty quality test to check if the data has been correctly reshaped
test1=list(set(framed_data[0][1]==framed_data[1][0]))
if test1[0] and len(test1)==1:
print('Data is correctly framed')
return framed_data
```
It has been suggested to me to use [numpy's strides trick](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.lib.stride_tricks.as_strided.html#numpy.lib.stride_tricks.as_strided) to overcome this problem and reduce the size of the reshaped data. Unfortunately, every resource I have found so far on this subject focuses on applying the trick to a 2-dimensional array, just like this [excellent tutorial](https://www.youtube.com/watch?v=XijN-pjOz-A). I have been struggling with my use case, which involves a 3-dimensional output. Here is the best I came up with; however, it neither succeeds in reducing the size of the framed\_data, nor does it frame the data correctly, as it does not pass the quality test.
I am quite sure that my error is in the *strides* parameter, which I have not fully understood. The *new\_strides* values are the only ones I managed to successfully feed to *as\_strided*.
```
from numpy.lib.stride_tricks import as_strided
def strides_trick_time_framer(data_to_frame, window_size=1):
new_strides = (data_to_frame.strides[0],
data_to_frame.strides[0]*data_to_frame.shape[1] ,
data_to_frame.strides[0]*window_size)
n_datapoints = data_to_frame.shape[0] - window_size
print('striding.....')
framed_data = as_strided(data_to_frame,
shape=(n_datapoints, # .flatten() here did not change the outcome
window_size,
data_to_frame.shape[1]),
strides=new_strides).astype(np.float32)
# it prints the size of the output in MB
print(framed_data.nbytes / 10 ** 6)
print(getsizeof(framed_data) / 10 ** 6)
# quick and dirty test to check if the data has been correctly reshaped
test1=list(set(framed_data[0][1]==framed_data[1][0]))
if test1[0] and len(test1)==1:
print('Data is correctly framed')
return framed_data
```
Any help would be highly appreciated!
|
2018/09/03
|
[
"https://Stackoverflow.com/questions/52149479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3482860/"
] |
`os.system` is discouraged these days. Use `subprocess` instead, which will handle the quoting nicely for you.
Since you have a pipe, you would normally have to create 2 `subprocess` objects, but here you just want to feed standard input so:
```
import subprocess
p = subprocess.Popen(["/usr/bin/cmd","-parameters"],stdin=subprocess.PIPE)
p.communicate('{{"value": {:0.0f}}}\n'.format(value).encode()) # we need to provide bytes
rc = p.wait()
```
The quoting issue is gone, since you're not using a system command to provide the argument, but pure python.
To test this I have changed the command to `more` so I can run it on Windows (which also shows that this approach is portable):
```
import subprocess
value=12.0
p = subprocess.Popen(["more.com"],stdin=subprocess.PIPE)
p.communicate('{{"value": {:0.0f}}}\n'.format(value).encode())
rc = p.wait()
```
that prints: `{"value": 12}`
|
If you want to echo quotes, you need to escape them.
For example:
```
echo "value"
```
>
> value
>
>
>
```
echo "\"value\""
```
>
> "value"
>
>
>
So your python code should look like
```
os.system('echo {{\\"value\\": {0:0.0f}}} | /usr/bin/cmd -parameters'.format(value))
```
Note that you should use double backslashes (`\\`), because Python itself would consume a single backslash.
|
72,068,358
|
Why doesn't Python raise an error when I try to do this? Instead it prints nothing.
```
empty = []
for i in empty:
for y in i:
print(y)
```
Does Python stop iterating at the first level, or does it iterate and print **None**?
I found this when trying to create an infinite loop, but it stops when the list becomes empty.
|
2022/04/30
|
[
"https://Stackoverflow.com/questions/72068358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18132490/"
] |
There is nothing to iterate over in the `empty` list, so the `for` loop won't run even once. However, if you need an error to be raised in this case, you can raise an exception like below:
```
empty = []
if len(empty) == 0:
raise Exception("List is empty")
for i in empty:
for y in i:
print(y)
```
|
Your list variable `empty` is initialized as an empty list, so the loop body is never entered in the first place.
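A quick way to see this for yourself (a tiny sketch, not part of the original answer):
```
empty = []
iterations = 0
for i in empty:        # the body never runs for an empty list
    iterations += 1
    for y in i:        # so this inner loop is never even reached
        print(y)
print(iterations)      # prints 0
```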
|
72,068,358
|
Why doesn't Python raise an error when I try to do this? Instead it prints nothing.
```
empty = []
for i in empty:
for y in i:
print(y)
```
Does Python stop iterating at the first level, or does it iterate and print **None**?
I found this when trying to create an infinite loop, but it stops when the list becomes empty.
|
2022/04/30
|
[
"https://Stackoverflow.com/questions/72068358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18132490/"
] |
There is nothing to iterate over in the `empty` list, so the `for` loop won't run even once. However, if you need an error to be raised in this case, you can raise an exception like below:
```
empty = []
if len(empty) == 0:
raise Exception("List is empty")
for i in empty:
for y in i:
print(y)
```
|
Since the list is empty, the loop will not run at all.
|
60,233,935
|
I have been trying to get my first attempt at templating with Ansible to work and I am stuck on the following exception. As far as I can see, I think I have maintained the indentation well and also validated the yml file. I don't know where to go from here, please help! Below is the yml file, followed by the exception I saw after running the playbook.
```
---
- name: run these tasks on the host
hosts:
testhost:
testhost1: "172.16.201.163"
vars:
ansible_port: 22
tasks:
- name: Templating
template:
dest: /etc/my_test.conf
owner: root
src: my_test.j2
become: true
```
The output from the run
```
ERROR! Unexpected Exception, this is probably a bug: unhashable type: 'AnsibleMapping'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 118, in <module>
exit_code = cli.run()
File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/cli/playbook.py", line 122, in run
results = pbex.run()
File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/executor/playbook_executor.py", line 106, in run
all_vars = self._variable_manager.get_vars(play=play)
File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 185, in get_vars
include_delegate_to=include_delegate_to,
File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/vars/manager.py", line 470, in _get_magic_variables
variables['ansible_play_hosts_all'] = [x.name for x in self._inventory.get_hosts(pattern=pattern, ignore_restrictions=True)]
File "/usr/local/Cellar/ansible/2.7.9/libexec/lib/python3.7/site-packages/ansible/inventory/manager.py", line 358, in get_hosts
if pattern_hash not in self._hosts_patterns_cache:
TypeError: unhashable type: 'AnsibleMapping'
```
|
2020/02/14
|
[
"https://Stackoverflow.com/questions/60233935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4682497/"
] |
There are at least two things wrong with the playbook you posted:
1. `hosts:` is a `dict`, but should not be
2. `testhost:` has a `null` value
[Reading the fine manual](https://docs.ansible.com/ansible/2.9/reference_appendices/playbooks_keywords.html#term-hosts) shows that `hosts:` should be a string or `list[str]`, but may not be a `dict`. Perhaps what you are trying to accomplish is best achieved via an inventory file, or a dynamic inventory plugin/script.
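A minimal corrected sketch (assuming `testhost1` is defined in your inventory; the name-to-IP mapping shown in the question belongs in an inventory file, not under `hosts:`):
```yaml
---
- name: run these tasks on the host
  hosts: testhost1
  vars:
    ansible_port: 22
  tasks:
    - name: Templating
      template:
        dest: /etc/my_test.conf
        owner: root
        src: my_test.j2
      become: true
```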
|
This error can also happen if you wind up with the wrong structure in your playbook.
For example:
```yaml
tags: # oops
- role: foo/myrole
```
instead of
```yaml
roles:
- role: foo/myrole
```
|
19,306,963
|
In a file like:
```
jaslkfdj,asldkfj,,,
slakj,aklsjf,,,
lsak,sajf,,,
```
How can you split it up so there is just a key/value pair of the two words? I tried to split using commas, but the only way I know how to make key/value pairs is when there is only one comma in a line.
Python gives the error "ValueError: too many values to unpack (expected 2)" because of the 3 extra commas at the end of each line.
This is what I have:
```
newdict= {}
wd = open('file.csv', 'r')
for line in wd:
key,val = line.split(',')
newdict[key]=val
print(newdict)
```
|
2013/10/10
|
[
"https://Stackoverflow.com/questions/19306963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2844776/"
] |
It seems more likely that what you tried is this:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',')
ValueError: too many values to unpack (expected 2)
```
There are two ways around this.
First, you can split, and then just take the first two values:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> parts = line.split(',')
>>> key, value = parts[:2]
```
Or you can use a `maxsplit` argument:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',', 1)
```
This second one will leave extra commas on the end of `value`, but that's easy to fix:
```
>>> value = value.rstrip(',')
```
|
Try slicing for the first two values:
```
"a,b,,,,,".split(",")[:2]
```
Nice summary of slice notation in [this answer](https://stackoverflow.com/a/509295/169121).
|
19,306,963
|
In a file like:
```
jaslkfdj,asldkfj,,,
slakj,aklsjf,,,
lsak,sajf,,,
```
How can you split it up so there is just a key/value pair of the two words? I tried to split using commas, but the only way I know how to make key/value pairs is when there is only one comma in a line.
Python gives the error "ValueError: too many values to unpack (expected 2)" because of the 3 extra commas at the end of each line.
This is what I have:
```
newdict= {}
wd = open('file.csv', 'r')
for line in wd:
key,val = line.split(',')
newdict[key]=val
print(newdict)
```
|
2013/10/10
|
[
"https://Stackoverflow.com/questions/19306963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2844776/"
] |
Try slicing for the first two values:
```
"a,b,,,,,".split(",")[:2]
```
Nice summary of slice notation in [this answer](https://stackoverflow.com/a/509295/169121).
|
```
with open('file.csv', 'r') as wd:
newdict = dict(line.split(",")[:2] for line in wd.read().splitlines())
print newdict
```
The result is follows:
`{' jaslkfdj': 'asldkfj', ' lsak': 'sajf', ' slakj': 'aklsjf'}`
|
19,306,963
|
In a file like:
```
jaslkfdj,asldkfj,,,
slakj,aklsjf,,,
lsak,sajf,,,
```
How can you split it up so there is just a key/value pair of the two words? I tried to split using commas, but the only way I know how to make key/value pairs is when there is only one comma in a line.
Python gives the error "ValueError: too many values to unpack (expected 2)" because of the 3 extra commas at the end of each line.
This is what I have:
```
newdict= {}
wd = open('file.csv', 'r')
for line in wd:
key,val = line.split(',')
newdict[key]=val
print(newdict)
```
|
2013/10/10
|
[
"https://Stackoverflow.com/questions/19306963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2844776/"
] |
It seems more likely that what you tried is this:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',')
ValueError: too many values to unpack (expected 2)
```
There are two ways around this.
First, you can split, and then just take the first two values:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> parts = line.split(',')
>>> key, value = parts[:2]
```
Or you can use a `maxsplit` argument:
```
>>> line = 'jaslkfdj,asldkfj,,,'
>>> key, value = line.split(',', 1)
```
This second one will leave extra commas on the end of `value`, but that's easy to fix:
```
>>> value = value.rstrip(',')
```
|
```
with open('file.csv', 'r') as wd:
newdict = dict(line.split(",")[:2] for line in wd.read().splitlines())
print newdict
```
The result is follows:
`{' jaslkfdj': 'asldkfj', ' lsak': 'sajf', ' slakj': 'aklsjf'}`
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
With this as the input:
```
$ cat file
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
Try:
```
$ sed -E ':a; s/^( *) ([^ ])/\1.\2/; ta' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
### How it works:
* `:a`
This creates a label `a`.
* `s/^( *) ([^ ])/\1.\2/`
If the line begins with spaces, this replaces the last space in the leading spaces with a period.
In more detail, `^( *)` matches all leading blanks except the last and stores them in group 1. The regex `([^ ])` (which, despite what stackoverflow makes it look like, consists of a blank followed by `([^ ])`) matches a blank followed by a nonblank and stores the nonblank in group 2.
`\1.\2` replaces the matched text with group 1, followed by a period, followed by group 2.
* `ta`
If the substituted command resulted in a substitution, then branch back to label `a` and try over again.
### Compatibility:
1. The above was tested on modern GNU sed. For BSD/OSX sed, one might or might not need to use:
```
sed -E -e :a -e 's/^( *) ([^ ])/\1.\2/' -e ta file
```
On ancient GNU sed, one needs to use `-r` in place of `-E`:
```
sed -r ':a; s/^( *) ([^ ])/\1.\2/; ta' file
```
2. The above assumed that the spaces were blanks. If they are tabs, then you will have to decide what your tabstop is and make substitutions accordingly.
|
There are two different ways to do this in vim.
1. With a regex:
```
:%s/^\s\+/\=repeat('.', len(submatch(0)))
```
This is fairly straightforward, but a little verbose. It uses the eval register (`\=`) to generate a string of `'.'`s the same length as the number of spaces at the beginning of each line.
2. With a norm command:
```
:%norm ^hviwr.
```
This is a much more conveniently short command, although it's a little harder to understand. It visually selects the spaces at the beginning of a line, and replaces the whole selection with dots. If there is no leading space, the command will fail on `^h` because the cursor attempts to move out of bounds.
To see how this works, try typing `^hviwr.` on a line that has leading spaces to see it happen.
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
With this as the input:
```
$ cat file
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
Try:
```
$ sed -E ':a; s/^( *) ([^ ])/\1.\2/; ta' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
### How it works:
* `:a`
This creates a label `a`.
* `s/^( *) ([^ ])/\1.\2/`
If the line begins with spaces, this replaces the last space in the leading spaces with a period.
In more detail, `^( *)` matches all leading blanks except the last and stores them in group 1. The regex `([^ ])` (which, despite what stackoverflow makes it look like, consists of a blank followed by `([^ ])`) matches a blank followed by a nonblank and stores the nonblank in group 2.
`\1.\2` replaces the matched text with group 1, followed by a period, followed by group 2.
* `ta`
If the substituted command resulted in a substitution, then branch back to label `a` and try over again.
### Compatibility:
1. The above was tested on modern GNU sed. For BSD/OSX sed, one might or might not need to use:
```
sed -E -e :a -e 's/^( *) ([^ ])/\1.\2/' -e ta file
```
On ancient GNU sed, one needs to use `-r` in place of `-E`:
```
sed -r ':a; s/^( *) ([^ ])/\1.\2/; ta' file
```
2. The above assumed that the spaces were blanks. If they are tabs, then you will have to decide what your tabstop is and make substitutions accordingly.
|
A little lengthy, but a fun exercise nonetheless:
```
# Function to count the number of leading spaces in a string
# Basically, this counts the number of consecutive elements that satisfy being spaces
def count_leading_spaces(s):
if not s:
return 0
else:
curr_char = s[0]
if curr_char != ' ':
return 0
else:
idx = 1
curr_char = s[idx]
while curr_char == ' ':
idx += 1
try:
curr_char = s[idx]
except IndexError:
return idx
return idx
```
Finally, open up the file and do some work:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = count_leading_spaces(line)
# Impute periods for spaces
new_line = ('.'*n_leading_spaces + line.lstrip()).rstrip()
data.append(new_line)
```
Results:
```
>>> print('\n'.join(data))
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
You could also do it this way, which is much simpler:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = len(line) - len(line.lstrip())
# Impute periods for spaces
new_line = line.lstrip().rjust(len(line), '.').rstrip()
data.append(new_line)
```
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
With this as the input:
```
$ cat file
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
Try:
```
$ sed -E ':a; s/^( *) ([^ ])/\1.\2/; ta' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
### How it works:
* `:a`
This creates a label `a`.
* `s/^( *) ([^ ])/\1.\2/`
If the line begins with spaces, this replaces the last space in the leading spaces with a period.
In more detail, `^( *)` matches all leading blanks except the last and stores them in group 1. The regex `([^ ])` (which, despite what stackoverflow makes it look like, consists of a blank followed by `([^ ])`) matches a blank followed by a nonblank and stores the nonblank in group 2.
`\1.\2` replaces the matched text with group 1, followed by a period, followed by group 2.
* `ta`
If the substituted command resulted in a substitution, then branch back to label `a` and try over again.
### Compatibility:
1. The above was tested on modern GNU sed. For BSD/OSX sed, one might or might not need to use:
```
sed -E -e :a -e 's/^( *) ([^ ])/\1.\2/' -e ta file
```
On ancient GNU sed, one needs to use `-r` in place of `-E`:
```
sed -r ':a; s/^( *) ([^ ])/\1.\2/; ta' file
```
2. The above assumed that the spaces were blanks. If they are tabs, then you will have to decide what your tabstop is and make substitutions accordingly.
|
Since you said **`python`**:
```
#!/usr/bin/env python
import re, sys
for line in sys.stdin:
sys.stdout.write(re.sub('^ +', lambda m: len(m.group(0)) * '.', line))
```
(for each line, we replace the longest run of prefix spaces `'^ +'` with an equally long string of dots, `len(m.group(0)) * '.'`).
With the end result:
```
$ ./dottify.py <file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
---
Since you said **`awk`**:
```
$ awk '{ match($0,/^ +/); p=substr($0,0,RLENGTH); gsub(" ",".",p); print p""substr($0,RLENGTH+1) }' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
(where for each line we match the longest prefix of spaces with [`match`](https://www.gnu.org/software/gawk/manual/gawk.html#index-match-regexp-in-string), extract it with [`substr`](https://www.gnu.org/software/gawk/manual/gawk.html#index-substring), replace each space with a dot via [`gsub`](https://www.gnu.org/software/gawk/manual/gawk.html#index-gsub_0028_0029-function-1), and print that modified prefix `p` followed by the remainder of the input line; the `RSTART` and `RLENGTH` variables are populated by `match()` and hold the starting position and length of the matched pattern).
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
With this as the input:
```
$ cat file
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
Try:
```
$ sed -E ':a; s/^( *) ([^ ])/\1.\2/; ta' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
### How it works:
* `:a`
This creates a label `a`.
* `s/^( *) ([^ ])/\1.\2/`
If the line begins with spaces, this replaces the last space in the leading spaces with a period.
In more detail, `^( *)` matches all leading blanks except the last and stores them in group 1. The regex `([^ ])` (which, despite what stackoverflow makes it look like, consists of a blank followed by `([^ ])`) matches a blank followed by a nonblank and stores the nonblank in group 2.
`\1.\2` replaces the matched text with group 1, followed by a period, followed by group 2.
* `ta`
If the substituted command resulted in a substitution, then branch back to label `a` and try over again.
### Compatibility:
1. The above was tested on modern GNU sed. For BSD/OSX sed, one might or might not need to use:
```
sed -E -e :a -e 's/^( *) ([^ ])/\1.\2/' -e ta file
```
On ancient GNU sed, one needs to use `-r` in place of `-E`:
```
sed -r ':a; s/^( *) ([^ ])/\1.\2/; ta' file
```
2. The above assumed that the spaces were blanks. If they are tabs, then you will have to decide what your tabstop is and make substitutions accordingly.
|
In awk. It keeps replacing the first space with a period while the space is preceded only by periods:
```
$ awk '{while(/^\.* / && sub(/ /,"."));}1' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
and here's one in perl:
```
$ perl -p -e 'while(s/(^\.*) /\1./){;}' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
There are two different ways to do this in vim.
1. With a regex:
```
:%s/^\s\+/\=repeat('.', len(submatch(0)))
```
This is fairly straightforward, but a little verbose. It uses the eval register (`\=`) to generate a string of `'.'`s the same length as the number of spaces at the beginning of each line.
2. With a norm command:
```
:%norm ^hviwr.
```
This is a much more conveniently short command, although it's a little harder to understand. It visually selects the spaces at the beginning of a line, and replaces the whole selection with dots. If there is no leading space, the command will fail on `^h` because the cursor attempts to move out of bounds.
To see how this works, try typing `^hviwr.` on a line that has leading spaces to see it happen.
|
A little lengthy, but a fun exercise nonetheless:
```
# Function to count the number of leading spaces in a string
# Basically, this counts the number of consecutive elements that satisfy being spaces
def count_leading_spaces(s):
if not s:
return 0
else:
curr_char = s[0]
if curr_char != ' ':
return 0
else:
idx = 1
curr_char = s[idx]
while curr_char == ' ':
idx += 1
try:
curr_char = s[idx]
except IndexError:
return idx
return idx
```
Finally, open up the file and do some work:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = count_leading_spaces(line)
# Impute periods for spaces
new_line = ('.'*n_leading_spaces + line.lstrip()).rstrip()
data.append(new_line)
```
Results:
```
>>> print('\n'.join(data))
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
You could also do it this way, which is much simpler:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = len(line) - len(line.lstrip())
# Impute periods for spaces
new_line = line.lstrip().rjust(len(line), '.').rstrip()
data.append(new_line)
```
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
There are two different ways to do this in vim.
1. With a regex:
```
:%s/^\s\+/\=repeat('.', len(submatch(0)))
```
This is fairly straightforward, but a little verbose. It uses the eval register (`\=`) to generate a string of `'.'`s the same length as the number of spaces at the beginning of each line.
2. With a norm command:
```
:%norm ^hviwr.
```
This is a much more conveniently short command, although it's a little harder to understand. It visually selects the spaces at the beginning of a line, and replaces the whole selection with dots. If there is no leading space, the command will fail on `^h` because the cursor attempts to move out of bounds.
To see how this works, try typing `^hviwr.` on a line that has leading spaces to see it happen.
|
Since you said **`python`**:
```
#!/usr/bin/env python
import re, sys
for line in sys.stdin:
sys.stdout.write(re.sub('^ +', lambda m: len(m.group(0)) * '.', line))
```
(for each line, we replace the longest run of prefix spaces `'^ +'` with an equally long string of dots, `len(m.group(0)) * '.'`).
With the end result:
```
$ ./dottify.py <file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
---
Since you said **`awk`**:
```
$ awk '{ match($0,/^ +/); p=substr($0,0,RLENGTH); gsub(" ",".",p); print p""substr($0,RLENGTH+1) }' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
(where for each line we match the longest prefix of spaces with [`match`](https://www.gnu.org/software/gawk/manual/gawk.html#index-match-regexp-in-string), extract it with [`substr`](https://www.gnu.org/software/gawk/manual/gawk.html#index-substring), replace each space with a dot via [`gsub`](https://www.gnu.org/software/gawk/manual/gawk.html#index-gsub_0028_0029-function-1), and print that modified prefix `p` followed by the remainder of the input line; the `RSTART` and `RLENGTH` variables are populated by `match()` and hold the starting position and length of the matched pattern).
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
There are two different ways to do this in vim.
1. With a regex:
```
:%s/^\s\+/\=repeat('.', len(submatch(0)))
```
This is fairly straightforward, but a little verbose. It uses the eval register (`\=`) to generate a string of `'.'`s the same length as the number of spaces at the beginning of each line.
2. With a norm command:
```
:%norm ^hviwr.
```
This is a much more conveniently short command, although it's a little harder to understand. It visually selects the spaces at the beginning of a line, and replaces the whole selection with dots. If there is no leading space, the command will fail on `^h` because the cursor attempts to move out of bounds.
To see how this works, try typing `^hviwr.` on a line that has leading spaces to see it happen.
|
In awk. It keeps replacing the first space with a period while the space is preceded only by periods:
```
$ awk '{while(/^\.* / && sub(/ /,"."));}1' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
and here's one in perl:
```
$ perl -p -e 'while(s/(^\.*) /\1./){;}' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
Since you said **`python`**:
```
#!/usr/bin/env python
import re, sys
for line in sys.stdin:
sys.stdout.write(re.sub('^ +', lambda m: len(m.group(0)) * '.', line))
```
(for each line, we replace the longest run of prefix spaces `'^ +'` with an equally long string of dots, `len(m.group(0)) * '.'`).
With the end result:
```
$ ./dottify.py <file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
---
Since you said **`awk`**:
```
$ awk '{ match($0,/^ +/); p=substr($0,0,RLENGTH); gsub(" ",".",p); print p""substr($0,RLENGTH+1) }' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
(where for each line we match the longest prefix of spaces with [`match`](https://www.gnu.org/software/gawk/manual/gawk.html#index-match-regexp-in-string), extract it with [`substr`](https://www.gnu.org/software/gawk/manual/gawk.html#index-substring), replace each space with a dot via [`gsub`](https://www.gnu.org/software/gawk/manual/gawk.html#index-gsub_0028_0029-function-1), and print that modified prefix `p` followed by the remainder of the input line; the `RSTART` and `RLENGTH` variables are populated by `match()` and hold the starting position and length of the matched pattern).
|
A little lengthy, but a fun exercise nonetheless:
```
# Function to count the number of leading spaces in a string
# Basically, this counts the number of consecutive elements that satisfy being spaces
def count_leading_spaces(s):
if not s:
return 0
else:
curr_char = s[0]
if curr_char != ' ':
return 0
else:
idx = 1
curr_char = s[idx]
while curr_char == ' ':
idx += 1
try:
curr_char = s[idx]
except IndexError:
return idx
return idx
```
Finally, open up the file and do some work:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = count_leading_spaces(line)
# Impute periods for spaces
new_line = ('.'*n_leading_spaces + line.lstrip()).rstrip()
data.append(new_line)
```
Results:
```
>>> print('\n'.join(data))
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
You could also do it this way, which is much simpler:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = len(line) - len(line.lstrip())
# Impute periods for spaces
new_line = line.lstrip().rjust(len(line), '.').rstrip()
data.append(new_line)
```
|
46,554,928
|
I don't care if I achieve this through vim, sed, awk, Python, etc. I tried them all but could not get it done.
For an input like this:
```
top f1 f2 f3
sub1 f1 f2 f3
sub2 f1 f2 f3
sub21 f1 f2 f3
sub3 f1 f2 f3
```
I want:
```
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
Then I want to just load this up in Excel (delimited by whitespace) and still be able to look at the hierarchy-ness of the first column!
I tried many things, but end up losing the hierarchy information
|
2017/10/03
|
[
"https://Stackoverflow.com/questions/46554928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2977601/"
] |
In awk. It keeps replacing the first space with a period while the space is preceded only by periods:
```
$ awk '{while(/^\.* / && sub(/ /,"."));}1' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
and here's one in perl:
```
$ perl -p -e 'while(s/(^\.*) /\1./){;}' file
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
|
A little lengthy, but a fun exercise nonetheless:
```
# Function to count the number of leading spaces in a string
# Basically, this counts the number of consecutive elements that satisfy being spaces
def count_leading_spaces(s):
if not s:
return 0
else:
curr_char = s[0]
if curr_char != ' ':
return 0
else:
idx = 1
curr_char = s[idx]
while curr_char == ' ':
idx += 1
try:
curr_char = s[idx]
except IndexError:
return idx
return idx
```
Finally, open up the file and do some work:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = count_leading_spaces(line)
# Impute periods for spaces
new_line = ('.'*n_leading_spaces + line.lstrip()).rstrip()
data.append(new_line)
```
Results:
```
>>> print('\n'.join(data))
top f1 f2 f3
...sub1 f1 f2 f3
...sub2 f1 f2 f3
......sub21 f1 f2 f3
...sub3 f1 f2 f3
```
You could also do it this way, which is much simpler:
```
with open('file.txt', 'r') as f:
data = []
for i, line in enumerate(f):
# Don't do anything to the field names
if i == 0:
new_line = line.rstrip()
else:
n_leading_spaces = len(line) - len(line.lstrip())
# Impute periods for spaces
new_line = line.lstrip().rjust(len(line), '.').rstrip()
data.append(new_line)
```
|
73,700,589
|
I have an integer array.
```
Dim a as Variant
a = Array(1,2,3,4,1,2,3,4,5)
Dim index as Integer
index = Application.Match(4,a,0) '3
```
index is 3 here. It returns the index of first occurrence of 4. But I want the last occurrence index of 4.
In python, there is rindex which returns the reverse index. I am new to vba, any similar api available in VBA?
Note: I am using Microsoft office Professional 2019
Thanks in advance.
|
2022/09/13
|
[
"https://Stackoverflow.com/questions/73700589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7006773/"
] |
[`XMATCH()`](https://support.microsoft.com/en-us/office/xmatch-function-d966da31-7a6b-4a13-a1c6-5a33ed6a0312) is available in O365 and Excel 2021. Try:
```
Sub ReverseMatch()
Dim a As Variant
Dim index As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.XMatch(4, a, 0, -1) '-1 indicate search last to first.
Debug.Print index
End Sub
```
You can also try below sub.
```
Sub ReverseMatch()
Dim a As Variant
Dim index As Integer, i As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
For i = UBound(a) To LBound(a) Step -1
If a(i) = 4 Then
Debug.Print i + 1
Exit For
End If
Next i
End Sub
```
|
Try the next way, please:
```
Sub lastOccurrenceMatch()
Dim a As Variant, index As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) '2
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
Or a version not raising an error in case of no match:
```
Sub lastOccurrenceMatchBetter()
Dim a As Variant, index As Variant
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) '2
If Not IsError(index) Then
Debug.Print index, UBound(a) + 1 - index + 1
Else
Debug.Print "No any match..."
End If
End Sub
```
**Edited**:
The next version reverses the array as it is (without the join/split sequence), using `Index`, just for the sake of playing with arrays:
```
Sub testMatchReverseArray()
Dim a As Variant, index As Integer, cnt As Long, col As String, arrRev()
a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
cnt = UBound(a) + 1
col = Split(cells(1, cnt).Address, "$")(1)
arrRev = Evaluate(cnt & "+1-column(A:" & col & ")") 'build the reversed columns array
a = Application.index(Application.Transpose(a), arrRev, 0)
Debug.Print Join(a, "|") 'just to visually see the reversed array...
index = Application.match(4, a, 0)
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
Second **Edit**:
The next solution matches the initial array against one containing only the element to be searched, then reverses the result (using `StrReverse`) and searches for the matching positions ("1"):
```
Sub MatchLastOccurrence()
Dim a(), arr(), srcVar, arrMtch(), index As Integer
a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
srcVar = 4 'the array element to identify its last position
arr = Array(srcVar)
arrMtch = Application.IfError(Application.match(a, arr, 0), 0)
Debug.Print Join(arrMtch, "|") '1 for matching positions, zero for not matching ones
index = Application.match("1", Split(StrReverse(Join(arrMtch, "|")), "|"), 0) '2
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
|
73,700,589
|
I have an integer array.
```
Dim a as Variant
a = Array(1,2,3,4,1,2,3,4,5)
Dim index as Integer
index = Application.Match(4,a,0) '3
```
index is 3 here. It returns the index of first occurrence of 4. But I want the last occurrence index of 4.
In python, there is rindex which returns the reverse index. I am new to vba, any similar api available in VBA?
Note: I am using Microsoft office Professional 2019
Thanks in advance.
|
2022/09/13
|
[
"https://Stackoverflow.com/questions/73700589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7006773/"
] |
[`XMATCH()`](https://support.microsoft.com/en-us/office/xmatch-function-d966da31-7a6b-4a13-a1c6-5a33ed6a0312) is available in O365 and Excel 2021. Try:
```
Sub ReverseMatch()
Dim a As Variant
Dim index As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.XMatch(4, a, 0, -1) '-1 indicate search last to first.
Debug.Print index
End Sub
```
You can also try below sub.
```
Sub ReverseMatch()
Dim a As Variant
Dim index As Integer, i As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
For i = UBound(a) To LBound(a) Step -1
If a(i) = 4 Then
Debug.Print i + 1
Exit For
End If
Next i
End Sub
```
|
**Alternative via `FilterXML()`**
Just in order to complete the valid solutions above, I demonstrate another approach via `FilterXML()` (available since vers. 2013+):
This method doesn't require reversing the base array; instead it filters & counts all elements before the last finding (i.e. all elements that are followed somewhere to the right by the searched value).
```
Function lastPos(arr, ByVal srch) As Long
'a) create wellformed xml string & XPath search string
Dim xml: xml = "<all><i>" & Join(arr, "</i><i>") & "</i></all>"
Dim XPath: XPath = "//i[following-sibling::i[.=" & srch & "]]"
'b) Filter xml elements before finding at last position
With Application
Dim x: x = .FilterXML(xml, XPath) ' << apply FilterXML()
If IsError(x) Then ' no finding or 1st position = srch
lastPos = IIf(arr(0) = srch, 1, .Count(x))
Else
lastPos = .Count(x) + 1 ' ordinal position number (i.e. +1)
End If
End With
End Function
```
**Example call**
```
Dim arr: arr = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
Dim srch: srch = 4
Debug.Print _
"Last Position of " & srch & ": " & _
lastPos(arr, srch) ' ~~> last Pos of 4: 8
```
|
73,700,589
|
I have an integer array.
```
Dim a as Variant
a = Array(1,2,3,4,1,2,3,4,5)
Dim index as Integer
index = Application.Match(4,a,0) '3
```
index is 3 here. It returns the index of first occurrence of 4. But I want the last occurrence index of 4.
In python, there is rindex which returns the reverse index. I am new to vba, any similar api available in VBA?
Note: I am using Microsoft office Professional 2019
Thanks in advance.
|
2022/09/13
|
[
"https://Stackoverflow.com/questions/73700589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7006773/"
] |
Try the next way, please:
```
Sub lastOccurrenceMatch()
Dim a As Variant, index As Integer
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) '2
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
Or a version not raising an error in case of no match:
```
Sub lastOccurrenceMatchBetter()
Dim a As Variant, index As Variant
a = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
index = Application.Match(CStr(4), Split(StrReverse(Join(a, "|")), "|"), 0) '2
If Not IsError(index) Then
Debug.Print index, UBound(a) + 1 - index + 1
Else
Debug.Print "No any match..."
End If
End Sub
```
**Edited**:
The next version reverses the array as it is (without the join/split sequence), using `Index`, just for the sake of playing with arrays:
```
Sub testMatchReverseArray()
Dim a As Variant, index As Integer, cnt As Long, col As String, arrRev()
a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
cnt = UBound(a) + 1
col = Split(cells(1, cnt).Address, "$")(1)
arrRev = Evaluate(cnt & "+1-column(A:" & col & ")") 'build the reversed columns array
a = Application.index(Application.Transpose(a), arrRev, 0)
Debug.Print Join(a, "|") 'just to visually see the reversed array...
index = Application.match(4, a, 0)
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
Second **Edit**:
The next solution matches the initial array against one containing only the element to be searched, then reverses the result (using `StrReverse`) and searches for the matching positions ("1"):
```
Sub MatchLastOccurrence()
Dim a(), arr(), srcVar, arrMtch(), index As Integer
a = Array(1, 2, 3, 4, 1, 12, 3, 4, 15)
srcVar = 4 'the array element to identify its last position
arr = Array(srcVar)
arrMtch = Application.IfError(Application.match(a, arr, 0), 0)
Debug.Print Join(arrMtch, "|") '1 for matching positions, zero for not matching ones
index = Application.match("1", Split(StrReverse(Join(arrMtch, "|")), "|"), 0) '2
Debug.Print index, UBound(a) + 1 - index + 1
End Sub
```
|
**Alternative via `FilterXML()`**
Just in order to complete the valid solutions above, I demonstrate another approach via `FilterXML()` (available since vers. 2013+):
This method doesn't require reversing the base array; instead it filters & counts all elements before the last finding (i.e. all elements that are followed somewhere to the right by the searched value).
```
Function lastPos(arr, ByVal srch) As Long
'a) create wellformed xml string & XPath search string
Dim xml: xml = "<all><i>" & Join(arr, "</i><i>") & "</i></all>"
Dim XPath: XPath = "//i[following-sibling::i[.=" & srch & "]]"
'b) Filter xml elements before finding at last position
With Application
Dim x: x = .FilterXML(xml, XPath) ' << apply FilterXML()
If IsError(x) Then ' no finding or 1st position = srch
lastPos = IIf(arr(0) = srch, 1, .Count(x))
Else
lastPos = .Count(x) + 1 ' ordinal position number (i.e. +1)
End If
End With
End Function
```
**Example call**
```
Dim arr: arr = Array(1, 2, 3, 4, 1, 2, 3, 4, 5)
Dim srch: srch = 4
Debug.Print _
"Last Position of " & srch & ": " & _
lastPos(arr, srch) ' ~~> last Pos of 4: 8
```
|
30,236,277
|
I am an enthusiastic learner of OpenCV and wrote some code for video streaming with OpenCV. I want to learn the use of cv2.createTrackbar() to add some interactive functionality. I tried this function, but it's not working for me:
For streaming and resizing the frame I use this code:
```
import cv2
import sys
import scipy.misc
import scipy
cap = cv2.VideoCapture(sys.argv[1])
new_size = 0.7 # value range(0,1) can be used for resizing the frame size
while(1):
ret, frame = cap.read()
frame = scipy.misc.imresize(frame, new_size)
cv2.imshow("t",frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
```
Then I transformed the above code like this to add the trackbar functionality for resizing the frame.
```
import cv2
import sys
import scipy.misc
import scipy
def nothing(x):
pass
cv2.createTrackbar('t','frame',0,1,nothing)
cap = cv2.VideoCapture(sys.argv[1])
while(1):
ret, frame = cap.read()
j = cv2.getTrackbarPos('t','frame')
frame = scipy.misc.imresize(frame, j)
cv2.imshow("t",frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
```
but this code is not working and ends up with the error given below:
```
me@ubuntu:~/Desktop/final_video_soft$ python GUI_STREAM.py p.3gp
Traceback (most recent call last):
File "GUI_STREAM.py", line 20, in <module>
frame = scipy.misc.imresize(frame, j)
File "/usr/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 365, in imresize
imnew = im.resize(size, resample=func[interp])
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1305, in resize
im = self.im.resize(size, resample)
TypeError: must be 2-item sequence, not float
```
|
2015/05/14
|
[
"https://Stackoverflow.com/questions/30236277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4706745/"
] |
Your code will surely fail. There are too many issues indicating you haven't read the documentation. Even the first `new_size` version will **fail for sure**.
`cap = cv2.VideoCapture(sys.argv[1])` this is wrong. Because it requires `int` instead of `str`. You have to do `cap = cv2.VideoCapture(int(sys.argv[1]))`
Another obvious error is the conflicting window name you use in the following code:
```
cv2.createTrackbar('t','frame',0,1,nothing)
cv2.imshow("t",frame)
```
`imshow` used a **window name** 't'. But the 't' is actually the **trackbar name**.
Moreover, if you read the documentation you will see that `createTrackbar` only accepts `int` for `value` and `count`. Thus you either have `j = 0` or `j = 1` in your code. The `value` argument is the initial value of the trackbar, so in your case it is always 0, which will cause an error.
`getTrackbarPos` should be called from an event-triggered callback, not in the main loop. If you do it the way you posted, it might still run, but it will not respond to every sliding event. However, it does not cause visible trouble since the video-capture loop is quite fast.
After fixing all those errors, it ends up like this:
```
import sys
import cv2
import scipy.misc

def nothing(x):
    pass

scale = 700        # initial trackbar position
max_scale = 1000   # slider maximum; scale/max_scale gives a 0..1 resize factor
_scale = float(scale)/max_scale
cv2.namedWindow('frame', cv2.WINDOW_AUTOSIZE)
cv2.createTrackbar('t','frame', scale, max_scale, nothing)
cap = cv2.VideoCapture(int(sys.argv[1]))
while(1):
ret, frame = cap.read()
if not ret:
break
scale = cv2.getTrackbarPos('t','frame')
if scale > 1:
_scale = float(scale)/max_scale
print "scale = ", _scale
frame = scipy.misc.imresize(frame, _scale)
cv2.imshow("frame",frame)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
```
|
Sorry, I am only familiar with C++; here is the C++ code, hope it helps.
The code below adds a contrast adjustment to the live video stream from the camera using the createTrackbar function.
```
#include "opencv2\highgui.hpp"
#include "opencv2\core.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
string owin = "Live Feed Original";
string mwin = "Modified Live stream";
int trackval = 50;
Mat oframe;
Mat inframe;
VideoCapture video(0);
if (!video.isOpened())
{
cout << "The Camera cannot be accessed" << endl;
return -1;
}
namedWindow(owin);
moveWindow(owin, 0, 0);
namedWindow(mwin);
createTrackbar("Contrast", mwin, &trackval, 100);
while (1)
{
video >> inframe;
imshow(owin, inframe);
inframe.convertTo(oframe, -1, trackval / 50.0);
imshow(mwin, oframe);
if (waitKey(33) == 27)
break;
}
}
```
|
46,248,019
|
I am using plotly in a Jupyter notebook (python v3.6) and trying to get the example code for mapbox to work (see: <https://plot.ly/python/scattermapbox/>).
When I execute the cell, I don't get any error, but I see no output either, just a blank output cell. When I mouse over the output cell area, I can see there's a frame there, and I even see the plotly icon tray at the top right, but there doesn't seem to be anything else. When I hit the 'edit chart' button at the bottom right, I get taken to the plotly page with a blank canvas.
As far as I can tell, my mapbox api token is valid.
My code looks identical to the example linked above:
```
data = Data([
Scattermapbox(
lat=['45.5017'],
lon=['-73.5673'],
mode='markers',
marker=Marker(
size=14
),
text=['Montreal'],
)
])
layout = Layout(
autosize=True,
hovermode='closest',
mapbox=dict(
accesstoken=mapbox_token,
bearing=0,
center=dict(
lat=45,
lon=-73
),
pitch=0,
zoom=1,
style='light'
),
)
fig = dict(data=data, layout=layout)
py.iplot(fig)
```
What am I doing wrong?
Thx!
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46248019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1404267/"
] |
D'oh! Was using the wrong mapbox token. I should have used the public token, but instead used a private one. The only error was a js one in the console.
Thanks user1561393!
```
jupyter labextension install plotlywidget
```
|
What is your `mapbox_token`? My guess is you haven't signed up for Mapbox to get an API token (which allows you to download their excellent tile maps for mapbox-gl). The map's not going to show without this token.
|
29,974,933
|
I am using Python's multiprocessing Process. Why can't I start or restart a process after it exits and I do a join? The process is gone, but in the instantiated class `_popen` is not set to None after the process dies and I do a join. If I try to start again, it tells me I can't start a process twice.
|
2015/04/30
|
[
"https://Stackoverflow.com/questions/29974933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/686334/"
] |
From the Python multiprocessing documentation.
>
> start()
>
>
> Start the process’s activity.
>
>
> This must be called at most once per process object. It arranges for the object’s run() method to be invoked in a separate process.
>
>
>
A Process object can be run only once. If you need to re-run the same routine (the target parameter) you must instantiate a new Process object.
This is due to the fact that Process objects are encapsulating unique OS instances.
Tip: do not play with the internals of Python Thread and Process objects, you might get seriously hurt.
The `_popen` attribute you're seeing is a sentinel the Process object uses to understand when the child OS process is gone. It's literally a pipe, and it's used to block the join() call until the process terminates.
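A minimal sketch of that pattern, creating a fresh `Process` object for each run of the same target:
```
import multiprocessing as mp

def work():
    print("running in", mp.current_process().name)

if __name__ == "__main__":
    p = mp.Process(target=work)
    p.start()
    p.join()          # p cannot be start()ed again after this

    p = mp.Process(target=work)   # new object, same target
    p.start()
    p.join()
```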
|
From your question we can only guess what you're talking about. Are you using `subprocess`? How do you start your process? By invoking `call()` or `Popen()`?
In case I guessed right:
Just keep the `args` list from your `subprocess` call and restart your process by calling that command again. The instance of your subprocess (e.g. a `Popen` instance) represents a running process (with **one** PID) rather than an abstract application which can be run multiple times.
e.g.:
```
args = ['python', 'myscript.py']
p = Popen(args)
p.wait()
p = Popen(args)
...
```
|
36,996,629
|
I am doing `pip install setuptools --upgrade` but getting error below
```
Installing collected packages: setuptools
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_set.py", line 726, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_install.py", line 746, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-8.1.1-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
```
What am I missing? I tried sudo pip install also but didn't help.
|
2016/05/03
|
[
"https://Stackoverflow.com/questions/36996629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1450312/"
] |
Try to upgrade manually:
```
pip uninstall setuptools
pip install setuptools
```
If it doesn't work, try:
`pip install --upgrade setuptools --user python`
As you can see, the operation didn't have the appropriate privileges:
`[Errno 1] Operation not permitted: '/tmp/pip-rV15My-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'")`
|
I ran into a similar problem but with a different error, and different resolution. (My search for a solution led me here, so I'm posting my details here in case it helps.)
TL;DR: if upgrading `setuptools` in a Python virtual environment appears to work, but reports `OSError: [Errno 2] No such file or directory`, try deactivating and reactivating the virtual environment before continuing, e.g.:
```
source myenv/bin/activate
pip install --upgrade setuptools
deactivate
source myenv/bin/activate
:
```
Long version
------------
I'm in the process of upgrading Python version and libraries for a long-running project. I use a python virtual environment for development and testing. Host system is MacOS 10.11.5 (El Capitan). I've discovered that `pip` needs to be updated after the virtual environment is created (apparently due to some recent `pypa` TLS changes as of 2018-04), so my initial setup looks like this (having installed the latest version of the Python 2.7 series using the downloadable MacOS installer):
```
virtualenv myenv -p python2.7
source myenv/bin/activate
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
```
So far, so good :) My problem comes when I try to run:
```
pip install --upgrade setuptools
```
The installation appears to work OK, but then I get an error message, thus:
```
Collecting setuptools
Using cached setuptools-39.0.1-py2.py3-none-any.whl
Installing collected packages: setuptools
Found existing installation: setuptools 0.6rc11
Uninstalling setuptools-0.6rc11:
Successfully uninstalled setuptools-0.6rc11
Successfully installed setuptools-39.0.1
Traceback (most recent call last):
File "/Users/graham/workspace/github/gklyne/annalist/anenv/bin/pip", line 11, in <module>
sys.exit(main())
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/__init__.py", line 248, in main
return command.main(cmd_args)
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/basecommand.py", line 252, in main
pip_version_check(session)
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/utils/outdated.py", line 102, in pip_version_check
installed_version = get_installed_version("pip")
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/utils/__init__.py", line 838, in get_installed_version
working_set = pkg_resources.WorkingSet()
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 644, in __init__
self.add_entry(entry)
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 700, in add_entry
for dist in find_distributions(entry, True):
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1949, in find_eggs_in_zip
if metadata.has_metadata('PKG-INFO'):
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1463, in has_metadata
return self.egg_info and self._has(self._fn(self.egg_info, name))
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1823, in _has
return zip_path in self.zipinfo or zip_path in self._index()
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1703, in zipinfo
return self._zip_manifests.load(self.loader.archive)
File "/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/pip/_vendor/pkg_resources/__init__.py", line 1643, in load
mtime = os.stat(path).st_mtime
OSError: [Errno 2] No such file or directory: '/Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg'
```
Note that the installation appears to complete successfully, followed by the `OSError` exception, which appears to be an attempt to access the old setuptools. Despite the error message, `pip` seems to work just fine for installing new packages, but my local `setup.py` fails to find its dependencies; e.g.:
```
$ python setup.py install
running install
:
(lots of build messages)
:
Installed /Users/graham/workspace/github/gklyne/annalist/anenv/lib/python2.7/site-packages/oauth2client-1.2-py2.7.egg
Processing dependencies for oauth2client==1.2
Searching for httplib2>=0.8
Reading https://pypi.python.org/simple/httplib2/
Couldn't find index page for 'httplib2' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.python.org/simple/
No local packages or working download links found for httplib2>=0.8
error: Could not find suitable distribution for Requirement.parse('httplib2>=0.8')
```
But if I use pip to install the same dependency ('httplib2>=0.8'), it works fine, and I can re-run `setup.py` without any problems.
At this point I'm guessing that the difference between running `setup.py` and `pip` is that the virtual environment is somehow hanging onto some old `setuptools` files, but `pip` comes with its own copy. So after upgrading setuptools and getting the `OSError: [Errno 2] No such file or directory` message, I deactivate and reactivate the virtual environment, thus:
```
deactivate
source myenv/bin/activate
```
and, *voilà*, `setup.py` seems to work fine!
|
12,683,745
|
Normally, one shuts down Apache Tomcat by running its `shutdown.sh` script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running `shutdown.sh` gracefully shuts down *some* parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running.
I'm trying to write a simple Python script that:
1. Calls `shutdown.sh`
2. Runs `ps -aef | grep tomcat` to find any process with Tomcat referenced
3. If applicable, kills the process with `kill -9 <PID>`
Here's what I've got so far (as a prototype - I'm brand new to Python BTW):
```
#!/usr/bin/python
# Imports
import sys
import subprocess
# Load from imported module.
if __init__ == "__main__":
main()
# Main entry point.
def main():
# Shutdown Tomcat
shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
subprocess.call([shutdownCmd], shell=true)
# Check for PID
grepCmd = "ps -aef | grep tomcat"
grepResults = subprocess.call([grepCmd], shell=true)
if(grepResult.length > 1):
# Get PID and kill it.
pid = ???
killPidCmd = "kill -9 $pid"
subprocess.call([killPidCmd], shell=true)
# Exit.
sys.exit()
```
I'm struggling with the middle part - with obtaining the `grep` results, checking to see if their size is greater than 1 (since `grep` always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the `killPidCmd`. Thanks in advance!
|
2012/10/02
|
[
"https://Stackoverflow.com/questions/12683745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/892029/"
] |
you need to replace `grepResults = subprocess.call([grepCmd], shell=true)` with `grepResults = subprocess.check_output([grepCmd], shell=True)` if you want to save the results of the command in grepResults. Then you can use split to convert that to an array, and the second element of the array will be the pid: `pid = int(grepResults.split()[1])`
That will only kill the first process, however. It doesn't kill all processes if more than one are open. In order to do that you would have to write:
```
grepResults = subprocess.check_output([grepCmd], shell=True).split()
for i in range(1, len(grepResults), 9):
    pid = grepResults[i]
    killPidCmd = "kill -9 " + pid
    subprocess.call([killPidCmd], shell=True)
```
|
You can add "c" to ps so that only the command and not the arguments are printed. This would stop grep from matching itself.
I'm not sure if tomcat shows up as a java application though, so this may not work.
PS: Got this from googling: "grep includes self" and the first hit had that solution.
EDIT: My bad! OK something like this then?
```
p = subprocess.Popen(["ps caux | grep tomcat"], shell=True,stdout=subprocess.PIPE)
out, err = p.communicate()
out.split()[1] #<-- checkout the contents of this variable, it'll have your pid!
```
Basically "out" will have the program output as a string that you can read/manipulate
|
12,683,745
|
Normally, one shuts down Apache Tomcat by running its `shutdown.sh` script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running `shutdown.sh` gracefully shuts down *some* parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running.
I'm trying to write a simple Python script that:
1. Calls `shutdown.sh`
2. Runs `ps -aef | grep tomcat` to find any process with Tomcat referenced
3. If applicable, kills the process with `kill -9 <PID>`
Here's what I've got so far (as a prototype - I'm brand new to Python BTW):
```
#!/usr/bin/python
# Imports
import sys
import subprocess
# Load from imported module.
if __init__ == "__main__":
main()
# Main entry point.
def main():
# Shutdown Tomcat
shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
subprocess.call([shutdownCmd], shell=true)
# Check for PID
grepCmd = "ps -aef | grep tomcat"
grepResults = subprocess.call([grepCmd], shell=true)
if(grepResult.length > 1):
# Get PID and kill it.
pid = ???
killPidCmd = "kill -9 $pid"
subprocess.call([killPidCmd], shell=true)
# Exit.
sys.exit()
```
I'm struggling with the middle part - with obtaining the `grep` results, checking to see if their size is greater than 1 (since `grep` always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the `killPidCmd`. Thanks in advance!
|
2012/10/02
|
[
"https://Stackoverflow.com/questions/12683745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/892029/"
] |
You can add "c" to ps so that only the command and not the arguments are printed. This would stop grep from matching itself.
I'm not sure if tomcat shows up as a java application though, so this may not work.
PS: Got this from googling: "grep includes self" and the first hit had that solution.
EDIT: My bad! OK something like this then?
```
p = subprocess.Popen(["ps caux | grep tomcat"], shell=True,stdout=subprocess.PIPE)
out, err = p.communicate()
out.split()[1] #<-- checkout the contents of this variable, it'll have your pid!
```
Basically "out" will have the program output as a string that you can read/manipulate
|
Creating child processes to run `ps` and string match the output with `grep` is not necessary. Python has great string handling 'baked in' and Linux exposes all the needed info in /proc. The procfs mount is where the command line utilities get this info. Might as well go directly to the source.
```
import os
SIGTERM = 15
def pidof(image):
matching_proc_images = []
for pid in [dir for dir in os.listdir('/proc') if dir.isdigit()]:
lines = open('/proc/%s/status' % pid, 'r').readlines()
for line in lines:
if line.startswith('Name:'):
name = line.split(':', 1)[1].strip()
if name == image:
matching_proc_images.append(int(pid))
return matching_proc_images
for pid in pidof('tomcat'): os.kill(pid, SIGTERM)
```
|
12,683,745
|
Normally, one shuts down Apache Tomcat by running its `shutdown.sh` script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running `shutdown.sh` gracefully shuts down *some* parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running.
I'm trying to write a simple Python script that:
1. Calls `shutdown.sh`
2. Runs `ps -aef | grep tomcat` to find any process with Tomcat referenced
3. If applicable, kills the process with `kill -9 <PID>`
Here's what I've got so far (as a prototype - I'm brand new to Python BTW):
```
#!/usr/bin/python
# Imports
import sys
import subprocess
# Load from imported module.
if __init__ == "__main__":
main()
# Main entry point.
def main():
# Shutdown Tomcat
shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
subprocess.call([shutdownCmd], shell=true)
# Check for PID
grepCmd = "ps -aef | grep tomcat"
grepResults = subprocess.call([grepCmd], shell=true)
if(grepResult.length > 1):
# Get PID and kill it.
pid = ???
killPidCmd = "kill -9 $pid"
subprocess.call([killPidCmd], shell=true)
# Exit.
sys.exit()
```
I'm struggling with the middle part - with obtaining the `grep` results, checking to see if their size is greater than 1 (since `grep` always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the `killPidCmd`. Thanks in advance!
|
2012/10/02
|
[
"https://Stackoverflow.com/questions/12683745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/892029/"
] |
you need to replace `grepResults = subprocess.call([grepCmd], shell=true)` with `grepResults = subprocess.check_output([grepCmd], shell=True)` if you want to save the results of the command in grepResults. Then you can use split to convert that to an array, and the second element of the array will be the pid: `pid = int(grepResults.split()[1])`
That will only kill the first process, however. It doesn't kill all processes if more than one are open. In order to do that you would have to write:
```
grepResults = subprocess.check_output([grepCmd], shell=True).split()
for i in range(1, len(grepResults), 9):
    pid = grepResults[i]
    killPidCmd = "kill -9 " + pid
    subprocess.call([killPidCmd], shell=True)
```
|
Creating child processes to run `ps` and string match the output with `grep` is not necessary. Python has great string handling 'baked in' and Linux exposes all the needed info in /proc. The procfs mount is where the command line utilities get this info. Might as well go directly to the source.
```
import os
SIGTERM = 15
def pidof(image):
matching_proc_images = []
for pid in [dir for dir in os.listdir('/proc') if dir.isdigit()]:
lines = open('/proc/%s/status' % pid, 'r').readlines()
for line in lines:
if line.startswith('Name:'):
name = line.split(':', 1)[1].strip()
if name == image:
matching_proc_images.append(int(pid))
return matching_proc_images
for pid in pidof('tomcat'): os.kill(pid, SIGTERM)
```
|
22,869,920
|
I am trying to insert raw JSON strings into a sqlite database using the sqlite3 module in python.
When I do the following:
```
rows = [["a", "<json value>"]....["n", "<json_value>"]]
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""", rows)
```
I get the following error:
>
> sqlite3.ProgrammingError: Incorrect number of bindings supplied. The
> current statement uses 2, and there are 48 supplied.
>
>
>
How can I insert the raw json into a table? I assume it's the commas in the json string.
How can I get around this?
|
2014/04/04
|
[
"https://Stackoverflow.com/questions/22869920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1087908/"
] |
Your input is interpreted as a list of characters (that's where the '48 supplied' is coming from - 48 is the length of the `<json value>` string).
You will be able to pass your input in as a string if you wrap it in square brackets like so
```
["<json value>"]
```
The whole line would then look like
```
rows = [["a", ["<json value>"]]....["n", ["<json value>"]]]
```
|
It's kind of a long shot... but perhaps you could quote the JSON values to ensure the parsing works as desired:
```
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, '?')""", rows)
```
EDIT: Alternatively... this might force the json into a string in the insertion?
```
rows = [(uid, '"{}"'.format(json_val)) for uid, json_val in rows]
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""", rows)
```
|
22,869,920
|
I am trying to insert raw JSON strings into a sqlite database using the sqlite3 module in python.
When I do the following:
```
rows = [["a", "<json value>"]....["n", "<json_value>"]]
cursor.executemany("""INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""", rows)
```
I get the following error:
>
> sqlite3.ProgrammingError: Incorrect number of bindings supplied. The
> current statement uses 2, and there are 48 supplied.
>
>
>
How can I insert the raw json into a table? I assume it's the commas in the json string.
How can I get around this?
|
2014/04/04
|
[
"https://Stackoverflow.com/questions/22869920",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1087908/"
] |
Your input is interpreted as a list of characters (that's where the '48 supplied' is coming from - 48 is the length of the `<json value>` string).
You will be able to pass your input in as a string if you wrap it in square brackets like so
```
["<json value>"]
```
The whole line would then look like
```
rows = [["a", ["<json value>"]]....["n", ["<json value>"]]]
```
|
Second argument passed to [`executemany()`](https://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.executemany) has to be a list of tuples, not a list of lists:
```
[tuple(l) for l in rows]
```
From [`sqlite3`](https://docs.python.org/2/library/sqlite3.html) module documentation:
>
> Put `?` as a placeholder wherever you want to use a value, and then
> provide a tuple of values as the second argument to the cursor’s
> [`execute()`](https://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.execute) method.
>
>
>
The same applies to [`executemany()`](https://docs.python.org/2/library/sqlite3.html#sqlite3.Cursor.executemany).
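For illustration, here is a minimal self-contained sketch of that conversion (the table layout follows the question; the UIDs and JSON strings are made-up placeholders):
```
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE FEATURES (UID TEXT PRIMARY KEY, JSON TEXT)")

rows = [["a", '{"k": 1}'], ["n", '{"k": 2}']]  # list of lists, as in the question
cursor.executemany(
    """INSERT OR IGNORE INTO FEATURES(UID, JSON) VALUES(?, ?)""",
    [tuple(l) for l in rows],  # each parameter set passed as a tuple
)
conn.commit()
print(cursor.execute("SELECT * FROM FEATURES").fetchall())
```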
|
34,227,066
|
Using python I can easily increase the current process's niceness:
```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```
However, decreasing a process's niceness does not seem to be allowed:
```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```
What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
|
2015/12/11
|
[
"https://Stackoverflow.com/questions/34227066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448640/"
] |
Linux, by default, doesn't allow unprivileged users to decrease the nice value (i.e. increase the priority) of their processes, so that one user doesn't create a high-priority process to starve out other users. Python is simply forwarding the error the OS gives you as an exception.
The root user can increase the priority of processes, but running as root has other consequences.
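As a small illustrative sketch (not part of the original answer), you can attempt the change and fall back gracefully when the OS refuses it:
```
import os

def renice(increment):
    # Try to change niceness; report and keep the current value if not permitted.
    try:
        return os.nice(increment)
    except OSError as exc:  # EPERM when an unprivileged process lowers its niceness
        print("could not renice by {}: {}".format(increment, exc))
        return os.nice(0)   # adding 0 just returns the current niceness

print(renice(3))    # raising niceness is always allowed
print(renice(-1))   # lowering it needs root / CAP_SYS_NICE, so this warns unless privileged
```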
|
This is not a restriction by Python or the `os.nice` interface. It is described in `man 2 nice` that only the superuser may decrease the niceness of a process:
>
> nice() adds inc to the nice value for the calling process. (A higher
> nice value means a low priority.) Only the superuser may specify a
> negative increment, or priority increase. The range for nice values is
> described in getpriority(2).
>
>
>
|
34,227,066
|
Using python I can easily increase the current process's niceness:
```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```
However, decreasing a process's niceness does not seem to be allowed:
```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```
What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
|
2015/12/11
|
[
"https://Stackoverflow.com/questions/34227066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448640/"
] |
Linux, by default, doesn't allow unprivileged users to decrease the nice value (i.e. increase the priority) of their processes, so that one user doesn't create a high-priority process to starve out other users. Python is simply forwarding the error the OS gives you as an exception.
The root user can increase the priority of processes, but running as root has other consequences.
|
I had the same error, `[Errno 1] Operation not permitted`.
I don't want to start my script with sudo, so I came up with the following workaround:
```
def decrease_nice():
pid = os.getpid()
os.system("sudo renice -n -19 -p " + str(pid))
def normal_nice():
pid = os.getpid()
os.system("sudo renice -n 0 -p " + str(pid))
```
|
34,227,066
|
Using python I can easily increase the current process's niceness:
```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```
However, decreasing a process's niceness does not seem to be allowed:
```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```
What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
|
2015/12/11
|
[
"https://Stackoverflow.com/questions/34227066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448640/"
] |
Linux, by default, doesn't allow unprivileged users to decrease the nice value (i.e. increase the priority) of their processes, so that one user doesn't create a high-priority process to starve out other users. Python is simply forwarding the error the OS gives you as an exception.
The root user can increase the priority of processes, but running as root has other consequences.
|
You can't decrease a nice value below 0 without sudo, but there is a way to "undo" the nice value applied earlier and get around the "ratchet mechanism"
The workaround is to use the threading module. In the example below, I start a function called `run` in its own thread and it promptly sets its own nice value to 5. When it finishes, the main thread continues right along doing nothing very quickly at a nice value of 0. You can verify this by having the top command show threads with `top -H`
```
import os, threading
def run():
os.nice(5)
for x in range(500000000):
x = x
print("Done, going back to Nice 0")
thread = threading.Thread(target=run)
thread.start()
thread.join()
while True:
x=1
```
|
34,227,066
|
Using python I can easily increase the current process's niceness:
```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```
However, decreasing a process's niceness does not seem to be allowed:
```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```
What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
|
2015/12/11
|
[
"https://Stackoverflow.com/questions/34227066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448640/"
] |
This is not a restriction by Python or the `os.nice` interface. It is described in `man 2 nice` that only the superuser may decrease the niceness of a process:
>
> nice() adds inc to the nice value for the calling process. (A higher
> nice value means a low priority.) Only the superuser may specify a
> negative increment, or priority increase. The range for nice values is
> described in getpriority(2).
>
>
>
|
I had the same error, `[Errno 1] Operation not permitted`.
I don't want to start my script with sudo, so I came up with the following workaround:
```
def decrease_nice():
pid = os.getpid()
os.system("sudo renice -n -19 -p " + str(pid))
def normal_nice():
pid = os.getpid()
os.system("sudo renice -n 0 -p " + str(pid))
```
|
34,227,066
|
Using python I can easily increase the current process's niceness:
```
>>> import os
>>> import psutil
>>> # Use os to increase by 3
>>> os.nice(3)
3
>>> # Use psutil to set to 10
>>> psutil.Process(os.getpid()).nice(10)
>>> psutil.Process(os.getpid()).nice()
10
```
However, decreasing a process's niceness does not seem to be allowed:
```
>>> os.nice(-1)
OSError: [Errno 1] Operation not permitted
>>> psutil.Process(os.getpid()).nice(5)
psutil.AccessDenied: psutil.AccessDenied (pid=14955)
```
What is the correct way to do this? And is the ratchet mechanism a bug or a feature?
|
2015/12/11
|
[
"https://Stackoverflow.com/questions/34227066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448640/"
] |
This is not a restriction by Python or the `os.nice` interface. It is described in `man 2 nice` that only the superuser may decrease the niceness of a process:
>
> nice() adds inc to the nice value for the calling process. (A higher
> nice value means a low priority.) Only the superuser may specify a
> negative increment, or priority increase. The range for nice values is
> described in getpriority(2).
>
>
>
|
You can't decrease a nice value below 0 without sudo, but there is a way to "undo" the nice value applied earlier and get around the "ratchet mechanism"
The workaround is to use the threading module. In the example below, I start a function called `run` in its own thread and it promptly sets its own nice value to 5. When it finishes, the main thread continues right along doing nothing very quickly at a nice value of 0. You can verify this by having the top command show threads with `top -H`
```
import os, threading
def run():
os.nice(5)
for x in range(500000000):
x = x
print("Done, going back to Nice 0")
thread = threading.Thread(target=run)
thread.start()
thread.join()
while True:
x=1
```
|
41,455,463
|
For a list of daily maximum temperature values from 5 to 27 degrees celsius, I want to calculate the corresponding maximum ozone concentration, from the following pandas DataFrame:
[](https://i.stack.imgur.com/iGW9y.png)
I can do this by using the following code, by changing the 5 then 6, 7 etc.
```
df_c=df_b[df_b['Tmax']==5]
df_c.O3max.max()
```
Then I have to copy and paste the output values into an excel spreadsheet. I'm sure there must be a much more pythonic way of doing this, such as by using a list comprehension. Ideally I would like to generate a list of values from the column 03max. Please give me some suggestions.
|
2017/01/04
|
[
"https://Stackoverflow.com/questions/41455463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7159945/"
] |
This happens because you are using Python 2.x, where `/` performs integer (truncating) division by default. You have a couple of options:
option 1: enable true division via the `__future__` module
```
from __future__ import division
```
option 2: make either of the factors a float, e.g. by adding `.0` to the literal (a decimal point) or by multiplying by `1.0`
```
print 'The diameter of {planet} is {measure:.2f}'.format(planet="Earth", measure=10/3.0)
```
|
You could also just do:
```
print 'The diameter of {} is {}'.format("Earth",10/3)
```
|
41,455,463
|
For a list of daily maximum temperature values from 5 to 27 degrees celsius, I want to calculate the corresponding maximum ozone concentration, from the following pandas DataFrame:
[](https://i.stack.imgur.com/iGW9y.png)
I can do this by using the following code, by changing the 5 then 6, 7 etc.
```
df_c=df_b[df_b['Tmax']==5]
df_c.O3max.max()
```
Then I have to copy and paste the output values into an excel spreadsheet. I'm sure there must be a much more pythonic way of doing this, such as by using a list comprehension. Ideally I would like to generate a list of values from the column 03max. Please give me some suggestions.
|
2017/01/04
|
[
"https://Stackoverflow.com/questions/41455463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7159945/"
] |
You can cast to float by doing `measure = 10 / float(3)`. If either the numerator or the denominator is a float, then the result will be as well.
In Python 3.x, the single slash (`/`) always means true (non-truncating) division. (The `//` operator is used for truncating division.) In Python 2.x (2.2 and above), you can get the same behaviour by putting `from __future__ import division` at the top of the module.
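As a quick, hedged illustration (runnable under both Python 2 and 3, since the `__future__` import sits at the top of the file):
```
from __future__ import division  # harmless on Python 3, enables true division on Python 2

print(10 / 3)         # ~3.333..., would be 3 in plain Python 2
print(10 / float(3))  # ~3.333..., works even without the __future__ import
print(10 // 3)        # 3, truncating division in both versions
```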
|
You could also just do:
```
print 'The diameter of {} is {}'.format("Earth",10/3)
```
|
13,639,464
|
I would like a javascript function that mimics the python .format() function that works like
```
.format(*args, **kwargs)
```
A previous question gives a possible (but not complete) solution for '.format(\*args)
[JavaScript equivalent to printf/string.format](https://stackoverflow.com/questions/610406/javascript-equivalent-to-printf-string-format)
I would like to be able to do
```
"hello {} and {}".format("you", "bob"
==> hello you and bob
"hello {0} and {1}".format("you", "bob")
==> hello you and bob
"hello {0} and {1} and {a}".format("you", "bob",a="mary")
==> hello you and bob and mary
"hello {0} and {1} and {a} and {2}".format("you", "bob","jill",a="mary")
==> hello you and bob and mary and jill
```
I realize that's a tall order, but maybe somewhere out there is a complete (or at least partial) solution that includes keyword arguments as well.
Oh, and I hear AJAX and JQuery possibly have methods for this, but I would like to be able to do it without all that overhead.
In particular, I would like to be able to use it with a script for a google doc.
Thanks
|
2012/11/30
|
[
"https://Stackoverflow.com/questions/13639464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/188963/"
] |
UPDATE: If you're using ES6, template strings work very similarly to `String.format`: <https://developers.google.com/web/updates/2015/01/ES6-Template-Strings>
If not, the below works for all the cases above, with a very similar syntax to python's `String.format` method. Test cases below.
```js
String.prototype.format = function() {
var args = arguments;
this.unkeyed_index = 0;
return this.replace(/\{(\w*)\}/g, function(match, key) {
if (key === '') {
key = this.unkeyed_index;
this.unkeyed_index++
}
if (key == +key) {
return args[key] !== 'undefined'
? args[key]
: match;
} else {
for (var i = 0; i < args.length; i++) {
if (typeof args[i] === 'object' && typeof args[i][key] !== 'undefined') {
return args[i][key];
}
}
return match;
}
}.bind(this));
};
// Run some tests
$('#tests')
.append(
"hello {} and {}<br />".format("you", "bob")
)
.append(
"hello {0} and {1}<br />".format("you", "bob")
)
.append(
"hello {0} and {1} and {a}<br />".format("you", "bob", {a:"mary"})
)
.append(
"hello {0} and {1} and {a} and {2}<br />".format("you", "bob", "jill", {a:"mary"})
);
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="tests"></div>
```
|
This should work similarly to Python's `format`, but with an object of named keys (the keys could be numbers as well).
```
String.prototype.format = function( params ) {
return this.replace(
/\{(\w+)\}/g,
function( a,b ) { return params[ b ]; }
);
};
console.log( "hello {a} and {b}.".format( { a: 'foo', b: 'baz' } ) );
//^= "hello foo and baz."
```
|
7,800,213
|
I'm trying to read data from Google Fusion Tables API into Python using the *csv* library. It seems like querying the API returns CSV data, but when I try and use it with *csv.reader*, it seems to mangle the data and split it up on every character rather than just on the commas and newlines. Am I missing a step? Here's a sample I made to illustrate, using a public table:
```
#!/usr/bin/python
import csv
import urllib2, urllib
request_url = 'https://www.google.com/fusiontables/api/query'
query = 'SELECT * FROM 1140242 LIMIT 10'
url = "%s?%s" % (request_url, urllib.urlencode({'sql': query}))
serv_req = urllib2.Request(url=url)
serv_resp = urllib2.urlopen(serv_req)
reader = csv.reader(serv_resp.read())
for row in reader:
print row #prints out each character of each cell and the column headings
```
Ultimately I'd be using the *csv.DictReader* class, but the base *reader* shows the issue as well
|
2011/10/17
|
[
"https://Stackoverflow.com/questions/7800213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/201103/"
] |
`csv.reader()` takes in a file-like object.
Change
```
reader = csv.reader(serv_resp.read())
```
to
```
reader = csv.reader(serv_resp)
```
Alternatively, you could do:
```
reader = csv.DictReader(serv_resp)
```
|
It's not the CSV module that's causing the problem. Take a look at the output from `serv_resp.read()`. Try using `serv_resp.readlines()` instead.
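To see the difference concretely, a tiny made-up example: `csv.reader` iterates over whatever it is given, so a single string yields one character per "row", while a sequence of lines (or the response object itself) yields proper rows.
```
import csv

data = "a,b,c\n1,2,3\n"
print(next(csv.reader(data)))               # ['a']  -- iterates character by character
print(list(csv.reader(data.splitlines())))  # [['a', 'b', 'c'], ['1', '2', '3']]
```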
|
68,043,856
|
I wrote python code to check how many characters need to be deleted from two strings for them to become anagrams of each other.
This is the problem statement: "Given two strings, a and b, that may or may not be of the same length, determine the minimum number of character deletions required to make a and b anagrams. Any characters can be deleted from either of the strings."
```
def makeAnagram(a, b):
# Write your code here
ac=0 # tocount the no of occurences of chracter in a
bc=0 # tocount the no of occurences of chracter in b
p=False #used to store result of whether an element is in that string
c=0 #count of characters to be deleted to make these two strings anagrams
t=[] # list of previously checked chracters
for x in a:
if x in t == True:
continue
ac=a.count(x)
t.insert(0,x)
for y in b:
p = x in b
if p==True:
bc=b.count(x)
if bc!=ac:
d=ac-bc
c=c+abs(d)
elif p==False:
c=c+1
return(c)
```
|
2021/06/19
|
[
"https://Stackoverflow.com/questions/68043856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9262680/"
] |
You can use [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) for this:
```
from collections import Counter
def makeAnagram(a, b):
return sum((Counter(a) - Counter(b) | Counter(b) - Counter(a)).values())
```
`Counter(x)` (where x is a string) returns a dictionary that maps characters to how many times they appear in the string.
`Counter(a) - Counter(b)` gives you a dictionary that maps the characters which are overabundant in `a` to how many more times they appear in `a` than in `b`.
`Counter(b) - Counter(a)` is like above, but for characters which are overabundant in `b`.
The `|` merges the two resulting counters. We then take the values of this, and sum them to get the total number of characters which are overrepresented in either string. This is equivalent to the minimum number of characters that need to be deleted to form an anagram.
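As a tiny worked example of that arithmetic (hypothetical inputs):
```
from collections import Counter

a, b = "cde", "abc"
print(Counter(a) - Counter(b))  # Counter({'d': 1, 'e': 1}) -> extra in a
print(Counter(b) - Counter(a))  # Counter({'a': 1, 'b': 1}) -> extra in b
merged = (Counter(a) - Counter(b)) | (Counter(b) - Counter(a))
print(sum(merged.values()))     # 4 deletions turn both strings into "c"
```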
---
As for why your code doesn't work, I can't pin down any one problem with it. To obtain the code below, all I did was some simplification (e.g. removing unnecessary variables, looping over a and b together, removing `== True` and `== False`, replacing `t` with a `set`, giving variables descriptive names, etc.), and the code began working. Here is that simplified working code:
```
def makeAnagram(a, b):
c = 0 # count of characters to be deleted to make these two strings anagrams
seen = set() # set of previously checked characters
for character in a + b:
if character not in seen:
seen.add(character)
c += abs(a.count(character) - b.count(character))
return c
```
I recommend you make it a point to learn how to write simple/short code. It may not seem important compared to actually tackling the algorithms and getting results. It may seem like cleanup or styling work. But it pays off enormously. Bug are harder to introduce in simple code, and easier to spot. Oftentimes simple code will be more performant than equivalent complex code too, either because the programmer was able to more easily see ways to improve it, or because the more performant approach just arose naturally from the cleaner code.
|
Assuming there are only lowercase letters
The idea is to build character-count arrays for both strings, storing the frequency of each character. Then traverse both count arrays: for each letter index `i`, the difference `abs(count1[i] - count2[i])` is the number of occurrences of that character that must be removed from one string or the other, and the answer is the sum of these differences.
```py
CHARS = 26
# function to calculate minimum
# numbers of characters
# to be removed to make two
# strings anagram
def remAnagram(str1, str2):
count1 = [0]*CHARS
count2 = [0]*CHARS
i = 0
while i < len(str1):
count1[ord(str1[i])-ord('a')] += 1
i += 1
i =0
while i < len(str2):
count2[ord(str2[i])-ord('a')] += 1
i += 1
# traverse count arrays to find
# number of characters
# to be removed
result = 0
for i in range(26):
result += abs(count1[i] - count2[i])
return result
```
Here the time complexity is O(n + m), where n and m are the lengths of the two strings.
The space complexity is O(1), as we only use arrays of size 26.
This can be further optimised by using a single array for the counts:
for string s1 -> we increment the counter
for string s2 -> we decrement the counter
```py
def makeAnagram(a, b):
buffer = [0] * 26
for char in a:
buffer[ord(char) - ord('a')] += 1
for char in b:
buffer[ord(char) - ord('a')] -= 1
return sum(map(abs, buffer))
```
```py
if __name__ == "__main__" :
str1 = "bcadeh"
str2 = "hea"
print(makeAnagram(str1, str2))
```
Output : 3
|
49,053,579
|
Say I created a django project called `django_site`.
I created two sub-projects: `site1` and `polls`.
You can see that I have two `index.html` files, one in each sub-project's templates directory.
However, now, if I open on web browser `localhost:8000/site1` or `localhost:8000/polls`, they all point to the `index.html` of `polls`.
How can I configure so when I open `localhost:8000/site1` it will use the `index.html` of `site1`?
My `settings.py` in `django_site` directory:
```
..
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'),],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
..
```
My directory structure:
```
E:.
| db.sqlite3
| manage.py
| tree.txt
| tree1.txt
|
+---site1
| | admin.py
| | apps.py
| | models.py
| | tests.py
| | urls.py
| | views.py
| | __init__.py
| |
| +---migrations
| | __init__.py
| |
| +---templates
| | \---site1
| | \---templates
| | index.html
| |
| \---__pycache__
| models.cpython-36.pyc
| urls.cpython-36.pyc
| views.cpython-36.pyc
| __init__.cpython-36.pyc
|
+---django_site
| | settings.py
| | urls.py
| | wsgi.py
| | __init__.py
| |
| \---__pycache__
| settings.cpython-36.pyc
| urls.cpython-36.pyc
| wsgi.cpython-36.pyc
| __init__.cpython-36.pyc
|
\---polls
| admin.py
| apps.py
| models.py
| tests.py
| urls.py
| views.py
| __init__.py
|
+---migrations
| | 0001_initial.py
| | 0002_auto_20180214_0906.py
| | __init__.py
| |
| \---__pycache__
| 0001_initial.cpython-36.pyc
| 0002_auto_20180214_0906.cpython-36.pyc
| __init__.cpython-36.pyc
|
+---static
| jquery-3.3.1.min.js
|
+---templates
| \---polls
| \---templates
| index.html
|
\---__pycache__
admin.cpython-36.pyc
apps.cpython-36.pyc
models.cpython-36.pyc
urls.cpython-36.pyc
views.cpython-36.pyc
__init__.cpython-36.pyc
```
My `urls.py` in `django_site`
```
from django.urls import include, path
from django.contrib import admin
urlpatterns = [
path('polls/', include('polls.urls')),
path('site1/', include('site1.urls')),
path('admin/', admin.site.urls),
]
```
My `urls.py` in `site1` or `polls` (they are the same):
```
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
]
```
|
2018/03/01
|
[
"https://Stackoverflow.com/questions/49053579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/546678/"
] |
You have to configure the urls like this:
In your `django_site` directory you have a `urls.py` file. You have to include the urls from all your django apps:
```
from django.urls import include, path
from django.contrib import admin
from polls import views
from site1 import views
urlpatterns = [
path('polls/', include('polls.urls')),
path('site1/', include('site1.urls')),
]
```
Then, inside each django app, you should have:
```
from django.urls import include, path
from django.contrib import admin
from polls import views
urlpatterns = [
path('', views.your_function, name="index"),
]
```
and the same thing for `site1` :
```
from django.urls import include, path
from django.contrib import admin
from site1 import views
urlpatterns = [
path('', views.your_function, name="index"),
]
```
`views.your_function` has to return your index template: `index.html`.
Your function could be:
```
def MyFunction(request):
return render(request, 'index.html')
```
With this function, your URL pattern becomes: `path('', views.MyFunction, name="index"),`
|
I solved the problem by:
In `views.py` in `site1` or `polls`:
```
def index(request):
return render(request, 'polls/index.html', context)
```
And inside `django_site` I created a folder `templates`, then in this folder there are two folders `site1` and `polls`. In each subfolder I put `index.html` respectively.
|
30,013,356
|
I have a list of about 10,000 emails with incomplete email ids, due to data unreliability, and would like to know how I can complete them using python.
sample emails:
xyz@gmail.co
xyz@gmail.
xyz@gma
xyz@g
I've tried using the `validate_email` package to filter out bad emails and have tried various regex patterns (much like search and replace in Sublime Text), but I end up with results like `xyz@gmail.com.co`. I think there is a better way to do this than regex and would like to know it.
|
2015/05/03
|
[
"https://Stackoverflow.com/questions/30013356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/394449/"
] |
A strategy to consider is to build a "trie" data structure for the domains that you have such as `gma` and `gmail.co`. Then where a domain is a prefix of one other domain, you can consider going down the longer branch of the trie if there is a unique such branch. This will mean in your example replacing `gma` ultimately with `gmail.co`.
There is an answer concerning [how to create a trie in Python](https://stackoverflow.com/questions/11015320/how-to-create-a-trie-in-python).
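For concreteness, a minimal sketch of that idea (the domain list, the function names, and the rule of always following a unique longer branch are illustrative assumptions, not part of the original answer):
```
def build_trie(domains):
    root = {}
    for d in domains:
        node = root
        for ch in d:
            node = node.setdefault(ch, {})
        node['$'] = True  # marks the end of an observed domain
    return root

def complete(prefix, root):
    node = root
    for ch in prefix:
        if ch not in node:
            return prefix           # prefix not seen at all, leave unchanged
        node = node[ch]
    out = prefix
    # keep descending while there is exactly one way to continue
    while True:
        children = [c for c in node if c != '$']
        if len(children) != 1:
            return out
        out += children[0]
        node = node[children[0]]

trie = build_trie(["gma", "gmail.co", "gmail.com", "googlemail.com"])
print(complete("gma", trie))  # -> 'gmail.com'
print(complete("g", trie))    # -> 'g' (ambiguous: gmail vs googlemail)
```
In practice you would build the trie from all observed domains and only auto-complete a truncated one when the continuation is unambiguous.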
|
```
def email_check():
    fo = open("/home/cam/Desktop/out.dat", "w")  # output file
    with open('/home/cam/Desktop/email.dat', 'r') as f:
        for line in f:
            line = line.rstrip('\n')
            at_pos = line.find('@')
            if line[at_pos + 1] == 'g':
                line = line[:at_pos + 1] + 'gmail.com'
            elif line[at_pos + 1] == 'y':
                line = line[:at_pos + 1] + 'yahoomail.com'
            elif line[at_pos + 1] == 'h':
                line = line[:at_pos + 1] + 'hotmail.com'
            fo.write(line)
            fo.write('\n')
    fo.close()

email_check()
```
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get below error.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what should I check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus - checked if the 'resource' module can be imported directly from terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
The heart of the problem is the connection between pyspark and python, which is solved by redefining the environment variables.
I've just changed the environment variables' values: `PYSPARK_DRIVER_PYTHON` from `ipython` to `jupyter` and `PYSPARK_PYTHON` from `python3` to `python`.
Now I'm using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, and Spark 2.4.2.
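For reference, a minimal sketch of doing the same thing programmatically before the SparkContext is created (the value is illustrative; setting the system environment variables as described above works just as well):
```
import os

# Illustrative only: point the workers at the same interpreter as the driver
# before the SparkContext is created. The exact value depends on your setup.
os.environ["PYSPARK_PYTHON"] = "python"

from pyspark import SparkConf, SparkContext
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf=conf)
```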
|
I had the same issue. I had set all the `environment variables` correctly but still wasn't able to resolve it
In my case,
```
import findspark
findspark.init()
```
adding this before even creating the sparkSession helped.
I was using `Visual Studio Code` on `Windows 10` and the Spark version was `3.2.0`. The Python version is `3.9`.
Note: First check that the paths for `HADOOP_HOME`, `SPARK_HOME` and `PYSPARK_PYTHON` have been set correctly
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get below error.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what should I check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
I got the same error. I solved it by installing the previous version of Spark (2.3 instead of 2.4). Now it works perfectly; maybe it is an issue with the latest version of pyspark.
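If a version mismatch like this is suspected, a quick sanity check of what the driver actually picks up can save a reinstall round-trip - a sketch, assuming `pyspark` is importable from the same environment:
```
import sys
import pyspark

# Print what the driver sees; the traceback above comes from the Spark 2.4.0
# worker importing the Unix-only 'resource' module at startup.
print("Python :", sys.version.split()[0])
print("PySpark:", pyspark.__version__)
```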
|
There seem to be many reasons for this error to occur. I still had the same problem despite my `environment variables` all being set correctly.
In my case adding this
```
import findspark
findspark.init()
```
solved the problem.
I'm using `jupyter notebook` on `windows10` with `spark-3.1.2, python3.6`.
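When `SPARK_HOME` is not picked up automatically (common in notebook kernels), `findspark.init` can also be pointed at the unpacked distribution directly; the path below is a hypothetical example, not taken from the answer:
```
import findspark

# Hypothetical path - replace with the folder SPARK_HOME should point to.
findspark.init("C:/Spark/spark-3.1.2-bin-hadoop3.2")

import pyspark
print(pyspark.__version__)
```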
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
I got the same error. I solved it by installing the previous version of Spark (2.3 instead of 2.4). Now it works perfectly; maybe it is an issue with the latest version of pyspark.
|
Looking at the source of the error ([worker.py#L25](https://github.com/apache/spark/blob/master/python/pyspark/worker.py#L25)), it seems that the python interpreter used to instantiate a pyspark worker doesn't have access to the `resource` module, a built-in module referred to in [Python's doc](https://docs.python.org/3.7/library/unix.html) as part of "Unix Specific Services".
Are you sure you can run pyspark on Windows (without some additional software like GOW or MinGW at least), and that you didn't skip some Windows-specific installation steps?
Could you open a python console (the one used by pyspark) and see if you can `>>> import resource` without getting the same `ModuleNotFoundError`? If you can't, then could you provide the resources you used to install it on W10?
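A small, non-interactive way to run that check is sketched below; `importlib.util.find_spec` just reports whether the module can be located, and on Windows it returns `None` for `resource` because the module is Unix-only:
```
import importlib.util
import platform

# 'resource' is a Unix-only standard-library module, so on Windows
# find_spec returns None instead of a module spec.
print(platform.system())
print(importlib.util.find_spec("resource"))
```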
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
When you run the Python installer, in the Customize Python section, make sure that the option "Add python.exe to Path" is selected. If this option is not selected, some of the PySpark utilities such as pyspark and spark-submit might not work. This worked for me! Happy Sharing :)
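To verify afterwards that the interpreter (and Spark's launcher scripts) really are reachable from `Path`, a quick standard-library check like the sketch below can be run from any console:
```
import shutil

# Each call returns the full path of the first match on PATH, or None.
print(shutil.which("python"))
print(shutil.which("spark-submit"))
```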
|
There seem to be many reasons for this error to occur. I still had the same problem despite my `environment variables` all being set correctly.
In my case adding this
```
import findspark
findspark.init()
```
solved the problem.
I'm using `jupyter notebook` on `windows10` with `spark-3.1.2, python3.6`.
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
The heart of the problem is the connection between pyspark and python, which is solved by redefining the environment variables.
I've just changed the environment variables' values `PYSPARK_DRIVER_PYTHON` from `ipython` to `jupyter` and `PYSPARK_PYTHON` from `python3` to `python`.
Now I'm using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, Spark 2.4.2.
|
Looking at the source of the error ([worker.py#L25](https://github.com/apache/spark/blob/master/python/pyspark/worker.py#L25)), it seems that the python interpreter used to instantiate a pyspark worker doesn't have access to the `resource` module, a built-in module referred to in [Python's doc](https://docs.python.org/3.7/library/unix.html) as part of "Unix Specific Services".
Are you sure you can run pyspark on Windows (without some additional software like GOW or MinGW at least), and that you didn't skip some Windows-specific installation steps?
Could you open a python console (the one used by pyspark) and see if you can `>>> import resource` without getting the same `ModuleNotFoundError`? If you can't, then could you provide the resources you used to install it on W10?
|
53,252,181
|
I'm a newbie with Spark and trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on local machine (Win10 64, Python 3, Spark 2.4.0) and setting all env variables (HADOOP\_HOME, SPARK\_HOME etc) I'm trying to run a simple Spark job via WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
Set the environment variable `PYSPARK_PYTHON=python` to fix it.
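For illustration, a minimal sketch of one common way to do this from the driver script itself, before the SparkContext is created (using `sys.executable` here is an assumption; pointing the variable at any matching Python 3 interpreter, or setting it in the Windows system settings, should work the same way):
```
import os
import sys

# Make the Python workers use the same interpreter as the driver script.
# This must run before the SparkContext/SparkSession is created.
os.environ["PYSPARK_PYTHON"] = sys.executable

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(10)).count())  # quick sanity check
```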
|
Downgrading Spark from 2.4.0 back to 2.3.2 was not enough for me. I don't know why, but in my case I had to create the SparkContext from the SparkSession, like this:
```
sc = spark.sparkContext
```
Then the very same error disappeared.
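For reference, a minimal sketch of that pattern (the app name, master, and file path below are only placeholders):
```
from pyspark.sql import SparkSession

# Build (or reuse) a SparkSession, then take its underlying SparkContext.
spark = (SparkSession.builder
         .appName("word count")
         .master("local[2]")
         .getOrCreate())
sc = spark.sparkContext

lines = sc.textFile("in/word_count.text")  # placeholder path
print(lines.count())
```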
|
53,252,181
|
I'm a newbie to Spark and I'm trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on my local machine (Win10 64-bit, Python 3, Spark 2.4.0) and setting all the env variables (HADOOP\_HOME, SPARK\_HOME, etc.), I'm trying to run a simple Spark job via a WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
I had the same issue. I had set all the `environment variables` correctly but still wasn't able to resolve it.
In my case,
```
import findspark
findspark.init()
```
Adding this before even creating the SparkSession helped.
I was using `Visual Studio Code` on `Windows 10`, the Spark version was `3.2.0`, and the Python version was `3.9`.
Note: first check that the paths for `HADOOP_HOME`, `SPARK_HOME`, and `PYSPARK_PYTHON` have been set correctly.
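A quick way to check those from the same Python session (just printing the variables named above; nothing here is specific to any particular setup):
```
import os

# Print the relevant environment variables so a missing or wrong path stands out.
for name in ("HADOOP_HOME", "SPARK_HOME", "PYSPARK_PYTHON"):
    print(name, "=", os.environ.get(name, "<not set>"))
```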
|
Downgrading Spark from 2.4.0 back to 2.3.2 was not enough for me. I don't know why, but in my case I had to create the SparkContext from the SparkSession, like this:
```
sc = spark.sparkContext
```
Then the very same error disappeared.
|
53,252,181
|
I'm a newbie to Spark and I'm trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on my local machine (Win10 64-bit, Python 3, Spark 2.4.0) and setting all the env variables (HADOOP\_HOME, SPARK\_HOME, etc.), I'm trying to run a simple Spark job via a WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
The heart of the problem is the connection between PySpark and Python, solved by redefining the environment variables.
I've just changed the environment variables' values: `PYSPARK_DRIVER_PYTHON` from `ipython` to `jupyter` and `PYSPARK_PYTHON` from `python3` to `python`.
Now I'm using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, Spark 2.4.2.
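To confirm that such a change actually took effect, one hedged check is to compare the driver interpreter with the one the workers run (this assumes an already working SparkContext named `sc`):
```
import sys

# Interpreter running the driver:
print("driver :", sys.executable)

# Interpreter running the workers (the lambda executes on an executor):
print("workers:", sc.parallelize([0]).map(lambda _: __import__("sys").executable).first())
```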
|
Downgrading Spark from 2.4.0 back to 2.3.2 was not enough for me. I don't know why, but in my case I had to create the SparkContext from the SparkSession, like this:
```
sc = spark.sparkContext
```
Then the very same error disappeared.
|
53,252,181
|
I'm a newbie to Spark and I'm trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on my local machine (Win10 64-bit, Python 3, Spark 2.4.0) and setting all the env variables (HADOOP\_HOME, SPARK\_HOME, etc.), I'm trying to run a simple Spark job via a WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
The heart of the problem is the connection between PySpark and Python, solved by redefining the environment variables.
I've just changed the environment variables' values: `PYSPARK_DRIVER_PYTHON` from `ipython` to `jupyter` and `PYSPARK_PYTHON` from `python3` to `python`.
Now I'm using Jupyter Notebook, Python 3.7, Java JDK 11.0.6, Spark 2.4.2.
|
There seem to be many reasons for this error to occur. I still had the same problem despite my `environment variables` all being set correctly.
In my case, adding this
```
import findspark
findspark.init()
```
solved the problem.
I'm using a `jupyter notebook` on `windows10` with `spark-3.1.2` and `python3.6`.
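Putting that together, a minimal notebook-style sketch (the app name and master below are just placeholders):
```
import findspark
findspark.init()  # locates SPARK_HOME and puts pyspark on sys.path

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("sanity-check").getOrCreate()
print(spark.range(5).count())  # prints 5 if the Python workers start correctly
```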
|
53,252,181
|
I'm a newbie to Spark and I'm trying to complete a Spark tutorial:
[link to tutorial](https://www.youtube.com/watch?v=3CPI2D_QD44&index=4&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv)
After installing it on my local machine (Win10 64-bit, Python 3, Spark 2.4.0) and setting all the env variables (HADOOP\_HOME, SPARK\_HOME, etc.), I'm trying to run a simple Spark job via a WordCount.py file:
```
from pyspark import SparkContext, SparkConf
if __name__ == "__main__":
conf = SparkConf().setAppName("word count").setMaster("local[2]")
sc = SparkContext(conf = conf)
lines = sc.textFile("C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/in/word_count.text")
words = lines.flatMap(lambda line: line.split(" "))
wordCounts = words.countByValue()
for word, count in wordCounts.items():
print("{} : {}".format(word, count))
```
After running it from terminal:
```
spark-submit WordCount.py
```
I get the error below.
I checked (by commenting out line by line) that it crashes at
```
wordCounts = words.countByValue()
```
Any idea what I should check to make it work?
```
Traceback (most recent call last):
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mjdbr\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 25, in <module>
ModuleNotFoundError: No module named 'resource'
18/11/10 23:16:58 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
18/11/10 23:16:58 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:/Users/mjdbr/Documents/BigData/python-spark-tutorial/rdd/WordCount.py", line 19, in <module>
wordCounts = words.countByValue()
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 1261, in countByValue
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 844, in reduce
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 816, in collect
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in __call__
File "C:\Spark\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
... 14 more
```
As suggested by theplatypus, I checked whether the 'resource' module can be imported directly from the terminal - apparently not:
```
>>> import resource
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'resource'
```
In terms of installation resources - I followed instructions from [this tutorial](https://www.youtube.com/watch?v=iarn1KHeouc&index=3&list=PLot-YkcC7wZ_2sxmRTZr2c121rjcaleqv):
1. downloaded spark-2.4.0-bin-hadoop2.7.tgz from [Apache Spark website](https://spark.apache.org/downloads.html)
2. un-zipped it to my C-drive
3. already had Python\_3 installed (Anaconda distribution) as well as Java
4. created local 'C:\hadoop\bin' folder to store winutils.exe
5. created 'C:\tmp\hive' folder and gave Spark access to it
6. added environment variables (SPARK\_HOME, HADOOP\_HOME etc)
Is there any extra resource I should install?
|
2018/11/11
|
[
"https://Stackoverflow.com/questions/53252181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10637265/"
] |
I had the same issue. I had set all the `environment variables` correctly but still wasn't able to resolve it.
In my case,
```
import findspark
findspark.init()
```
Adding this before even creating the SparkSession helped.
I was using `Visual Studio Code` on `Windows 10`, the Spark version was `3.2.0`, and the Python version was `3.9`.
Note: first check that the paths for `HADOOP_HOME`, `SPARK_HOME`, and `PYSPARK_PYTHON` have been set correctly.
|
When you run the Python installer, in the Customize Python section, make sure that the option "Add python.exe to Path" is selected. If this option is not selected, some of the PySpark utilities such as pyspark and spark-submit might not work. This worked for me! Happy sharing :)
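A small, hedged way to check from Python whether the interpreter and the Spark launcher are actually reachable on PATH:
```
import shutil

# shutil.which returns the full path if the executable is found on PATH, else None.
print("python      :", shutil.which("python"))
print("spark-submit:", shutil.which("spark-submit"))
```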
|
25,484,269
|
I am not an experienced programmer and I have a problem with my code. I think it's a logical mistake of mine, but I couldn't find an answer at <http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/whilestatements.html>.
What I want is to check whether the serial device is locked. The difference between the "it is locked" and "it isn't locked" conditions is that there are 4 commas `,,,,` in the line which contains the `GPGGA` letters. So I want my code to start if there isn't `,,,,`, but I guess my loop is wrong. Any suggestions will be appreciated. Thanks in advance.
```
import serial
import time
import subprocess
file = open("/home/pi/allofthedatacollected.csv", "w") #"w" will be "a" later
file.write('\n')
while True:
ser = serial.Serial("/dev/ttyUSB0", 4800, timeout =1)
checking = ser.readline();
if checking.find(",,,,"):
print "not locked yet"
True
else:
False
print "locked and loaded"
```
.
.
.
|
2014/08/25
|
[
"https://Stackoverflow.com/questions/25484269",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3918035/"
] |
Use `break` to exit a loop:
```
while True:
ser = serial.Serial("/dev/ttyUSB0", 4800, timeout =1)
checking = ser.readline();
if checking.find(",,,,"):
print "not locked yet"
else:
print "locked and loaded"
break
```
The `True` and `False` lines didn't do anything in your code; they just reference the built-in boolean values without assigning them anywhere.
|
You can use a variable as the condition for your `while` loop instead of just `while True`. That way you can change the condition.
So instead of having this code:
```
while True:
...
if ...:
True
else:
False
```
... try this:
```
keepGoing = True
while keepGoing:
ser = serial.Serial("/dev/ttyUSB0", 4800, timeout =1)
checking = ser.readline();
if checking.find(",,,,"):
print "not locked yet"
keepGoing = True
else:
keepGoing = False
print "locked and loaded"
```
EDIT:
Or as another answerer suggests, you can just `break` out of the loop :)
|
31,018,497
|
I'm a newbie to python.
I was trying to display the time duration.
What I did was:
```
startTime = datetime.datetime.now().replace(microsecond=0)
... <some more codes> ...
endTime = datetime.datetime.now().replace(microsecond=0)
durationTime = endTime - startTime
print("The duration is " + str(durationTime))
```
The output is => The duration is 0:01:28
Can I know how to remove the hour from the result?
I want to display => The duration is 01:28
Thanks in advance!
|
2015/06/24
|
[
"https://Stackoverflow.com/questions/31018497",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4528322/"
] |
You can split your timedelta as follows:
```
>>> hours, remainder = divmod(durationTime.total_seconds(), 3600)
>>> minutes, seconds = divmod(remainder, 60)
>>> print '%02d:%02d' % (minutes, seconds)
```
This will use Python's built-in divmod to convert the number of seconds in your timedelta to hours, and the remainder will then be used to calculate the minutes and seconds. You can then explicitly print the units of time you want.
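Wrapped up as a small helper, a sketch of the same idea (whole hours are simply discarded, matching the output the question asks for):
```
import datetime

def format_mm_ss(delta):
    """Format a timedelta as MM:SS, discarding whole hours."""
    minutes, seconds = divmod(int(delta.total_seconds()) % 3600, 60)
    return "%02d:%02d" % (minutes, seconds)

print(format_mm_ss(datetime.timedelta(minutes=1, seconds=28)))  # 01:28
```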
|
You can do this by converting `durationTime`, which is a `datetime.timedelta` object, to a `datetime.time` object and then using `strftime`.
```
print datetime.time(0, (durationTime.seconds // 60) % 60, durationTime.seconds % 60).strftime("%M:%S")
```
Another way would be to manipulate the string:
```
print ':'.join(str(durationTime).split(':')[1:])
```
|
42,157,406
|
I'm a beginner to MongoDB and Python, and I'm trying to write Python code to delete the documents of multiple collections that are older than 30 days, based on a date field of NumberLong type; it also has to export the collections to CSV before deleting. I'm using the simple code below to print the records as a first step, using new Date(). It works in the Mongo Shell but fails in Python with a syntax error. Please help.
Sample data:
```
{ "_id" : ObjectId("589d6eb390cc70b775892ae1"), "trgtm" : NumberLong("1486280499661") }
{ "_id" : ObjectId("589d602d2fa2fa6687bc7293"), "trgtm" : NumberLong("1486276781059") }
{ "_id" : ObjectId("589d701f90cc70b775892ae2"), "trgtm" : NumberLong("1486194463192") }
{ "_id" : ObjectId("589d702390cc70b775892ae3"), "trgtm" : NumberLong("1486108067444") }
```
Code
```
import pymongo
from pymongo import MongoClient
conn=MongoClient('localhost',27017)
db=conn.mydb
col=db.test
query= { "date": { "$lt": new Date(new Date()).getTime() - 30 * 24 * 60 * 60 * 1000 } }
cursor=col.find(query)
slice=cursor[0:100]
for doc in slice:
print doc
```
|
2017/02/10
|
[
"https://Stackoverflow.com/questions/42157406",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7544956/"
] |
I use datetime for dates in Python.
This is an example:
```
import datetime
date = datetime.date(2017,2,10)
otherDate = datetime.date(1999,1,1)
date < otherDate # False
```
|
You have to use [datetime.datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) like this:
`query = {"date": {"$lt": datetime.datetime(2021, 6, 14, 0, 0, 0, 0)}}`
To use the current time you can do:
`query = {"date": {"$lt": datetime.now(timezone.utc)}}`
Source: <https://pymongo.readthedocs.io/en/stable/examples/datetimes.html>
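For the "30 days ago" cutoff from the question, a hedged sketch of that idea (this assumes the field is stored as a real BSON date; the sample data above stores epoch milliseconds instead, which the other answer covers):
```
import datetime

# Match documents whose "date" field is older than 30 days.
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)
query = {"date": {"$lt": cutoff}}
```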
|
42,157,406
|
I'm a beginner to MongoDB and Python, and I'm trying to write Python code to delete the documents of multiple collections that are older than 30 days, based on a date field of NumberLong type; it also has to export the collections to CSV before deleting. I'm using the simple code below to print the records as a first step, using new Date(). It works in the Mongo Shell but fails in Python with a syntax error. Please help.
Sample data:
```
{ "_id" : ObjectId("589d6eb390cc70b775892ae1"), "trgtm" : NumberLong("1486280499661") }
{ "_id" : ObjectId("589d602d2fa2fa6687bc7293"), "trgtm" : NumberLong("1486276781059") }
{ "_id" : ObjectId("589d701f90cc70b775892ae2"), "trgtm" : NumberLong("1486194463192") }
{ "_id" : ObjectId("589d702390cc70b775892ae3"), "trgtm" : NumberLong("1486108067444") }
```
Code
```
import pymongo
from pymongo import MongoClient
conn=MongoClient('localhost',27017)
db=conn.mydb
col=db.test
query= { "date": { "$lt": new Date(new Date()).getTime() - 30 * 24 * 60 * 60 * 1000 } }
cursor=col.find(query)
slice=cursor[0:100]
for doc in slice:
print doc
```
|
2017/02/10
|
[
"https://Stackoverflow.com/questions/42157406",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7544956/"
] |
I use datetime for dates in Python.
This is an example:
```
import datetime
date = datetime.date(2017,2,10)
otherDate = datetime.date(1999,1,1)
date < otherDate # False
```
|
Use bson.Int64() to query for timestamp in NumberLong format. It worked for me.
Customizing it for you below:
```
import bson
import pymongo
from pymongo import MongoClient
conn=MongoClient('localhost',27017)
db=conn.mydb
col=db.test
timestamp_value= <value> #calculate epoch time stamp 30 days earlier and variable should be of type String
query= { "date": { "$lt": bson.Int64(timestamp_value) } }
cursor=col.find(query)
slice=cursor[0:100]
for doc in slice:
print doc
```
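One way to compute that epoch-millisecond cutoff in Python, as a sketch (assuming the NumberLong values are milliseconds since the epoch, as the sample data suggests, and using the `trgtm` field name from that sample):
```
import time
import bson

# Epoch milliseconds for "30 days ago".
timestamp_value = int((time.time() - 30 * 24 * 60 * 60) * 1000)
query = {"trgtm": {"$lt": bson.Int64(timestamp_value)}}
```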
|
41,403,465
|
My Flask app is outputting 'no content' for the `for` block and I don't know why.
I tested my query in `app.py`; here is `app.py`:
```
# mysql config
app.config['MYSQL_DATABASE_USER'] = 'user'
app.config['MYSQL_DATABASE_PASSWORD'] = 'mypass'
app.config['MYSQL_DATABASE_DB'] = 'mydbname'
app.config['MYSQL_DATABASE_HOST'] = 'xxx.xxx.xxx.xxx'
mysql = MySQL()
mysql.init_app(app)
c = mysql.connect().cursor()
blogposts = c.execute("SELECT post_title, post_name FROM mydbname.wp_posts WHERE post_status='publish' ORDER BY RAND() LIMIT 3")
@app.route('/', methods=('GET', 'POST'))
def email():
form = EmailForm()
if request.method == 'POST':
if form.validate() == False:
return 'Please fill in all fields <p><a href="/">Try Again</a></p>'
else:
msg = Message("Message from your visitor",
sender='contact@site.com',
recipients=['contact@site.com'])
msg.body = """
From: %s
""" % (form.email.data)
mail.send(msg)
#return "Successfully sent message!"
return render_template('email_submit_thankyou.html')
elif request.method == 'GET':
return render_template('index.html', blogposts)
```
This is what my `app.py` looks like.
Below is my `index.html` in `templates/`:
```
<ul>
{% for blogpost in blogposts %}
<li><a href="{{blogpost[1]}}">{{blogpost[0]}}</a></li>
{% else %}
<li>no content...</li>
{% endfor %}
<div class="clearL"> </div>
</ul>
```
I checked the query, and it returns desired output like:
```
ID post_title post_name
1 a title here a-title-here
2 a title here2 a-title-here2
3 a title here3 a-title-here3
```
If I try to restart the dev server by running `export FLASK APP=app.py`, then `flask run`, I get an error of:
```
Error: The file/path provided (app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
```
I've also tried running via `export FLASK_APP=app.py` then `python -m flask run` - this also gives the same error.
Thoughts on how to resolve?
|
2016/12/30
|
[
"https://Stackoverflow.com/questions/41403465",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/700070/"
] |
You haven't got anything in your template called `blogposts`. You need to use keyword arguments to pass the data:
```
return render_template('index.html', blogposts=blogposts)
```
Also note you should really do that query inside the function, otherwise it will only ever execute on process start and you'll always have the same three posts.
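A minimal sketch of what that could look like, reusing the `app`, `mysql` and `render_template` objects already set up in the question and leaving the POST/email handling out for brevity (`fetchall()` is needed because `execute()` only returns a row count):
```python
@app.route('/', methods=('GET', 'POST'))
def email():
    # run the query per request so the three random posts change each time
    c = mysql.connect().cursor()
    c.execute("SELECT post_title, post_name FROM mydbname.wp_posts "
              "WHERE post_status='publish' ORDER BY RAND() LIMIT 3")
    blogposts = c.fetchall()
    # ... POST handling from the question goes here ...
    return render_template('index.html', blogposts=blogposts)
```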
|
I had the same problem. I solved it by changing the terminal directory to the folder containing app.py and then running
```
export FLASK_APP=app.py
```
then, you should run
```
python -m flask run
```
|
35,538,814
|
I have a bunch of `.java` files in a directory and I want to compile all of them to `.class` files via python code.
As you know, the `Javac` command line tool is the tool that I must use, and it requires the name of each `.java` file to match its class name. Unfortunately, for my `.java` files it doesn't. I mean they have random names that do not match their class names.
So I need to extract the class names from the contents of the `.java` files. It would be simple if the line of the class definition were known, but it isn't. The `.java` files may contain some comments at the top that may contain the words *class* or *package* too.
The question is: how can I extract the package and class name of each file?
For example this is contents of one of them:
```java
//This is a sample package that its class name is HelloWorldApplet. in this package we blah blah blah and this class blah blah blah.
package helloWorldPackage;
//This is another comment that may or may not have the word "package" and "class" inside.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;
/* this is also a multi line comment. blah blah blah package, blah blah blah package ... */
public class HelloWorldApplet extends Applet
{
private static final byte[] helloWorld = {(byte)'H',(byte)'e',(byte)'l',(byte)'l',(byte)'o',(byte)' ',(byte)'W',(byte)'o',(byte)'r',(byte)'l',(byte)'d',};
private static final byte HW_CLA = (byte)0x80;
private static final byte HW_INS = (byte)0x00;
public static void install(byte[] bArray, short bOffset, byte bLength)
{
new HelloWorldApplet().register(bArray, (short) (bOffset + 1), bArray[bOffset]);
}
public void process(APDU apdu)
{
if (selectingApplet())
{
return;
}
byte[] buffer = apdu.getBuffer();
byte CLA = (byte) (buffer[ISO7816.OFFSET_CLA] & 0xFF);
byte INS = (byte) (buffer[ISO7816.OFFSET_INS] & 0xFF);
if (CLA != HW_CLA)
{
ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
}
switch ( INS )
{
case HW_INS:
getHelloWorld( apdu );
break;
default:
ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
}
}
private void getHelloWorld( APDU apdu)
{
byte[] buffer = apdu.getBuffer();
short length = (short) helloWorld.length;
Util.arrayCopyNonAtomic(helloWorld, (short)0, buffer, (short)0, (short) length);
apdu.setOutgoingAndSend((short)0, length);
}
}
```
How can I extract the package name (i.e. `helloWorldPackage`) and the class name (i.e. `HelloWorldApplet`) of each file?
Note that the `.java` files may contain several classes, but I only need the name of the class that extends `Applet`.
**Update:**
I tried the following, but they didn't work (Python 2.7.10):
```
import re
prgFile = open(r"yourFile\New Text Document.txt","r")
contents = prgFile.read()
x = re.match(r"(?<=class)\b.*\b(?=extends Applet)",contents)
print x
x = re.match(r"^(public)+",contents)
print x
x = re.match(r"^package ([^;\n]+)",contents)
print x
x = re.match(r"(?<=^public class )\b.*\b(?= extends Applet)",contents)
print x
```
Output:
```
>>> ================================ RESTART ================================
>>>
None
None
None
None
>>>
```
|
2016/02/21
|
[
"https://Stackoverflow.com/questions/35538814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3580433/"
] |
In many cases a simple regex will work.
If you want to be 100% certain I suggest using a full-blown Java parser like [javalang](https://github.com/c2nes/javalang) to parse each file, then walk the AST to pull out the class name.
Something like
```
import glob
import javalang
# look at all .java files in the working directory
for fname in glob.glob("*.java"):
# load the sourcecode
with open(fname) as inf:
sourcecode = inf.read()
try:
# parse it to an Abstract Syntax Tree
tree = javalang.parse.parse(sourcecode)
# get package name
pkg = tree.package.name
# look at all class declarations
for path, node in tree.filter(javalang.tree.ClassDeclaration):
# if class extends Applet
if node.extends.name == 'Applet':
# print the class name
print("{}: package {}, main class is {}".format(fname, pkg, node.name))
except javalang.parser.JavaSyntaxError as je:
# report any files which don't parse properly
print("Error parsing {}: {}".format(fname, je))
```
which gives
```
sample.java: package helloWorldPackage, main class is HelloWorldApplet
```
|
This regex works for me. `(?<=^public class )\b.*\b(?= extends Applet)`.
The way to use it correctly:
```
re.compile(ur'(?<=^public class )\b.*\b(?= extends Applet)', re.MULTILINE)
```
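For example, a small sketch of applying it with `re.search` (not `re.match`, which only looks at the very start of the string); the filename is just a placeholder:
```python
import re

pattern = re.compile(r'(?<=^public class )\b.*\b(?= extends Applet)', re.MULTILINE)

# placeholder filename; read whichever .java file you are inspecting
with open("SomeFile.java") as f:
    contents = f.read()

m = pattern.search(contents)
if m:
    print(m.group(0))  # e.g. HelloWorldApplet
```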
|
35,538,814
|
I have a bunch of `.java` files in a directory and I want to compile all of them to `.class` files via python code.
As you know, the `Javac` command line tool is the tool that I must use, and it requires the name of each `.java` file to match its class name. Unfortunately, for my `.java` files it doesn't. I mean they have random names that do not match their class names.
So I need to extract the class names from the contents of the `.java` files. It would be simple if the line of the class definition were known, but it isn't. The `.java` files may contain some comments at the top that may contain the words *class* or *package* too.
The question is: how can I extract the package and class name of each file?
For example this is contents of one of them:
```java
//This is a sample package that its class name is HelloWorldApplet. in this package we blah blah blah and this class blah blah blah.
package helloWorldPackage;
//This is another comment that may or may not have the word "package" and "class" inside.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;
/* this is also a multi line comment. blah blah blah package, blah blah blah package ... */
public class HelloWorldApplet extends Applet
{
private static final byte[] helloWorld = {(byte)'H',(byte)'e',(byte)'l',(byte)'l',(byte)'o',(byte)' ',(byte)'W',(byte)'o',(byte)'r',(byte)'l',(byte)'d',};
private static final byte HW_CLA = (byte)0x80;
private static final byte HW_INS = (byte)0x00;
public static void install(byte[] bArray, short bOffset, byte bLength)
{
new HelloWorldApplet().register(bArray, (short) (bOffset + 1), bArray[bOffset]);
}
public void process(APDU apdu)
{
if (selectingApplet())
{
return;
}
byte[] buffer = apdu.getBuffer();
byte CLA = (byte) (buffer[ISO7816.OFFSET_CLA] & 0xFF);
byte INS = (byte) (buffer[ISO7816.OFFSET_INS] & 0xFF);
if (CLA != HW_CLA)
{
ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
}
switch ( INS )
{
case HW_INS:
getHelloWorld( apdu );
break;
default:
ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
}
}
private void getHelloWorld( APDU apdu)
{
byte[] buffer = apdu.getBuffer();
short length = (short) helloWorld.length;
Util.arrayCopyNonAtomic(helloWorld, (short)0, buffer, (short)0, (short) length);
apdu.setOutgoingAndSend((short)0, length);
}
}
```
How can I extract the package name (i.e. `helloWorldPackage`) and the class name (i.e. `HelloWorldApplet`) of each file?
Note that the `.java` files may contain several classes, but I only need the name of the class that extends `Applet`.
**Update:**
I tried the following, but they didn't work (Python 2.7.10):
```
import re
prgFile = open(r"yourFile\New Text Document.txt","r")
contents = prgFile.read()
x = re.match(r"(?<=class)\b.*\b(?=extends Applet)",contents)
print x
x = re.match(r"^(public)+",contents)
print x
x = re.match(r"^package ([^;\n]+)",contents)
print x
x = re.match(r"(?<=^public class )\b.*\b(?= extends Applet)",contents)
print x
```
Output:
```
>>> ================================ RESTART ================================
>>>
None
None
None
None
>>>
```
|
2016/02/21
|
[
"https://Stackoverflow.com/questions/35538814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3580433/"
] |
In many cases a simple regex will work.
If you want to be 100% certain I suggest using a full-blown Java parser like [javalang](https://github.com/c2nes/javalang) to parse each file, then walk the AST to pull out the class name.
Something like
```
import glob
import javalang
# look at all .java files in the working directory
for fname in glob.glob("*.java"):
# load the sourcecode
with open(fname) as inf:
sourcecode = inf.read()
try:
# parse it to an Abstract Syntax Tree
tree = javalang.parse.parse(sourcecode)
# get package name
pkg = tree.package.name
# look at all class declarations
for path, node in tree.filter(javalang.tree.ClassDeclaration):
# if class extends Applet
if node.extends.name == 'Applet':
# print the class name
print("{}: package {}, main class is {}".format(fname, pkg, node.name))
except javalang.parser.JavaSyntaxError as je:
# report any files which don't parse properly
print("Error parsing {}: {}".format(fname, je))
```
which gives
```
sample.java: package helloWorldPackage, main class is HelloWorldApplet
```
|
You could come up with the following regex:
```
import re
string = your_string_here
classes = [x.strip() for x in re.findall(r'^(?:public class|package) ([^;]+?)(?=extends|;)', string, re.MULTILINE)]
# look for public class or package at the start of the line
# then anything but a semicolon
# make sure the match is immediately followed by extends or a semicolon
print classes
# ['helloWorldPackage', 'HelloWorldApplet']
```
|
42,501,900
|
I just wanted to ask what fitfunc and errfunc are, and what scipy.optimize.leastsq does, intuitively. I am not really used to Python but I would like to understand this. Here is the code that I am trying to understand.
```
def optimize_parameters2(p0,mz):
fitfunc = lambda p,p0,mz: calculate_sp2(p, p0, mz)
errfunc = lambda p,p0,mz: exp-fitfunc(p,p0,mz)
return scipy.optimize.leastsq(errfunc, p0, args=(p0,mz))
```
Can someone please explain what this code is saying narratively word by word?
Sorry for being so specific but I really do have trouble understanding what it's saying.
|
2017/02/28
|
[
"https://Stackoverflow.com/questions/42501900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6646128/"
] |
Instead of the `change` event, trigger the modal on the `click` event of the `select option`, like:
```
$('select option').on('click',function(){
$("#modelId").modal('show');
});
```
|
You can also use modal function
```
$('select option').on('click', function(){
    $('#modelId').modal('show');
});
```
|
42,501,900
|
I just wanted to ask what fitfunc and errfunc are, and what scipy.optimize.leastsq does, intuitively. I am not really used to Python but I would like to understand this. Here is the code that I am trying to understand.
```
def optimize_parameters2(p0,mz):
fitfunc = lambda p,p0,mz: calculate_sp2(p, p0, mz)
errfunc = lambda p,p0,mz: exp-fitfunc(p,p0,mz)
return scipy.optimize.leastsq(errfunc, p0, args=(p0,mz))
```
Can someone please explain what this code is saying narratively word by word?
Sorry for being so specific but I really do have trouble understanding what it's saying.
|
2017/02/28
|
[
"https://Stackoverflow.com/questions/42501900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6646128/"
] |
Instead of the `change` event, trigger the modal on the `click` event of the `select option`, like:
```
$('select option').on('click',function(){
$("#modelId").modal('show');
});
```
|
Create a select with a default value option.
```
<select id="slt">
<option value="default">Select Something</option>
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
```
Every time, handle the change event for a not defaultValue option, and then reset the default option to selected.
```
var $slt = $('#slt'),
defaultValue = 'default';
function handleChange(e){
var val = $slt.val();
// do nothing for defaultValue being selected
if (val === defaultValue) {
return;
}
// do something else
console.log(val);
// reset the value to defaultValue
$slt.find('option:selected').prop('selected', false);
$slt.find('option').eq(0).prop('selected', true);
}
$slt.bind('change', handleChange);
```
But this will make the select always show as "Select Something".
This is a demo about this: <http://codepen.io/shuizhongyueming/pen/JWYYLJ>
---
I found another solution for this, which is better than mine.
<https://stackoverflow.com/a/12404521/2279763>
It uses the `selectedIndex` to reset the selected value on focus, needs no defaultValue option, and will show the selected option after the click.
This is the demo in the comment: <http://fiddle.jshell.net/ecmanaut/335XK/>
|
42,501,900
|
I just wanted to ask what fitfunc and errfunc are, and what scipy.optimize.leastsq does, intuitively. I am not really used to Python but I would like to understand this. Here is the code that I am trying to understand.
```
def optimize_parameters2(p0,mz):
fitfunc = lambda p,p0,mz: calculate_sp2(p, p0, mz)
errfunc = lambda p,p0,mz: exp-fitfunc(p,p0,mz)
return scipy.optimize.leastsq(errfunc, p0, args=(p0,mz))
```
Can someone please explain what this code is saying narratively word by word?
Sorry for being so specific but I really do have trouble understanding what it's saying.
|
2017/02/28
|
[
"https://Stackoverflow.com/questions/42501900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6646128/"
] |
Create a select with a default value option.
```
<select id="slt">
<option value="default">Select Something</option>
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
```
Every time, handle the change event for a not defaultValue option, and then reset the default option to selected.
```
var $slt = $('#slt'),
defaultValue = 'default';
function handleChange(e){
var val = $slt.val();
// do nothing for defaultValue being selected
if (val === defaultValue) {
return;
}
// do something else
console.log(val);
// reset the value to defaultValue
$slt.find('option:selected').prop('selected', false);
$slt.find('option').eq(0).prop('selected', true);
}
$slt.bind('change', handleChange);
```
But this will make the select always show as "Select Something".
This is a demo about this: <http://codepen.io/shuizhongyueming/pen/JWYYLJ>
---
I found another solution for this, which is better than mine.
<https://stackoverflow.com/a/12404521/2279763>
It uses the `selectedIndex` to reset the selected value on focus, needs no defaultValue option, and will show the selected option after the click.
This is the demo in the comment: <http://fiddle.jshell.net/ecmanaut/335XK/>
|
You can also use modal function
```
$('select option').on('click', function(){
    $('#modelId').modal('show');
});
```
|
58,370,832
|
I wanted to include an XML file in another XML file and parse it with python. I am trying to achieve it through Xinclude. There is a file1.xml which looks like
```
<?xml version="1.0"?>
<root>
<document xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="file2.xml" parse="xml" />
</document>
<test>some text</test>
</root>
```
and file2.xml which looks like
```
<para>This is a paragraph.</para>
```
Now in my python code i tried to access it like:
```
from xml.etree import ElementTree, ElementInclude
tree = ElementTree.parse("file1.xml")
root = tree.getroot()
for child in root.getchildren():
print child.tag
```
It prints the tag of all child elements of root
```
document
test
```
Now when I try to print the child objects directly, like
```
print root.document
print root.test
```
It says that root doesn't have children named test or document. Then how am I supposed to access the content in file2.xml?
I know that I can access the XML elements from python with schema like:
```
schema=etree.XMLSchema(objectify.fromstring(configSchema))
xmlParser = objectify.makeparser(schema = schema)
cfg = objectify.fromstring(xmlContents, xmlParser)
print cfg.elemetName # access element
```
But since here one XML file is included in another, I am confused how to write the schema. How can i solve it?
|
2019/10/14
|
[
"https://Stackoverflow.com/questions/58370832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3186922/"
] |
Not sure why you want to use XInclude, but including an XML file in another one is a basic mechanism of SGML and XML, and can be achieved without XInclude as simply as:
```
<!DOCTYPE root [
<!ENTITY externaldoc SYSTEM "file2.xml">
]>
<root>
<document>
&externaldoc;
</document>
<test>some text</test>
</root>
```
|
You need to make xml.etree actually include the files referenced with xi:include.
I have added the key line to your original example:
```
from xml.etree import ElementTree, ElementInclude
tree = ElementTree.parse("file1.xml")
root = tree.getroot()
#here you make the parser actually include every referenced file
ElementInclude.include(root)
#and now you are good to go
for child in root.getchildren():
print child.tag
```
For a detailed reference about includes in python, see the includes section in the official Python documentation <https://docs.python.org/3/library/xml.etree.elementtree.html>
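For example (Python 3 syntax, assuming file2.xml is reachable from the working directory), once `include()` has run, the `<para>` element from file2.xml can be read straight from the tree:
```python
from xml.etree import ElementTree, ElementInclude

tree = ElementTree.parse("file1.xml")
root = tree.getroot()
ElementInclude.include(root)

para = root.find('.//para')
print(para.text)  # This is a paragraph.
```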
|
58,370,832
|
I wanted to include an XML file in another XML file and parse it with python. I am trying to achieve it through Xinclude. There is a file1.xml which looks like
```
<?xml version="1.0"?>
<root>
<document xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="file2.xml" parse="xml" />
</document>
<test>some text</test>
</root>
```
and file2.xml which looks like
```
<para>This is a paragraph.</para>
```
Now in my python code i tried to access it like:
```
from xml.etree import ElementTree, ElementInclude
tree = ElementTree.parse("file1.xml")
root = tree.getroot()
for child in root.getchildren():
print child.tag
```
It prints the tag of all child elements of root
```
document
test
```
Now when I try to print the child objects directly, like
```
print root.document
print root.test
```
It says that root doesn't have children named test or document. Then how am I supposed to access the content in file2.xml?
I know that I can access the XML elements from python with schema like:
```
schema=etree.XMLSchema(objectify.fromstring(configSchema))
xmlParser = objectify.makeparser(schema = schema)
cfg = objectify.fromstring(xmlContents, xmlParser)
print cfg.elemetName # access element
```
But since here one XML file is included in another, I am confused how to write the schema. How can i solve it?
|
2019/10/14
|
[
"https://Stackoverflow.com/questions/58370832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3186922/"
] |
Below
```
import xml.etree.ElementTree as ET
xml1 = '''<?xml version="1.0"?>
<root>
<test>some text</test>
</root>'''
xml2 = '''<para>This is a paragraph.</para>'''
root1 = ET.fromstring(xml1)
root2 = ET.fromstring(xml2)
root1.insert(0,root2)
para_value = root1.find('.//para').text
print(para_value)
```
output
```
This is a paragraph.
```
|
You need to make xml.etree actually include the files referenced with xi:include.
I have added the key line to your original example:
```
from xml.etree import ElementTree, ElementInclude
tree = ElementTree.parse("file1.xml")
root = tree.getroot()
#here you make the parser actually include every referenced file
ElementInclude.include(root)
#and now you are good to go
for child in root.getchildren():
print child.tag
```
For a detailed reference about includes in python, see the includes section in the official Python documentation <https://docs.python.org/3/library/xml.etree.elementtree.html>
|
48,848,829
|
I'm coming from Python so trying to figure out basic things in Js.
I have the following:
```
{
name: 'Jobs ',
path: '/plat/jobs',
meta: {
label: 'jobs',
link: 'jobs/Basic.vue'
},
```
what I want is to create this block for each element in a list using a for loop (including the brackets)
in python this would be something like
```
for i in items:
{
name: i.name,
path: i.path,
meta: {
label: i.label,
link: i.link
}
```
How do I do this in js? what even is the object type here? Is it just a javascript dictionary?
```
children: [
let new_items = items.map(i => ({
name: i.name,
path: i.path,
meta: {
label: i.label,
link: i.link
}
}));
console.log(new_items);
component: lazyLoading('android/Basic')
}
]
}
```
I don't think this will work because I need each dictionary listed under children.
|
2018/02/18
|
[
"https://Stackoverflow.com/questions/48848829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680978/"
] |
Included is an example that operates on an array of objects, returning a new array of objects. An object is overloaded in JavaScript, but in this case it is synonymous in other languages to a hash, object, or dictionary (depending on application).
```js
let items = [{
name: 'foo',
path: 'foopath',
label: 'foolabel',
link: 'foolink'
}, {
name: 'bar',
path: 'barpath',
label: 'barlabel',
link: 'barlink'
}];
let new_items = items.map(i => ({
name: i.name,
path: i.path,
meta: {
label: i.label,
link: i.link
}
}));
console.log(new_items);
```
That said, a new array is not necessary since it's possible to edit the object directly:
```js
let items = [{
name: 'foo',
path: 'foopath',
label: 'foolabel',
link: 'foolink'
}, {
name: 'bar',
path: 'barpath',
label: 'barlabel',
link: 'barlink'
}];
items.forEach(i => {
i.meta = {label:i.label, link:i.link};
delete i.label;
delete i.link;
});
console.log(items);
```
|
I would use the list.map function
```
items.map(i => {
return {
name: i.name,
path: i.path,
meta: {
label: i.label,
link: i.link
}
  }
})
```
This will return a new list of items as specified
NOTE: this can be simplified further with an implicit return
```
items.map(i => ({
name: i.name,
path: i.path,
meta: {
label: i.label,
link: i.link
}
}))
```
|
45,705,876
|
Say I am working with OpenGL in python.
Often times you make a call such as
```
glutDisplayFunc(display)
```
where display is a function that you have written.
What if I have a class
```
class foo:
#self.x=5
def display(self, maybe some other variables):
#run some code
print("Hooray!, X is:", self.x)
```
and I want to pass that display function and have it print "Hooray!, X is: 5"
Would I pass it as
```
h = foo()
glutDisplayFunc(h.display)
```
Could this work?
|
2017/08/16
|
[
"https://Stackoverflow.com/questions/45705876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4327053/"
] |
You can call a method of another component from a different component, but it will not update the value of the calling component without some tweaking, such as
`Event Emitters` (if they have a parent-child relationship), `Shared Services`, or the `ngrx` redux pattern.
Calling a method of a different component looks like this:
Component1
```
test(){
console.log("Test");
}
```
Component 2
```
working(){
let component = new Component1();
component.test();
}
```
Now to update the value in component 2 you might have to use any of the above.
For Event Emitters follow this [link](https://rahulrsingh09.github.io/AngularConcepts/inout)
For Shared services follow this [link](https://rahulrsingh09.github.io/AngularConcepts/faq)
For ngrx follow this [link](https://rahulrsingh09.github.io/AngularConcepts/ngrx)
|
You **`cannot`** do that, There are two possible ways you could achieve this,
1. use **[`angular service`](https://angular.io/tutorial/toh-pt4)** to pass the data between two components
2. use **[`Event Emitters`](https://angular.io/api/core/EventEmitter)** to pass the value among the components.
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
For completeness, here's another way of causing this:
```
@if(condition)
{
<input type="hidden" value="@value">
}
```
The problem is that the unclosed element makes it not obvious enough that the content is an html block (but we aren't *always* doing xhtml, right?).
In this scenario, you can use:
```
@if(condition)
{
@:<input type="hidden" value="@value">
}
```
or
```
@if(condition)
{
<text><input type="hidden" value="@value"></text>
}
```
|
I've gotten this issue with Razor. I'm not sure if it's a bug in the parser or what, but the way I've solved it is to break up the:
```
@using(Html.BeginForm()) {
<h1>Example</h1>
@foreach (var post in Model.Posts)
{
Html.RenderPartial("ShowPostPartial", post);
}
}
```
into:
```
@{ Html.BeginForm(); }
<h1>Example</h1>
@foreach (var post in Model.Posts)
{
Html.RenderPartial("ShowPostPartial", post);
}
@{ Html.EndForm(); }
```
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
This is basically the same answer that Mark Gravell gave, but I think this one is an easy mistake to make if you have a larger view:
Check the html tags to see where they start and end, and notice razor syntax in between, this is wrong:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
}
</div>
```
And this is correct:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
</div>
}
```
Again, almost same as the earlier post about unclosed input element, but just beware, I've placed div's wrong plenty of times when changing a view.
|
I've gotten this issue with Razor. I'm not sure if it's a bug in the parser or what, but the way I've solved it is to break up the:
```
@using(Html.BeginForm()) {
<h1>Example</h1>
@foreach (var post in Model.Posts)
{
Html.RenderPartial("ShowPostPartial", post);
}
}
```
into:
```
@{ Html.BeginForm(); }
<h1>Example</h1>
@foreach (var post in Model.Posts)
{
Html.RenderPartial("ShowPostPartial", post);
}
@{ Html.EndForm(); }
```
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
For completeness, here's another way of causing this:
```
@if(condition)
{
<input type="hidden" value="@value">
}
```
The problem is that the unclosed element makes it not obvious enough that the content is an html block (but we aren't *always* doing xhtml, right?).
In this scenario, you can use:
```
@if(condition)
{
@:<input type="hidden" value="@value">
}
```
or
```
@if(condition)
{
<text><input type="hidden" value="@value"></text>
}
```
|
My bad.
I've got an error in the partial view.
I've written 'class' instead of '@class' in htmlAttributes.
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
This is basically the same answer that Mark Gravell gave, but I think this one is an easy mistake to make if you have a larger view:
Check the html tags to see where they start and end, and notice razor syntax in between, this is wrong:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
}
</div>
```
And this is correct:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
</div>
}
```
Again, almost same as the earlier post about unclosed input element, but just beware, I've placed div's wrong plenty of times when changing a view.
|
My bad.
I've got an error in the partial view.
I've written 'class' instead of '@class' in htmlAttributes.
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
For completeness, here's another way of causing this:
```
@if(condition)
{
<input type="hidden" value="@value">
}
```
The problem is that the unclosed element makes it not obvious enough that the content is an html block (but we aren't *always* doing xhtml, right?).
In this scenario, you can use:
```
@if(condition)
{
@:<input type="hidden" value="@value">
}
```
or
```
@if(condition)
{
<text><input type="hidden" value="@value"></text>
}
```
|
The Razor parser in MVC4 is different from the one in MVC3. Razor v3 has more advanced parser features and, on the other hand, stricter parsing compared to MVC3.
--> Avoid using server blocks in views unless there is a variable declaration section.
Don't :
`@{if(check){body}}`
Recommended :
`@if(check){body}`
--> Avoid using @ when you are already in server scope.
Don’t : `@if(@variable)`
Recommended : `@if(variable)`
Don't : `@{int a = @Model.Property }`
Recommended : `@{int a = Model.Property }`
Reference : <https://code-examples.net/en/q/c3767f>
|
5,369,546
|
I tried to install python below way. But this did not work.
This take "error: bad install directory or PYTHONPATH".
[What's the proper way to install pip, virtualenv, and distribute for Python?](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python/4325047#4325047)
Make directory
```
$ mkdir -p ~/.python
```
Add .bashrc
```
#Use local python
export PATH=$HOME/.python/bin:$PATH
export PYTHONPATH=$HOME/.python
```
Create a file ~/.pydistutils.cfg
```
[install]
prefix=~/.python
```
Get install script
```
$ cd ~/src
$ curl -O http://python-distribute.org/distribute_setup.py
```
Execute and Error
```
$ python ./distribute_setup.py
Extracting in /tmp/tmpsT2kdA
Now working in /tmp/tmpsT2kdA/distribute-0.6.15
Installing Distribute
Before install bootstrap.
Scanning installed packages
No setuptools distribution foundrunning install
Checking .pth file support in /home/sane/.python/lib/python2.6/site-packages//usr/bin/python -E -c pass
TEST FAILED: /home/sane/.python/lib/python2.6/site-packages/ does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/home/sane/.python/lib/python2.6/site-packages/
and your PYTHONPATH environment variable currently contains:
'/home/sane/.python'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
http://packages.python.org/distribute/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
Something went wrong during the installation.
See the error message above.
```
My environment('sane' is my unix user name.)
```
$ python -V
Python 2.6.4
$ which python
/usr/bin/python
$ uname -a
Linux localhost.localdomain 2.6.34.8-68.fc13.x86_64 #1 SMP Thu Feb 17 15:03:58 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
```
|
2011/03/20
|
[
"https://Stackoverflow.com/questions/5369546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/104080/"
] |
This is basically the same answer that Mark Gravell gave, but I think this one is an easy mistake to make if you have a larger view:
Check the html tags to see where they start and end, and notice razor syntax in between, this is wrong:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
}
</div>
```
And this is correct:
```
@using (Html.BeginForm())
{
<div class="divClass">
@Html.DisplayFor(c => c.SomeProperty)
</div>
}
```
Again, almost same as the earlier post about unclosed input element, but just beware, I've placed div's wrong plenty of times when changing a view.
|
The Razor parser in MVC4 is different from the one in MVC3. Razor v3 has more advanced parser features and, on the other hand, stricter parsing compared to MVC3.
--> Avoid using server blocks in views unless there is a variable declaration section.
Don't :
`@{if(check){body}}`
Recommended :
`@if(check){body}`
--> Avoid using @ when you are already in server scope.
Don’t : `@if(@variable)`
Recommended : `@if(variable)`
Don't : `@{int a = @Model.Property }`
Recommended : `@{int a = Model.Property }`
Reference : <https://code-examples.net/en/q/c3767f>
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
whitelist = set(string.ascii_lowercase)
s = s.lower()
s = ''.join([char for char in s if char in whitelist])
return s == s[::-1]
```
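For example, called against the sentence from the question:
```python
print(is_palindrome("Red Roses run no risk, sir, on nurses order."))  # True
```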
|
Without for:
```
word_pilandrom = str(input("Please enter a word: "))
new_word=word_pilandrom.lower().replace(" ","")
if new_word[::1] == new_word[::-1]:
print("OK")
else:
print("NOT")
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
To check sentence palindrome-ness, the algorithm seems to be:
1. Remove any non-alphabetical character
2. Compare `new_s` to `new_s[::-1]` case-insensitively.
You can do the former by doing:
```
import string
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in original_s if ch in valid])
```
Then the latter by doing:
```
result_s.casefold() == result_s.casefold()[::-1]
```
Put the whole thing together with:
```
import string
s = "Red roses run no risk, sir, on nurses order"
s2 = "abcba"
s_fail = "blah"
def is_palindrome(s):
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in s if ch in valid])
cf_s = result_s.casefold()
return cf_s == cf_s[::-1]
assert(is_palindrome(s))
assert(is_palindrome(s2))
assert(is_palindrome(s_fail)) # throws AssertionError
```
|
If by palindrome sentence you mean ignoring spaces, you can do that like this:
```
is_palindrome(sentence.replace(' ', ''))
```
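If punctuation and capital letters should be ignored as well (as in the example sentence), a small sketch:
```python
def is_sentence_palindrome(sentence):
    # keep letters only, case-insensitively
    cleaned = ''.join(ch.lower() for ch in sentence if ch.isalpha())
    return cleaned == cleaned[::-1]

print(is_sentence_palindrome("Red Roses run no risk, sir, on nurses order."))  # True
```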
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
whitelist = set(string.ascii_lowercase)
s = s.lower()
s = ''.join([char for char in s if char in whitelist])
return s == s[::-1]
```
|
We need to check whether the reverse of a string equals the original string with two additional requirements:
* ignore case
* ignore anything except letters
The solution is:
1. Convert all letters to the same case (e.g. lower case)
2. Filter only letters
3. Check palindrome condition
```py
def is_palindrome(s):
s_lower = s.lower()
letters = [ch for ch in s_lower if ch.isalpha()]
return letters == letters[::-1]
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
whitelist = set(string.ascii_lowercase)
s = s.lower()
s = ''.join([char for char in s if char in whitelist])
return s == s[::-1]
```
|
To check sentence palindrome-ness, the algorithm seems to be:
1. Remove any non-alphabetical character
2. Compare `new_s` to `new_s[::-1]` case-insensitively.
You can do the former by doing:
```
import string
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in original_s if ch in valid])
```
Then the latter by doing:
```
result_s.casefold() == result_s.casefold()[::-1]
```
Put the whole thing together with:
```
import string
s = "Red roses run no risk, sir, on nurses order"
s2 = "abcba"
s_fail = "blah"
def is_palindrome(s):
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in s if ch in valid])
cf_s = result_s.casefold()
return cf_s == cf_s[::-1]
assert(is_palindrome(s))
assert(is_palindrome(s2))
assert(is_palindrome(s_fail)) # throws AssertionError
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
To check sentence palindrome-ness, the algorithm seems to be:
1. Remove any non-alphabetical character
2. Compare `new_s` to `new_s[::-1]` case-insensitively.
You can do the former by doing:
```
import string
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in original_s if ch in valid])
```
Then the latter by doing:
```
result_s.casefold() == result_s.casefold()[::-1]
```
Put the whole thing together with:
```
import string
s = "Red roses run no risk, sir, on nurses order"
s2 = "abcba"
s_fail = "blah"
def is_palindrome(s):
valid = set(string.ascii_letters)
result_s = ''.join([ch for ch in s if ch in valid])
cf_s = result_s.casefold()
return cf_s == cf_s[::-1]
assert(is_palindrome(s))
assert(is_palindrome(s2))
assert(is_palindrome(s_fail)) # throws AssertionError
```
|
```
word = input("Enter the word: ")
s = word.lower()
s1 = s.split(" ")
s2 = ''.join(s1)
s3 = ''.join(reversed(s2))
if s2 == s3:
    print("Palindrome")
else:
    print("Not Palindrome")
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
whitelist = set(string.ascii_lowercase)
s = s.lower()
s = ''.join([char for char in s if char in whitelist])
return s == s[::-1]
```
|
Based on the title, sentence could be termed as palindrome in 2 ways:
1. **Order of words should be inverted**: Create list of words and check for the inverted list:
```
>>> my_sentence = 'Hello World Hello'
>>> words = my_sentence.split()
>>> words == words[::-1]
True
```
2. **Order of characters should be inverted**: Check for the inverted strings:
```
>>> my_sentence = 'Hello World Hello'
>>> my_sentence == my_sentence[::-1]
False
```
What you want is the way 2, with the exception that your code should discard commas `,` and white-space . And also it should be case insensitive. Firstly remove spaces and commas using `str.replace()`, and convert the string to lower using `str.lower()`. Then make the inverted check as:
```
>>> my_sentence = 'Red Roses run no risk, sir, on nurses order'
>>> my_sentence = my_sentence.replace(' ', '').replace(',', '').lower()
>>> my_sentence == my_sentence[::-1]
True
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
if s[::1] == s[::-1]:
return True
else:
return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
    whitelist = set(string.ascii_lowercase)
    s = s.lower()
    s = ''.join([char for char in s if char in whitelist])
    return s == s[::-1]
```
|
```
text = input("Enter the word: ")   # avoid shadowing the built-in name str
s = text.lower()
s1 = s.split(" ")                  # split on spaces...
s2 = ''.join(s1)                   # ...and join back together without them
s3 = ''.join(reversed(s2))
if s2 == s3:
    print("Palindrome")
else:
    print("Not Palindrome")
```
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
    if s[::1] == s[::-1]:
        return True
    else:
        return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
You could filter the string to get only the letters like this:
```
import string

letters = ''.join(c for c in words if c in string.ascii_letters)
is_palindrome(letters)
```
You would also have to call `lower` on it:
```
import string

def is_palindrome(s):
    s = ''.join(c for c in s if c in string.ascii_letters)
    s = s.lower()
    return s == s[::-1]
```
|
We need to check whether the reverse of a string equals the original string with two additional requirements:
* ignore case
* ignore anything except letters
The solution is:
1. Convert all letters to the same case (e.g. lower case)
2. Filter only letters
3. Check palindrome condition
```py
def is_palindrome(s):
    s_lower = s.lower()
    letters = [ch for ch in s_lower if ch.isalpha()]
    return letters == letters[::-1]
```
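A quick usage sketch, assuming the function above, with the sentence from the question:
```py
>>> is_palindrome("Red Roses run no risk, sir, on nurses order.")
True
>>> is_palindrome("This sentence is not a palindrome.")
False
```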
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
    if s[::1] == s[::-1]:
        return True
    else:
        return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
```
import string
def is_palindrome(s):
    whitelist = set(string.ascii_lowercase)
    s = s.lower()
    s = ''.join([char for char in s if char in whitelist])
    return s == s[::-1]
```
|
If by palindrome sentence you mean ignoring spaces, you can do that like this:
```
is_palindrome(sentence.replace(' ', ''))
```
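That only ignores spaces; for the question's example sentence, which also contains commas and a full stop, one possible variant (a sketch, not the only way) is to keep letters only and lower-case them before the check:
```
is_palindrome(''.join(c for c in sentence.lower() if c.isalpha()))
```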
|
40,619,916
|
I am supposed to check if a word or sentence is a palindrome using code, and I was able to check for words, but I'm having trouble checking sentences as being a palindrome. Here's my code, it's short but I'm not sure how else to add it to check for sentence palindromes. I'm sort of a beginner at python, and I've already looked at other people's code, and they are too complicated for me to **really** understand. Here's what I wrote:
```
def is_palindrome(s):
    if s[::1] == s[::-1]:
        return True
    else:
        return False
```
Here is an example of a sentence palindrome: "Red Roses run no risk, sir, on nurses order." (If you ignore spaces and special characters)
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40619916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7087165/"
] |
You could filter the string to get only the letters like this:
```
import string

letters = ''.join(c for c in words if c in string.ascii_letters)
is_palindrome(letters)
```
You would also have to call `lower` on it:
```
import string

def is_palindrome(s):
    s = ''.join(c for c in s if c in string.ascii_letters)
    s = s.lower()
    return s == s[::-1]
```
|
Without a `for` loop:
```
word_palindrome = input("Please enter a word: ")
new_word = word_palindrome.lower().replace(" ", "")
if new_word == new_word[::-1]:
    print("OK")
else:
    print("NOT")
```
|
39,834,949
|
I'm aware this is normally done with twistd, but I'm wanting to use iPython to test out code 'live' on twisted code.
[How to start twisted's reactor from ipython](https://stackoverflow.com/questions/4673375/how-to-start-twisteds-reactor-from-ipython) asked basically the same thing but the first solution no longer works with current ipython/twisted, while the second is also unusable (thread raises multiple errors).
<https://gist.github.com/kived/8721434> has something called TPython which purports to do this, but running that seems to work except clients never connect to the server (while running the same clients works in the python shell).
Do I *have* to use Conch Manhole, or is there a way to get iPython to play nice (probably with `_threadedselect`)?
For reference, I'm asking using ipython 5.0.0, python 2.7.12, twisted 16.4.1
|
2016/10/03
|
[
"https://Stackoverflow.com/questions/39834949",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4443898/"
] |
Async code in general can be troublesome to run in a live interpreter. It's best just to run an async script in the background and do your iPython stuff in a separate interpreter. You can intercommunicate using files or TCP. If this went over your head, that's because it's not always simple and it might be best to avoid the hassle if possible.
However, you'll be happy to know there is an awesome project called `crochet` for using Twisted in non-async applications. It truly is one of my favorite modules and I'm shocked that it's not more widely used (you can change that ;D though). The `crochet` module has a `run_in_reactor` decorator that runs a Twisted reactor in a separate thread managed by `crochet` itself. Here is a quick class example that executes requests to a Star Wars RESTful API, then stores the JSON responses in a list.
```
from __future__ import print_function
import json
from twisted.internet import defer, task
from twisted.web.client import getPage
from crochet import run_in_reactor, setup as setup_crochet
setup_crochet()
class StarWarsPeople(object):
    people_id = [_id for _id in range(1, 89)]
    people = []

    @run_in_reactor
    def requestPeople(self):
        """
        Request Star Wars JSON data from the SWAPI site.
        This occurs in a Twisted reactor in a separate thread.
        """
        for _id in self.people_id:
            url = 'http://swapi.co/api/people/{0}'.format(_id).encode('utf-8')
            d = getPage(url)
            d.addCallback(self.appendJSON)

    def appendJSON(self, response):
        """
        A callback which will take the response from the getPage() request,
        convert it to JSON, then append it to self.people, which can be
        accessed outside of the crochet thread.
        """
        response_json = json.loads(response.decode('utf-8'))
        # print(response_json)  # uncomment if you want to see output
        self.people.append(response_json)
Save this in a file (example: `swapi.py`), open iPython, import the newly created module, then run a quick test like so:
```
from swapi import StarWarsPeople
testing = StarWarsPeople()
testing.requestPeople()
from time import sleep
for x in range(5):
    print(len(testing.people))
    sleep(2)
```
As you can see it runs in the background and stuff can still occur in the main thread. You can continue using the iPython interpreter as you usually do. You can even have a manhole running in the background for some cool hacking too!
References
==========
* <https://crochet.readthedocs.io/en/1.5.0/introduction.html#crochet-use-twisted-anywhere>
|
While this doesn't answer the question I thought I had, it does answer (sort of) the question I posted. Embedding ipython works in the sense that you get access to business objects with the reactor running.
```
from twisted.internet import reactor
from twisted.internet.endpoints import serverFromString
from myfactory import MyFactory
class MyClass(object):
    def __init__(self, **kwargs):
        super(MyClass, self).__init__(**kwargs)
        server = serverFromString(reactor, 'tcp:12345')
        server.listen(MyFactory(self))  # start listening on the endpoint

def interact():
    import IPython
    IPython.embed()

reactor.callInThread(interact)

if __name__ == "__main__":
    myclass = MyClass()
    reactor.run()
```
Call the above with `python myclass.py` or similar.
|