| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars) |
|---|---|---|---|---|---|
14,657,433
|
How do I calculate a correlation matrix in Python? I have an n-dimensional vector in which each element has 5 dimensions. For example, my vector looks like
```
[
[0.1, .32, .2, 0.4, 0.8],
[.23, .18, .56, .61, .12],
[.9, .3, .6, .5, .3],
[.34, .75, .91, .19, .21]
]
```
In this case the dimension of the vector is 4 and each element has 5 dimensions. What is the easiest way to construct the correlation matrix?
Thanks
|
2013/02/02
|
[
"https://Stackoverflow.com/questions/14657433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1964587/"
] |
You can also use `np.array` so you don't have to write your matrix out all over again:
```
import numpy as np

a = np.array([[0.1, .32, .2, 0.4, 0.8],
              [.23, .18, .56, .61, .12],
              [.9, .3, .6, .5, .3],
              [.34, .75, .91, .19, .21]])
b = np.corrcoef(a)  # correlations between the rows
print(b)
```
|
Since I almost missed that comment by @Anton Tarasenko, I'll provide a new answer. Given your array:
```
a = np.array([[0.1, .32, .2, 0.4, 0.8],
[.23, .18, .56, .61, .12],
[.9, .3, .6, .5, .3],
[.34, .75, .91, .19, .21]])
```
If you want the correlation matrix of your dimensions (columns), which I assume, you can use numpy (note the transpose!):
```
import numpy as np
print(np.corrcoef(a.T))
```
Or if you have it in Pandas anyhow:
```
import pandas as pd
print(pd.DataFrame(a).corr())
```
Both yield the same correlation matrix (NumPy prints it as an array; pandas formats it as a `DataFrame`):
```
array([[ 1. , -0.03783885, 0.34905716, 0.14648975, -0.34945863],
[-0.03783885, 1. , 0.67888519, -0.96102583, -0.12757741],
[ 0.34905716, 0.67888519, 1. , -0.45104803, -0.80429469],
[ 0.14648975, -0.96102583, -0.45104803, 1. , -0.15132323],
[-0.34945863, -0.12757741, -0.80429469, -0.15132323, 1. ]])
```
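To see why the transpose matters, a small sketch (synthetic data, assuming NumPy is available as `np`) comparing the two orientations:
```
import numpy as np

a = np.random.rand(4, 5)   # hypothetical 4x5 data: 4 samples, 5 dimensions
rows = np.corrcoef(a)      # 4x4: correlations between the 4 rows (samples)
cols = np.corrcoef(a.T)    # 5x5: correlations between the 5 columns (dimensions)
print(rows.shape, cols.shape)  # (4, 4) (5, 5)
```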
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
Quick answer:
```
import numpy as np

arrays = []
for line in open(your_file):  # no need for readlines if you don't want to store the lines
    # use a list comprehension to build each array on the fly
    new_array = np.array([float(i) for i in line.split(' ')])
    arrays.append(new_array)
```
If you often process this kind of data, the `csv` module will help.
```
import csv
arrays = []
# declare the format of your csv file and Python will turn each line into
# a list for you
parser = csv.reader(open(your_file), delimiter=' ')
for l in parser:
    arrays.append(np.array([float(i) for i in l]))
```
If you feel wild, you can even make this completely declarative:
```
import csv
parser = csv.reader(open(your_file), delimiter=' ')
make_array = lambda row: np.array([float(i) for i in row])
arrays = [make_array(row) for row in parser]
```
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
```
arrays = [np.array([float(i) for i in r]) for r in csv.reader(open(your_file), delimiter=' ')]
```
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it because I like the refactoring potential of using `csv`, but it can be good enough. It's a grey zone, so I wouldn't say it's Pythonic, but it's definitely handy.
```
arrays = [np.array([float(i) for i in l.split()]) for l in open(your_file)]
```
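As a variant on the loop above, `str.split()` with no argument absorbs runs of spaces and tabs, and `np.fromiter` can build the array without an intermediate list; a small sketch, assuming NumPy is imported as `np` and using a made-up input line:
```
import numpy as np

line = "0.1  0.2\t0.3"  # hypothetical raw line with mixed whitespace
arr = np.fromiter((float(t) for t in line.split()), dtype=float)
print(arr)  # [0.1 0.2 0.3]
```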
|
How about the following:
```
import numpy as np

arrays = []
for line in open('data.txt'):
    arrays.append(np.array([float(val) for val in line.rstrip('\n').split(' ') if val != '']))
```
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
Quick answer:
```
import numpy as np

arrays = []
for line in open(your_file):  # no need for readlines if you don't want to store the lines
    # use a list comprehension to build each array on the fly
    new_array = np.array([float(i) for i in line.split(' ')])
    arrays.append(new_array)
```
If you often process this kind of data, the `csv` module will help.
```
import csv
arrays = []
# declare the format of your csv file and Python will turn each line into
# a list for you
parser = csv.reader(open(your_file), delimiter=' ')
for l in parser:
    arrays.append(np.array([float(i) for i in l]))
```
If you feel wild, you can even make this completely declarative:
```
import csv
parser = csv.reader(open(your_file), delimiter=' ')
make_array = lambda row: np.array([float(i) for i in row])
arrays = [make_array(row) for row in parser]
```
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
```
arrays = [np.array([float(i) for i in r]) for r in csv.reader(open(your_file), delimiter=' ')]
```
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it because I like the refactoring potential of using `csv`, but it can be good enough. It's a grey zone, so I wouldn't say it's Pythonic, but it's definitely handy.
```
arrays = [np.array([float(i) for i in l.split()]) for l in open(your_file)]
```
|
[If you want a numpy array and each row in the text file has the same number of values](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html#numpy.loadtxt):
```
import numpy
a = numpy.loadtxt('data.txt')
```
Without numpy:
```
import csv
with open('data.txt') as f:
    arrays = list(csv.reader(f, delimiter=' ', quoting=csv.QUOTE_NONNUMERIC))
```
[Or just](https://stackoverflow.com/a/11052880/4279):
```
with open('data.txt') as f:
    arrays = [list(map(float, line.split())) for line in f]
```
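A self-contained sketch of the `loadtxt` route, using `io.StringIO` with made-up data in place of a real file:
```
import io
import numpy as np

fake_file = io.StringIO("0.1 0.2 0.3\n1.5 2.5 3.5\n")  # stands in for data.txt
a = np.loadtxt(fake_file)
print(a.shape)  # (2, 3)
```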
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
Quick answer:
```
import numpy as np

arrays = []
for line in open(your_file):  # no need for readlines if you don't want to store the lines
    # use a list comprehension to build each array on the fly
    new_array = np.array([float(i) for i in line.split(' ')])
    arrays.append(new_array)
```
If you often process this kind of data, the `csv` module will help.
```
import csv
arrays = []
# declare the format of your csv file and Python will turn each line into
# a list for you
parser = csv.reader(open(your_file), delimiter=' ')
for l in parser:
    arrays.append(np.array([float(i) for i in l]))
```
If you feel wild, you can even make this completely declarative:
```
import csv
parser = csv.reader(open(your_file), delimiter=' ')
make_array = lambda row: np.array([float(i) for i in row])
arrays = [make_array(row) for row in parser]
```
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
```
arrays = [np.array([float(i) for i in r]) for r in csv.reader(open(your_file), delimiter=' ')]
```
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it because I like the refactoring potential of using `csv`, but it can be good enough. It's a grey zone, so I wouldn't say it's Pythonic, but it's definitely handy.
```
arrays = [np.array([float(i) for i in l.split()]) for l in open(your_file)]
```
|
One possible one-liner:
```
a_list = [map(float, line.split(' ')) for line in a_file]
```
Note that I used `map()` here instead of a nested list comprehension to aid readability.
If you want a numpy array:
```
an_array = np.array([map(float, line.split(' ')) for line in a_file])
```
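Note that on Python 3, `map()` returns a lazy iterator rather than a list, so a `list()` wrapper is needed to materialize each row; a quick sketch with a hypothetical input line:
```
line = "0.1 0.2 0.3"                      # hypothetical input line
vals = list(map(float, line.split(' ')))  # list() forces the iterator on Python 3
print(vals)  # [0.1, 0.2, 0.3]
```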
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
Quick answer:
```
import numpy as np

arrays = []
for line in open(your_file):  # no need for readlines if you don't want to store the lines
    # use a list comprehension to build each array on the fly
    new_array = np.array([float(i) for i in line.split(' ')])
    arrays.append(new_array)
```
If you often process this kind of data, the `csv` module will help.
```
import csv
arrays = []
# declare the format of your csv file and Python will turn each line into
# a list for you
parser = csv.reader(open(your_file), delimiter=' ')
for l in parser:
    arrays.append(np.array([float(i) for i in l]))
```
If you feel wild, you can even make this completely declarative:
```
import csv
parser = csv.reader(open(your_file), delimiter=' ')
make_array = lambda row: np.array([float(i) for i in row])
arrays = [make_array(row) for row in parser]
```
And if you really want your colleagues to hate you, you can make a one-liner (NOT PYTHONIC AT ALL :-):
```
arrays = [np.array([float(i) for i in r]) for r in csv.reader(open(your_file), delimiter=' ')]
```
Stripping all the boilerplate and flexibility, you can end up with a clean and quite readable one-liner. I wouldn't use it because I like the refactoring potential of using `csv`, but it can be good enough. It's a grey zone, so I wouldn't say it's Pythonic, but it's definitely handy.
```
arrays = [np.array([float(i) for i in l.split()]) for l in open(your_file)]
```
|
I would use regular expressions:
```
import re
import numpy as np

all_lines = ''.join(file.readlines())
new_array = np.array(re.findall(r'[\d.E+-]+', all_lines), float)
new_array = np.reshape(new_array, (m, n))  # m, n: the desired shape
```
First merge the file's lines into one long string, then extract only the expressions that look like floats (`r'[\d.E+-]+'` also covers scientific notation; `r'[\d.]+'` matches plain float literals only).
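A quick sketch of this pattern on a string containing scientific notation (made-up data):
```
import re

text = "1.0 -2.5E+03\n3.14 7E-2"        # hypothetical file contents
tokens = re.findall(r'[\d.E+-]+', text)  # also matches signs and exponents
print([float(t) for t in tokens])  # [1.0, -2500.0, 3.14, 0.07]
```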
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
[If you want a numpy array and each row in the text file has the same number of values](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html#numpy.loadtxt):
```
import numpy
a = numpy.loadtxt('data.txt')
```
Without numpy:
```
import csv
with open('data.txt') as f:
    arrays = list(csv.reader(f, delimiter=' ', quoting=csv.QUOTE_NONNUMERIC))
```
[Or just](https://stackoverflow.com/a/11052880/4279):
```
with open('data.txt') as f:
    arrays = [list(map(float, line.split())) for line in f]
```
|
How about the following:
```
import numpy as np

arrays = []
for line in open('data.txt'):
    arrays.append(np.array([float(val) for val in line.rstrip('\n').split(' ') if val != '']))
```
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
How about the following:
```
import numpy as np

arrays = []
for line in open('data.txt'):
    arrays.append(np.array([float(val) for val in line.rstrip('\n').split(' ') if val != '']))
```
|
One possible one-liner:
```
a_list = [map(float, line.split(' ')) for line in a_file]
```
Note that I used `map()` here instead of a nested list comprehension to aid readability.
If you want a numpy array:
```
an_array = np.array([map(float, line.split(' ')) for line in a_file])
```
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
How about the following:
```
import numpy as np

arrays = []
for line in open('data.txt'):
    arrays.append(np.array([float(val) for val in line.rstrip('\n').split(' ') if val != '']))
```
|
I would use regular expressions:
```
import re
import numpy as np

all_lines = ''.join(file.readlines())
new_array = np.array(re.findall(r'[\d.E+-]+', all_lines), float)
new_array = np.reshape(new_array, (m, n))  # m, n: the desired shape
```
First merge the file's lines into one long string, then extract only the expressions that look like floats (`r'[\d.E+-]+'` also covers scientific notation; `r'[\d.]+'` matches plain float literals only).
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
[If you want a numpy array and each row in the text file has the same number of values](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html#numpy.loadtxt):
```
import numpy
a = numpy.loadtxt('data.txt')
```
Without numpy:
```
import csv
with open('data.txt') as f:
    arrays = list(csv.reader(f, delimiter=' ', quoting=csv.QUOTE_NONNUMERIC))
```
[Or just](https://stackoverflow.com/a/11052880/4279):
```
with open('data.txt') as f:
    arrays = [list(map(float, line.split())) for line in f]
```
|
One possible one-liner:
```
a_list = [map(float, line.split(' ')) for line in a_file]
```
Note that I used `map()` here instead of a nested list comprehension to aid readability.
If you want a numpy array:
```
an_array = np.array([map(float, line.split(' ')) for line in a_file])
```
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
[If you want a numpy array and each row in the text file has the same number of values](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html#numpy.loadtxt):
```
import numpy
a = numpy.loadtxt('data.txt')
```
Without numpy:
```
import csv
with open('data.txt') as f:
    arrays = list(csv.reader(f, delimiter=' ', quoting=csv.QUOTE_NONNUMERIC))
```
[Or just](https://stackoverflow.com/a/11052880/4279):
```
with open('data.txt') as f:
    arrays = [list(map(float, line.split())) for line in f]
```
|
I would use regular expressions:
```
import re
import numpy as np

all_lines = ''.join(file.readlines())
new_array = np.array(re.findall(r'[\d.E+-]+', all_lines), float)
new_array = np.reshape(new_array, (m, n))  # m, n: the desired shape
```
First merge the file's lines into one long string, then extract only the expressions that look like floats (`r'[\d.E+-]+'` also covers scientific notation; `r'[\d.]+'` matches plain float literals only).
|
6,213,336
|
I'm reading lines from a file in order to work with them. Each line is composed solely of float numbers.
I have pretty much everything sorted out to convert the lines into arrays.
I basically do (pseudo-Python code):
```
line=file.readlines()
line=line.split(' ') # Or whatever separator
array=np.array(line)
#And then iterate over every value casting them as floats
newarray[i]=array.float(array[i])
```
This works, but it seems counterintuitive and unpythonic. I wanted to know if there is a better way to turn the file's contents into an array of floats.
|
2011/06/02
|
[
"https://Stackoverflow.com/questions/6213336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486262/"
] |
One possible one-liner:
```
a_list = [map(float, line.split(' ')) for line in a_file]
```
Note that I used `map()` here instead of a nested list comprehension to aid readability.
If you want a numpy array:
```
an_array = np.array([map(float, line.split(' ')) for line in a_file])
```
|
I would use regular expressions
import re
```
all_lines = ''.join( file.readlines() )
new_array = np.array( re.findall('[\d.E+-]+', all_lines), float)
np.reshape( new_array, (m,n) )
```
First merging the files into one long string, and then extracting only the expressions corresponding to floats ( **'[\d.E+-]'** for scientific notation, but you can also use **'[\d.]'** for only float expressions).
|
47,057,572
|
I tried to use pytesseract:
```
import pytesseract
from PIL import Image
pytesseract.pytesseract.tesseract_cmd = 'C:\\Python27\\scripts\\pytesseract.exe'
im = Image.open('Download.png')
print pytesseract.image_to_string(im)
```
But I got this error:
```
Traceback (most recent call last):
File "C:/Python27/ocr.py", line 11, in <module>
print pytesseract.image_to_string(im)
File "C:\Python27\lib\site-packages\pytesseract\pytesseract.py", line
125, in image_to_string
raise TesseractError(status, errors)
TesseractError: (2, u'Usage: python pytesseract.py [-l lang] input_file')
```
What is wrong?
|
2017/11/01
|
[
"https://Stackoverflow.com/questions/47057572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8500407/"
] |
You need to install Tesseract using the Windows installer available [here](https://github.com/UB-Mannheim/tesseract/wiki). Then install the Python wrapper:
```
pip install pytesseract
```
Then set the tesseract path in your script after importing the pytesseract library, as below (note that the installation path may differ on your machine):
```
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract.exe'
```
Note: It is tested on Anaconda3, Anaconda2, Py3 and Py2 without any issues.
|
I think there is something wrong with your path `'C:\Python27\scripts\pytesseract.exe'`: it points to the pytesseract.py wrapper script (hence pytesseract.py in the error; the exact message is raised in its main function, which runs only if `__name__ == "__main__"`).
The path must actually point to `tesseract.exe`, which is downloaded separately. Look at the 3rd point under installation in the link (<https://pypi.python.org/pypi/pytesseract>).
This has to be done because pytesseract is only a Python wrapper around the Tesseract program, so it calls `tesseract.exe` on your local machine to do the actual OCR work.
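As a quick sanity check (a hypothetical snippet, not part of pytesseract itself), one can verify that the configured path points at an actual Tesseract binary rather than the wrapper script:
```
import os
import shutil

# hypothetical candidate path; adjust to your installation
candidate = r'C:\Program Files (x86)\Tesseract-OCR\tesseract.exe'
# prefer a binary found on PATH, else fall back to the candidate if it exists
found = shutil.which('tesseract') or (candidate if os.path.isfile(candidate) else None)
print('tesseract binary:', found)  # None means Tesseract still needs to be installed
```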
|
65,040,971
|
I set up a new Debian 10 (Buster) instance on AWS EC2 and was able to install a pip3 package that depended on netifaces, but when I came back the next day the package was broken, reporting an error in netifaces. If I try to run `pip3 install netifaces` I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] |
`HTMLParser().unescape` was removed in Python 3.9. Compare [the code in Python 3.8](https://github.com/python/cpython/blob/v3.8.0/Lib/html/parser.py#L466) vs [Python 3.9](https://github.com/python/cpython/blob/v3.9.0/Lib/html/parser.py).
The error seems to be a bug in `setuptools`. Try to upgrade `setuptools`. Or use Python 3.8.
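For code that relied on the removed method, the module-level `html.unescape()` (available since Python 3.4) is the drop-in replacement:
```
import html

# html.unescape replaces HTMLParser().unescape, which was removed in 3.9
print(html.unescape("&lt;b&gt;Fish &amp; Chips&lt;/b&gt;"))  # <b>Fish & Chips</b>
```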
|
I was facing this issue in PyCharm 2018. Apart from upgrading `setuptools` as mentioned above, I also had to upgrade to `PyCharm 2020.3.4` to solve this issue. Related bug on PyCharm issue tracker: <https://youtrack.jetbrains.com/issue/PY-39579>
Hope this helps someone avoid spending hours trying to debug this.
|
65,040,971
|
I set up a new Debian 10 (Buster) instance on AWS EC2 and was able to install a pip3 package that depended on netifaces, but when I came back the next day the package was broken, reporting an error in netifaces. If I try to run `pip3 install netifaces` I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] |
`HTMLParser().unescape` was removed in Python 3.9. Compare [the code in Python 3.8](https://github.com/python/cpython/blob/v3.8.0/Lib/html/parser.py#L466) vs [Python 3.9](https://github.com/python/cpython/blob/v3.9.0/Lib/html/parser.py).
The error seems to be a bug in `setuptools`. Try to upgrade `setuptools`. Or use Python 3.8.
|
Downgrading to an older Python 3 version is not the solution, and most of the time upgrading setuptools won't fix the issue either. The fix that worked for me with pip on Python 3.9 (Ubuntu 18.04): locate `/usr/lib/python3/dist-packages/setuptools/py33compat.py` and change
```
# unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)  # comment out this line
unescape = getattr(html, 'unescape', None)
if unescape is None:
    # HTMLParser.unescape is deprecated since Python 3.4, and was removed
    # in 3.9.
    unescape = html_parser.HTMLParser().unescape
```
|
65,040,971
|
I set up a new Debian 10 (Buster) instance on AWS EC2 and was able to install a pip3 package that depended on netifaces, but when I came back the next day the package was broken, reporting an error in netifaces. If I try to run `pip3 install netifaces` I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] |
`HTMLParser().unescape` was removed in Python 3.9. Compare [the code in Python 3.8](https://github.com/python/cpython/blob/v3.8.0/Lib/html/parser.py#L466) vs [Python 3.9](https://github.com/python/cpython/blob/v3.9.0/Lib/html/parser.py).
The error seems to be a bug in `setuptools`. Try to upgrade `setuptools`. Or use Python 3.8.
|
I had Python 3.6 and related packages installed through the deb package manager.
I needed Python 3.9 for a side project, and the fix for pip and `AttributeError: 'HTMLParser' object has no attribute 'unescape'` was to update pip for Python 3.9 locally, for one user:
```
python3.9 -m pip install --upgrade pip
```
Now installing the Python 3.9 versions of pip packages works:
```
python3.9 -m pip install --target=~/.local/lib/python3.9/site-packages numpy
```
|
65,040,971
|
I set up a new Debian 10 (Buster) instance on AWS EC2 and was able to install a pip3 package that depended on netifaces, but when I came back the next day the package was broken, reporting an error in netifaces. If I try to run `pip3 install netifaces` I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] |
I was facing this issue in PyCharm 2018. Apart from upgrading `setuptools` as mentioned above, I also had to upgrade to `PyCharm 2020.3.4` to solve this issue. Related bug on PyCharm issue tracker: <https://youtrack.jetbrains.com/issue/PY-39579>
Hope this helps someone avoid spending hours trying to debug this.
|
Downgrading to an older Python 3 version is not the solution, and most of the time upgrading setuptools won't fix the issue either. The fix that worked for me with pip on Python 3.9 (Ubuntu 18.04): locate `/usr/lib/python3/dist-packages/setuptools/py33compat.py` and change
```
# unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)  # comment out this line
unescape = getattr(html, 'unescape', None)
if unescape is None:
    # HTMLParser.unescape is deprecated since Python 3.4, and was removed
    # in 3.9.
    unescape = html_parser.HTMLParser().unescape
```
|
65,040,971
|
I set up a new Debian 10 (Buster) instance on AWS EC2 and was able to install a pip3 package that depended on netifaces, but when I came back the next day the package was broken, reporting an error in netifaces. If I try to run `pip3 install netifaces` I get the same error:
```
~$ pip3 install netifaces
Collecting netifaces
Using cached https://files.pythonhosted.org/packages/0d/18/fd6e9c71a35b67a73160ec80a49da63d1eed2d2055054cc2995714949132/netifaces-0.10.9.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 20, in <module>
from setuptools.dist import Distribution, Feature
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 35, in <module>
from setuptools.depends import Require
File "/usr/lib/python3/dist-packages/setuptools/depends.py", line 7, in <module>
from .py33compat import Bytecode
File "/usr/lib/python3/dist-packages/setuptools/py33compat.py", line 55, in <module>
unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape)
AttributeError: 'HTMLParser' object has no attribute 'unescape'
```
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65040971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/439005/"
] |
I had Python 3.6 and related packages installed through deb package management. I needed Python 3.9 for a side project, and the solution that fixed pip and the `AttributeError: 'HTMLParser' object has no attribute 'unescape'` error was to update pip for Python 3.9 locally for one user:
```
python3.9 -m pip install --upgrade pip
```
Now installing the Python 3.9 version of pip packages works:
```
python3.9 -m pip install --target=~/.local/lib/python3.9/site-packages numpy
```
|
Downgrading to an older Python 3 version is not the solution, and most of the time upgrading setuptools won't fix the issue either. The proper fix that let me use pip with Python 3.9 on Ubuntu 18 is the following:
Locate `/usr/lib/python3/dist-packages/setuptools/py33compat.py`
and change
```
# unescape = getattr(html, 'unescape', html_parser.HTMLParser().unescape) # comment out this line
unescape = getattr(html, 'unescape', None)
if unescape is None:
# HTMLParser.unescape is deprecated since Python 3.4, and will be removed
# from 3.9.
unescape = html_parser.HTMLParser().unescape
```
|
8,778,865
|
I was writing a program in python
```
import sys
def func(N, M):
if N == M:
return 0.00
else:
if M == 0:
return pow(2, N+1) - 2.00
else :
return 1.00 + (0.5)*func(N, M+1) + 0.5*func(N, 0)
def main(*args):
test_cases = int(raw_input())
while test_cases:
string = raw_input()
a = string.split(" ")
N = int(a[0])
M = int(a[1])
test_cases = test_cases -1
result = func(N, M)
print("%.2f" % round(result, 2))
if __name__ == '__main__':
sys.setrecursionlimit(1500)
sys.exit(main(*sys.argv))
```
It gives the same answer for N = 1000, M = 1 and N = 1000, M = 2.
On searching I found that floats break down above roughly 10^308. My question is how to overcome this limit.
|
2012/01/08
|
[
"https://Stackoverflow.com/questions/8778865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1032610/"
] |
Floats in Python are IEEE doubles: they are not unlimited precision. But if your computation only needs integers, then just use integers: they are unlimited precision. Unfortunately, I think your computation does not stay within the integers.
There are third-party packages built on GMP that provide arbitrary-precision floats: <https://www.google.com/search?q=python%20gmp>
|
Consider using an arbitrary precision floating-point library, for example the [bigfloat](http://packages.python.org/bigfloat/) package, or [mpmath](http://code.google.com/p/mpmath/).
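If a third-party dependency is unwanted, the standard library's `decimal` module also provides user-configurable precision (a minimal sketch of the idea, not the bigfloat/mpmath API):

```python
from decimal import Decimal, getcontext

# A 53-bit double cannot tell 2**1001 - 2 from 2**1001 - 3:
assert 2.0 ** 1001 - 2 == 2.0 ** 1001 - 3

# decimal keeps as many digits as requested; 2**1001 has ~302 digits,
# so 400 significant digits make the subtraction exact.
getcontext().prec = 400
a = Decimal(2) ** 1001 - 2
b = Decimal(2) ** 1001 - 3
print(a - b)  # exactly 1
```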
|
8,778,865
|
I was writing a program in python
```
import sys
def func(N, M):
if N == M:
return 0.00
else:
if M == 0:
return pow(2, N+1) - 2.00
else :
return 1.00 + (0.5)*func(N, M+1) + 0.5*func(N, 0)
def main(*args):
test_cases = int(raw_input())
while test_cases:
string = raw_input()
a = string.split(" ")
N = int(a[0])
M = int(a[1])
test_cases = test_cases -1
result = func(N, M)
print("%.2f" % round(result, 2))
if __name__ == '__main__':
sys.setrecursionlimit(1500)
sys.exit(main(*sys.argv))
```
It gives the same answer for N = 1000, M = 1 and N = 1000, M = 2.
On searching I found that floats break down above roughly 10^308. My question is how to overcome this limit.
|
2012/01/08
|
[
"https://Stackoverflow.com/questions/8778865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1032610/"
] |
I maintain one of the Python to GMP/MPFR libraries and I tested your function. After checking the results and looking at your function, I think your function remains entirely in the integers. The following function returns the same values:
```
def func(N, M):
if M == 0:
return 2**(N+1) - 2
elif N == M:
return 0
else:
return func(N, M+1)//2 + 2**N
```
The limiting factor with Python's builtin float is not the exponent range (roughly 10\*\*308) but the precision (53 bits). You need around N bits of precision to distinguish between func(N, 1) and func(N, 2).
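A quick sanity check of the integer-only version above (a sketch; the recursion limit is raised because the call chain runs about N frames deep):

```python
import sys

def func(N, M):
    # Integer-only variant: Python ints have unlimited precision.
    if M == 0:
        return 2 ** (N + 1) - 2
    elif N == M:
        return 0
    else:
        return func(N, M + 1) // 2 + 2 ** N

sys.setrecursionlimit(3000)  # func(1000, 1) recurses ~999 frames deep

# With 53-bit floats these two compared equal; with integers they differ.
print(func(1000, 1) != func(1000, 2))  # True
```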
|
Consider using an arbitrary precision floating-point library, for example the [bigfloat](http://packages.python.org/bigfloat/) package, or [mpmath](http://code.google.com/p/mpmath/).
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
You are correct in that the memory references should change. Take a careful look at the memory addresses: they're not identical.
Edit: Regarding your edit, the memory address only changes on reassignment of the variable. The memory of the variable stays the same if you mutate the list.
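A minimal demonstration of that rule:

```python
mylist = [0, 1]
before = id(mylist)

mylist.append(2)       # mutation: same object, so the id is unchanged
assert id(mylist) == before

mylist = mylist + [3]  # rebinding: `+` builds a brand-new list object
assert id(mylist) != before
print("mutation keeps the id; rebinding changes it")
```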
|
Take a look at the ids you provided. They are completely different: 4338643744 != 4338953744. Compare the first five digits: 43386 != 43389. Everything is working as expected because the memory reference changes when the list is reassigned.
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
You are correct in that the memory references should change. Take a careful look at the memory addresses: they're not identical.
Edit: Regarding your edit, the memory address only changes on reassignment of the variable. The memory of the variable stays the same if you mutate the list.
|
The memory addresses are not identical. It is behaving as expected -
`mylist = ['123']` --> Points to memory address **4338643744** (See 8643 in between)
[](https://i.stack.imgur.com/APDkj.png)
`mylist = ['123','a']` --> Points to memory address **4338953744**
[](https://i.stack.imgur.com/H6Tuq.png)
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
You are correct in that the memory references should change. Take a careful look at the memory addresses: they're not identical.
Edit: Regarding your edit, the memory address only changes on reassignment of the variable. The memory of the variable stays the same if you mutate the list.
|
`id()` returns a unique identifier for the specified object. In CPython it is the object's memory address, so it stays the same while you mutate the object in place and only changes when the name is rebound to a new object.
[](https://i.stack.imgur.com/eJqDt.png)refere[](https://i.stack.imgur.com/zOk1a.png)
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
The memory addresses are not identical. It is behaving as expected -
`mylist = ['123']` --> Points to memory address **4338643744** (See 8643 in between)
[](https://i.stack.imgur.com/APDkj.png)
`mylist = ['123','a']` --> Points to memory address **4338953744**
[](https://i.stack.imgur.com/H6Tuq.png)
|
Take a look at the ids you provided. They are completely different: 4338643744 != 4338953744. Compare the first five digits: 43386 != 43389. Everything is working as expected because the memory reference changes when the list is reassigned.
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
Take a look at the ids you provided. They are completely different: 4338643744 != 4338953744. Compare the first five digits: 43386 != 43389. Everything is working as expected because the memory reference changes when the list is reassigned.
|
`id()` returns a unique identifier for the specified object. In CPython it is the object's memory address, so it stays the same while you mutate the object in place and only changes when the name is rebound to a new object.
[](https://i.stack.imgur.com/eJqDt.png)refere[](https://i.stack.imgur.com/zOk1a.png)
|
63,002,403
|
Is this as expected? I thought in Python, variables are pointers to objects in memory. If I modify the python list that a variable points to once, the memory reference changes. But if I modify it again, the memory reference is the same?
```
>>> id(mylist)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'mylist' is not defined
>>> mylist = [0]
>>> id(mylist)
4417893152
>>> mylist = [0, 1]
>>> id(mylist)
4418202992 # ID changes
>>> mylist.append(3)
>>> mylist
[0, 1, 3]
>>> id(mylist)
4418202992 # ID stays the same
>>> mylist.append(4)
>>> mylist
[0, 1, 3, 4]
>>> id(mylist)
4418202992 # ID stays the same
>>>
```
|
2020/07/20
|
[
"https://Stackoverflow.com/questions/63002403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3552698/"
] |
The memory addresses are not identical. It is behaving as expected -
`mylist = ['123']` --> Points to memory address **4338643744** (See 8643 in between)
[](https://i.stack.imgur.com/APDkj.png)
`mylist = ['123','a']` --> Points to memory address **4338953744**
[](https://i.stack.imgur.com/H6Tuq.png)
|
`id()` returns a unique identifier for the specified object. In CPython it is the object's memory address, so it stays the same while you mutate the object in place and only changes when the name is rebound to a new object.
[](https://i.stack.imgur.com/eJqDt.png)refere[](https://i.stack.imgur.com/zOk1a.png)
|
73,648,264
|
I have a json like this but much longer:
```
[
{
"id": "123",
"name": "home network configuration",
"description": "home utilities",
"definedRanges": [
{
"id": "6500b67e",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
},
{
"id": "456",
"name": "lab network configuration",
"description": "lab experiments",
"definedRanges": [
{
"id": "1209b90d",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "ALLOCATED"
},
{
"id": "99e08ca4",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
}
]
```
I'd like to query with jq and obtain the following:
```
[
{
"name": "home network configuration"
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
},
{
"name": "lab network configuration",
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
},
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
}
]
```
or even this:
```
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
}
]
```
So far I was able to extract the network name at the first level with:
```
.[] | {name}
```
I could also extract the definedRanges with:
```
.[].definedRanges[] | {name,beginIPv4Address,endIPv4Address}
```
But I can't figure out how to merge the two with jq.
I solved the problem with a very simple python script (7 lines of code) but now I'd like to understand how to do the same with jq, out of curiosity.
|
2022/09/08
|
[
"https://Stackoverflow.com/questions/73648264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066865/"
] |
Well, you were close. Here's how you put those together:
```
map({name, definedRanges: .definedRanges | map({name, beginIPv4Address, endIPv4Address})})
```
[Online demo](https://jqplay.org/s/tHkRlLUV3K-)
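For comparison, the same reshaping can be written in plain Python with only the standard library (a sketch on a trimmed copy of the input; it mirrors the jq filter above, it is not a jq feature):

```python
import json

# A trimmed copy of the question's input.
networks = [
    {"id": "123", "name": "home network configuration",
     "description": "home utilities",
     "definedRanges": [
         {"id": "6500b67e", "name": "100-200",
          "beginIPv4Address": "192.168.090.100",
          "endIPv4Address": "192.168.090.200",
          "state": "UNALLOCATED"}]},
]

keep = ("name", "beginIPv4Address", "endIPv4Address")
result = [
    {"name": n["name"],
     "definedRanges": [{k: r[k] for k in keep} for r in n["definedRanges"]]}
    for n in networks
]
print(json.dumps(result, indent=2))
```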
|
Here's my shot at solutions to produce one or the other desired output:
```
map(
{ name }
+ (.definedRanges[] | {
"definedRanges.name": .name,
"definedRanges.beginIPv4Address": .beginIPv4Address,
"definedRanges.endIPv4Address": .endIPv4Address
}))
```
Output:
```json
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
}
]
```
Producing the first kind of output is even simpler (IMHO it reads a bit more straightforward):
```
map({
name,
definedRanges: .definedRanges | map({ name, beginIPv4Address, endIPv4Address })
})
```
Output:
```json
[
  {
    "name": "home network configuration",
    "definedRanges": [
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      }
    ]
  },
  {
    "name": "lab network configuration",
    "definedRanges": [
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      },
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      }
    ]
  }
]
```
|
73,648,264
|
I have a json like this but much longer:
```
[
{
"id": "123",
"name": "home network configuration",
"description": "home utilities",
"definedRanges": [
{
"id": "6500b67e",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
},
{
"id": "456",
"name": "lab network configuration",
"description": "lab experiments",
"definedRanges": [
{
"id": "1209b90d",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "ALLOCATED"
},
{
"id": "99e08ca4",
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
"state": "UNALLOCATED"
}
]
}
]
```
I'd like to query with jq and obtain the following:
```
[
{
"name": "home network configuration"
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
},
{
"name": "lab network configuration",
"definedRanges": [
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
},
{
"name": "100-200",
"beginIPv4Address": "192.168.090.100",
"endIPv4Address": "192.168.090.200",
}
]
}
]
```
or even this:
```
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
}
]
```
So far I was able to extract the network name at the first level with:
```
.[] | {name}
```
I could also extract the definedRanges with:
```
.[].definedRanges[] | {name,beginIPv4Address,endIPv4Address}
```
But I can't figure out how to merge the two with jq.
I solved the problem with a very simple python script (7 lines of code) but now I'd like to understand how to do the same with jq, out of curiosity.
|
2022/09/08
|
[
"https://Stackoverflow.com/questions/73648264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1066865/"
] |
The '*or even this*' can be achieved using:
```
map(.name as $name | .definedRanges[] | { name, beginIPv4Address, endIPv4Address} | with_entries(.key = "definedRanges." + .key) | .name = $name)
```
Which yields:
```json
[
{
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
"name": "home network configuration"
},
{
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
"name": "lab network configuration"
},
{
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200",
"name": "lab network configuration"
}
]
```
[Demo](https://jqplay.org/s/m7gEiLLqkE4)
|
Here's my shot at solutions to produce one or the other desired output:
```
map(
{ name }
+ (.definedRanges[] | {
"definedRanges.name": .name,
"definedRanges.beginIPv4Address": .beginIPv4Address,
"definedRanges.endIPv4Address": .endIPv4Address
}))
```
Output:
```json
[
{
"name": "home network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
},
{
"name": "lab network configuration",
"definedRanges.name": "100-200",
"definedRanges.beginIPv4Address": "192.168.090.100",
"definedRanges.endIPv4Address": "192.168.090.200"
}
]
```
Producing the first kind of output is even simpler (IMHO it reads a bit more straightforward):
```
map({
name,
definedRanges: .definedRanges | map({ name, beginIPv4Address, endIPv4Address })
})
```
Output:
```json
[
  {
    "name": "home network configuration",
    "definedRanges": [
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      }
    ]
  },
  {
    "name": "lab network configuration",
    "definedRanges": [
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      },
      {
        "name": "100-200",
        "beginIPv4Address": "192.168.090.100",
        "endIPv4Address": "192.168.090.200"
      }
    ]
  }
]
```
|
295,028
|
I have a very tricky situation (by my standards) at hand. I have a script that needs to read a script variable name from [ConfigParser](https://docs.python.org/2/library/configparser.html). For example, I need to read
```
self.post.id
```
from a .cfg file and use it as a variable in the script. How do I achieve this?
I suppose I was unclear in my query. The .cfg file looks something like:
```
[head]
test: me
some variable : self.post.id
```
This self.post.id is to be replaced at the run time, taking values from the script.
|
2008/11/17
|
[
"https://Stackoverflow.com/questions/295028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2220518/"
] |
test.ini:
```
[head]
var: self.post.id
```
python:
```
import ConfigParser
class Test:
def __init__(self):
self.post = TestPost(5)
def getPost(self):
config = ConfigParser.ConfigParser()
config.read('/path/to/test.ini')
newvar = config.get('head', 'var')
print eval(newvar)
class TestPost:
def __init__(self, id):
self.id = id
test = Test()
test.getPost() # prints 5
```
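If executing arbitrary strings from a config file is a concern, a safer alternative (my own sketch, not part of the answer above) walks the dotted attribute path with `getattr` instead of calling `eval`:

```python
from functools import reduce

def resolve(obj, dotted):
    """Follow a dotted attribute path like 'post.id' starting at obj."""
    return reduce(getattr, dotted.split('.'), obj)

class TestPost:
    def __init__(self, id):
        self.id = id

class Test:
    def __init__(self):
        self.post = TestPost(5)

# The config value 'self.post.id': strip the leading 'self.' and walk
# the remaining attributes instead of evaluating the string as code.
test = Test()
print(resolve(test, 'post.id'))  # 5
```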
|
This is a bit silly.
You have a dynamic language, distributed in source form.
You're trying to make what amounts to a change to the source. Which is easy-to-read, plain text Python.
Why not just change the Python source and stop messing about with a configuration file?
It's a lot easier to have a block of code like this
```
# Change this for some reason or another
x = self.post.id # Standard Configuration
# x = self.post.somethingElse # Another Configuration
# x = self.post.yetAnotherCase # A third configuration
```
it's just as complex to change this as it is to change a configuration file. And your Python program is simpler and more clear.
|
45,533,019
|
I was making a permutation script with Python and I looked for how to make a multidimensional array, but the only way I could find was `array3 = [ [ "" for i in range(12) ] for j in range(4) ]`. Is there any way I can make it multidimensional without fixing its size? I also found that it's possible to make it like `array = [[]]`, but I can't find the way to put anything inside.
I'm trying to put letters and words inside the array, so I think I can't use numpy.
For the other problem, the index out of range, I'm trying this:
```
array = [ ["a","b","c","d","e","f"],["7","8","9","0","11","12"]]
array2 = [ ["1","2","3","4","5","6"],["g","h","i","j","k","l"]]
array3 = [ [ "" for i in range(12) ] for j in range(4) ]
i,j = 0,0
print(array[0][0] + array2[0][1])
for k in range(3):
for l in range(2):
for m in range(4):
for n in range(7):
if j > 5:
j = 0
i += 1
print(m,n,k,l,i,j)
array3[m][n] =array[k][l] + array2[i][j]
j += 1
print(array3)
```
I was trying to put the first multidimensional array and the second together with a permutation algorithm but it says that the index is out of range...
What I want it to print is: a1, a2, a3, a4, a5, a6, ag, ah, ai, aj, ak, al, b1, b2...
|
2017/08/06
|
[
"https://Stackoverflow.com/questions/45533019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7848729/"
] |
Use `itertools.chain.from_iterable` and `itertools.product`:
```
from itertools import chain, product
for first, second in product(chain.from_iterable(array),
chain.from_iterable(array2)):
print("{}{}".format(first, second))
```
Since `array` and `array2` are lists and not arbitrary iterables, you can shorten this by using `chain` itself with argument unpacking:
```
for first, second in product(chain(*array), chain(*array2)):
print("{}{}".format(first, second))
```
---
`array3` can be created as a flat list:
```
array3 = ["{}{}".format(first, second) for first, second in product(chain(*array), chain(*array2))]
```
or as a nested list:
```
array3 = [["{}{}".format(first, second) for second in chain(*array2)] for first in chain(*array)]
```
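A self-contained run with smaller inputs shows the flatten-then-pair behaviour (the list contents here are made up for illustration):

```python
from itertools import chain, product

array = [["a", "b"], ["c"]]
array2 = [["1"], ["2", "3"]]

# chain(*lst) flattens one level of nesting; product pairs every
# element of the first flattened list with every element of the second.
pairs = ["{}{}".format(first, second)
         for first, second in product(chain(*array), chain(*array2))]
print(pairs)  # ['a1', 'a2', 'a3', 'b1', 'b2', 'b3', 'c1', 'c2', 'c3']
```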
|
If I understand you correctly, you'll need four levels of nesting:
```
array = [["a","b","c","d","e","f"],["7","8","9","0","11","12"]]
array2 = [["1","2","3","4","5","6"],["g","h","i","j","k","l"]]
for row in array: # iterate "rows"
for cell in row: # iterate "cells" in a specific "row"
for row_2 in array2:
for cell_2 in row_2:
                print('{}{}'.format(cell, cell_2))
```
which will give you:
```
a1
a2
a3
a4
a5
a6
ag
ah
ai
aj
ak
al
b1
b2
b3
b4
b5
...
```
|
40,032,276
|
I have a dataframe similar to:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-07 | 13.90
1 | 2016-01-16 | 14.50
2 | 2016-01-09 | 10.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I am trying to keep the most recent values for each id, like this:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-16 | 14.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I have tried sort by date desc and after drop duplicates:
```
new_df = df.orderBy(df.date.desc()).dropDuplicates(['id'])
```
My questions are, `dropDuplicates()` will keep the first duplicate value that it finds? and is there a better way to accomplish what I want to do? By the way, I'm using python.
Thank you.
|
2016/10/13
|
[
"https://Stackoverflow.com/questions/40032276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4641956/"
] |
The window operator as suggested works very well to solve this problem:
```
from datetime import date
rdd = sc.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]
])
df = rdd.toDF(['id','date','price'])
df.show()
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
df.registerTempTable("entries")  # replaced by createOrReplaceTempView in Spark 2.0
output = sqlContext.sql('''
SELECT
*
FROM (
SELECT
*,
dense_rank() OVER (PARTITION BY id ORDER BY date DESC) AS rank
FROM entries
) vo WHERE rank = 1
''');
output.show();
+---+----------+-----+----+
| id| date|price|rank|
+---+----------+-----+----+
| 1|2016-01-16| 14.5| 1|
| 2|2016-01-28| 5.5| 1|
| 3|2016-01-05| 1.5| 1|
+---+----------+-----+----+
```
|
If you have items with the same date then you will get duplicates with the dense\_rank. You should use row\_number:
```py
from pyspark.sql.window import Window
from datetime import date
import pyspark.sql.functions as F
rdd = spark.sparkContext.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 7), 10.0 ], # I added this row to show the effect of duplicate
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]]
)
df = rdd.toDF(['id','date','price'])
df.show(10)
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-07| 10.0|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
# row_number
df.withColumn("row_number",F.row_number().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("row_number")==1).show()
+---+----------+-----+----------+
| id| date|price|row_number|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
# dense_rank
df.withColumn("dense_rank",F.dense_rank().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("dense_rank")==1).show()
+---+----------+-----+----------+
| id| date|price|dense_rank|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 1|2016-01-07| 10.0| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
```
|
40,032,276
|
I have a dataframe similar to:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-07 | 13.90
1 | 2016-01-16 | 14.50
2 | 2016-01-09 | 10.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I am trying to keep the most recent values for each id, like this:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-16 | 14.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I have tried sort by date desc and after drop duplicates:
```
new_df = df.orderBy(df.date.desc()).dropDuplicates(['id'])
```
My questions are, `dropDuplicates()` will keep the first duplicate value that it finds? and is there a better way to accomplish what I want to do? By the way, I'm using python.
Thank you.
|
2016/10/13
|
[
"https://Stackoverflow.com/questions/40032276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4641956/"
] |
The window operator as suggested works very well to solve this problem:
```
from datetime import date
rdd = sc.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]
])
df = rdd.toDF(['id','date','price'])
df.show()
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
df.registerTempTable("entries")  # replaced by createOrReplaceTempView in Spark 2.0
output = sqlContext.sql('''
SELECT
*
FROM (
SELECT
*,
dense_rank() OVER (PARTITION BY id ORDER BY date DESC) AS rank
FROM entries
) vo WHERE rank = 1
''');
output.show();
+---+----------+-----+----+
| id| date|price|rank|
+---+----------+-----+----+
| 1|2016-01-16| 14.5| 1|
| 2|2016-01-28| 5.5| 1|
| 3|2016-01-05| 1.5| 1|
+---+----------+-----+----+
```
|
You can use row\_number to get the record with latest date:
```
import pyspark.sql.functions as F
from pyspark.sql.window import Window
new_df = df.withColumn("row_number",F.row_number().over(Window.partitionBy(df.id).orderBy(df.date.desc()))).filter(F.col("row_number")==1).drop("row_number")
```
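Spark aside, the keep-latest-per-id logic itself can be sanity-checked in plain Python (a sketch using the question's data; ISO date strings compare chronologically):

```python
rows = [
    (1, "2016-01-07", 13.90),
    (1, "2016-01-16", 14.50),
    (2, "2016-01-09", 10.50),
    (2, "2016-01-28", 5.50),
    (3, "2016-01-05", 1.50),
]

# Keep the row with the greatest date for each id.
latest = {}
for rid, date, value in rows:
    if rid not in latest or date > latest[rid][0]:
        latest[rid] = (date, value)

print(sorted((rid, d, v) for rid, (d, v) in latest.items()))
# [(1, '2016-01-16', 14.5), (2, '2016-01-28', 5.5), (3, '2016-01-05', 1.5)]
```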
|
40,032,276
|
I have a dataframe similar to:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-07 | 13.90
1 | 2016-01-16 | 14.50
2 | 2016-01-09 | 10.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I am trying to keep the most recent values for each id, like this:
```
id | date | value
--- | ---------- | ------
1 | 2016-01-16 | 14.50
2 | 2016-01-28 | 5.50
3 | 2016-01-05 | 1.50
```
I have tried sort by date desc and after drop duplicates:
```
new_df = df.orderBy(df.date.desc()).dropDuplicates(['id'])
```
My questions are, `dropDuplicates()` will keep the first duplicate value that it finds? and is there a better way to accomplish what I want to do? By the way, I'm using python.
Thank you.
|
2016/10/13
|
[
"https://Stackoverflow.com/questions/40032276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4641956/"
] |
If you have items with the same date then you will get duplicates with the dense\_rank. You should use row\_number:
```py
from pyspark.sql.window import Window
from datetime import date
import pyspark.sql.functions as F
rdd = spark.sparkContext.parallelize([
[1, date(2016, 1, 7), 13.90],
[1, date(2016, 1, 7), 10.0 ], # I added this row to show the effect of duplicate
[1, date(2016, 1, 16), 14.50],
[2, date(2016, 1, 9), 10.50],
[2, date(2016, 1, 28), 5.50],
[3, date(2016, 1, 5), 1.50]]
)
df = rdd.toDF(['id','date','price'])
df.show(10)
+---+----------+-----+
| id| date|price|
+---+----------+-----+
| 1|2016-01-07| 13.9|
| 1|2016-01-07| 10.0|
| 1|2016-01-16| 14.5|
| 2|2016-01-09| 10.5|
| 2|2016-01-28| 5.5|
| 3|2016-01-05| 1.5|
+---+----------+-----+
# row_number
df.withColumn("row_number",F.row_number().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("row_number")==1).show()
+---+----------+-----+----------+
| id| date|price|row_number|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
# dense_rank
df.withColumn("dense_rank",F.dense_rank().over(Window.partitionBy(df.id).orderBy(df.date))).filter(F.col("dense_rank")==1).show()
+---+----------+-----+----------+
| id| date|price|dense_rank|
+---+----------+-----+----------+
| 3|2016-01-05| 1.5| 1|
| 1|2016-01-07| 13.9| 1|
| 1|2016-01-07| 10.0| 1|
| 2|2016-01-09| 10.5| 1|
+---+----------+-----+----------+
```
|
You can use `row_number` to get the record with the latest date:
```
import pyspark.sql.functions as F
from pyspark.sql.window import Window
new_df = df.withColumn("row_number",F.row_number().over(Window.partitionBy(df.id).orderBy(df.date.desc()))).filter(F.col("row_number")==1).drop("row_number")
```
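For intuition, the same keep-the-latest-row-per-id logic can be sketched in plain Python, no Spark required (a sketch for illustration, not the Spark execution model):

```python
from datetime import date

rows = [
    {"id": 1, "date": date(2016, 1, 7), "value": 13.90},
    {"id": 1, "date": date(2016, 1, 16), "value": 14.50},
    {"id": 2, "date": date(2016, 1, 9), "value": 10.50},
    {"id": 2, "date": date(2016, 1, 28), "value": 5.50},
    {"id": 3, "date": date(2016, 1, 5), "value": 1.50},
]

latest = {}
for row in rows:
    # keep only the row with the most recent date for each id
    if row["id"] not in latest or row["date"] > latest[row["id"]]["date"]:
        latest[row["id"]] = row

result = sorted(latest.values(), key=lambda r: r["id"])
```

This mirrors what `row_number() ... ORDER BY date DESC` followed by `filter(row_number == 1)` does per partition.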
|
36,341,553
|
Lets say I want to use `gcc` from the command line in order to compile a C extension of Python. I'd structure the call something like this:
```
gcc -o applesauce.pyd -I C:/Python35/include -L C:/Python35/libs -l python35 applesauce.c
```
I noticed that the `-I`, `-L`, and `-l` options are absolutely necessary, or else you will get an error that looks something like [this](https://stackoverflow.com/questions/6985109/how-to-compile-c-code-from-cython-with-gcc). These commands tell gcc where to look for the headers (`-I`), where to look for the static libraries (`-L`), and which static library to actually use (`python35`, which actually translates to `libpython35.a`).
Now, this is obviously really easy to get the `libs` and `include` directories if its your machine, as they never change if you don't want them to. However, I was writing a program that calls `gcc` from the command line, that *other people will be using*. The line where this call occurs looks something like this:
```
from subprocess import call
import os
import sys

filename = 'applesauce.c'
include_directory = os.path.join(sys.exec_prefix, 'include')
libs_directory = os.path.join(sys.exec_prefix, 'libs')

call(['gcc', ..., '-I', include_directory, '-L', libs_directory, ...])
```
### However, others will have different platforms and different Python installation structures, so just joining the paths won't always work.
Instead, I need a solution *from within Python* that will reliably return the `include` and `libs` directories.
Edit:
-----
I looked at the module `distutils.ccompiler`, and found many useful functions that would in part use distutils, but make it customizable for me to make my compiler entirely cross platform. The only thing is, I need to pass it the include and runtime libraries...
Edit 2:
-------
I looked at `distutils.sysconfig` an I am able to reliably return the 'include' directory including all the header files. I still have no idea how to get the runtime library.
The `distutils.ccompiler` docs are [here](https://docs.python.org/3.5/distutils/apiref.html#distutils.ccompiler.CCompiler)
The program that needs this functionality is named [Cyther](https://pypi.python.org/pypi/Cyther)
|
2016/03/31
|
[
"https://Stackoverflow.com/questions/36341553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3689198/"
] |
The easiest way to compile extensions is to use `distutils`, [like this](https://docs.python.org/2/distutils/examples.html);
```
from distutils.core import setup
from distutils.extension import Extension
setup(name='foobar',
version='1.0',
ext_modules=[Extension('foo', ['foo.c'])],
)
```
Keep in mind that compared to unix/linux, [compiling extensions on ms-windows is not straightforward](https://blog.ionelmc.ro/2014/12/21/compiling-python-extensions-on-windows/) because different versions of the ms compiler are tightly coupled with the corresponding C library. So on ms-windows you have to compile extensions for Python x.y with the same compiler that Python x.y was compiled with. This means using old and unsupported tools for e.g. Python 2.7, 3.3 and 3.4. The situation is changing ([part 1](http://stevedower.id.au/blog/building-for-python-3-5/), [part 2](http://stevedower.id.au/blog/building-for-python-3-5-part-two/)) with Python 3.5.
**Edit:**
Having looked in the problem more, I doubt that what you want is 100% achievable. Given that the tools needed for compiling and linking are by definition outside of Python and that they are not even necessarily the tools that Python was compiled with, the information in `sys` and `sysconfig` is not guaranteed to an accurate representation of the system that Python is actually installed on. E.g. most ms-windows machines will not have developer tools installed. And even on POSIX platforms there can be a difference between the installed C compiler and the compiler that built Python especially when it is installed as a binary package.
|
If you look at the source of [`build_ext.py`](http://svn.python.org/projects/python/trunk/Lib/distutils/command/build_ext.py) from `distutils` in the method `finalize_options` you will find code for different platforms used to locate libs.
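On later Pythons the standard `sysconfig` module exposes these paths directly; whether `LIBDIR` is populated depends on the platform (on Windows the import libraries usually live under `<exec_prefix>/libs` instead), so treat this as a sketch:

```python
import sysconfig

# directory containing Python.h (works on all platforms)
include_dir = sysconfig.get_paths()["include"]

# On POSIX, LIBDIR is the directory holding libpythonX.Y; it may be
# None or irrelevant on Windows builds.
lib_dir = sysconfig.get_config_var("LIBDIR")

print(include_dir)
print(lib_dir)
```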
|
65,365,486
|
I'm trying to locate an element using python selenium, and have the html below:
```
<input class="form-control" type="text" placeholder="University Search">
```
I couldn't locate where to type what I want to type.
```
from selenium import webdriver
import time
driver = webdriver.Chrome(executable_path=r"D:\Python\Lib\site-packages\selenium\chromedriver.exe")
driver.get('https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology')
#<input class="form-control" type="text" placeholder="University Search">
text_area = driver.find_element_by_name('University Search')
text_area.send_keys("oxford university")
```
|
2020/12/19
|
[
"https://Stackoverflow.com/questions/65365486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11405347/"
] |
You are attempting to use [find\_element\_by\_name](https://selenium-python.readthedocs.io/locating-elements.html#locating-by-name) and yet this element has no `name` attribute defined. You need to look for the element with the specific `placeholder` attribute you are interested in - you can use [find\_element\_by\_xpath](https://selenium-python.readthedocs.io/locating-elements.html#locating-by-xpath) for this:
```
text_area = driver.find_element_by_xpath("//input[@placeholder='University Search']")
```
As an aside: when I open the page in my browser, I don't see an element with "University Search" as the placeholder, only a search bar with "Site Search" -- but this might be a regional and/or browser difference.
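If you find yourself writing many attribute-based XPaths like this, a tiny helper keeps them consistent (this helper is hypothetical, not part of Selenium, and assumes attribute values contain no single quotes):

```python
def xpath_for(tag, **attrs):
    """Build a simple XPath such as //input[@placeholder='University Search']."""
    conditions = " and ".join(f"@{name}='{value}'" for name, value in attrs.items())
    return f"//{tag}[{conditions}]" if conditions else f"//{tag}"

selector = xpath_for("input", placeholder="University Search")
# then: driver.find_element_by_xpath(selector)
```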
|
Very close, try the XPath when all else fails:
```
text_area = driver.find_element_by_xpath("//*[@id='qs-rankings']/thead/tr[3]/td[2]/div/input")
```
You can copy the full/relative XPath to clipboard if you're inspecting the webpage's html.
|
65,365,486
|
I'm trying to locate an element using python selenium, and have the html below:
```
<input class="form-control" type="text" placeholder="University Search">
```
I couldn't locate where to type what I want to type.
```
from selenium import webdriver
import time
driver = webdriver.Chrome(executable_path=r"D:\Python\Lib\site-packages\selenium\chromedriver.exe")
driver.get('https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology')
#<input class="form-control" type="text" placeholder="University Search">
text_area = driver.find_element_by_name('University Search')
text_area.send_keys("oxford university")
```
|
2020/12/19
|
[
"https://Stackoverflow.com/questions/65365486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11405347/"
] |
Make sure you wait for the page to load using webdriver waits, click the consent popup, and then proceed to target the element to send keys to.
```
driver.get('https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology')
driver.maximize_window()
wait=WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='OK, I agree']"))).click()
text_area=wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR ,"td.uni-search.uni.sorting_disabled > div > input")))
text_area.send_keys("oxford university")
```
Imports
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
```
Element to target
```
<input class="form-control" type="text" placeholder="University Search">
```
|
Very close, try the XPath when all else fails:
```
text_area = driver.find_element_by_xpath("//*[@id='qs-rankings']/thead/tr[3]/td[2]/div/input")
```
You can copy the full/relative XPath to clipboard if you're inspecting the webpage's html.
|
65,365,486
|
I'm trying to locate an element using python selenium, and have the html below:
```
<input class="form-control" type="text" placeholder="University Search">
```
I couldn't locate where to type what I want to type.
```
from selenium import webdriver
import time
driver = webdriver.Chrome(executable_path=r"D:\Python\Lib\site-packages\selenium\chromedriver.exe")
driver.get('https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology')
#<input class="form-control" type="text" placeholder="University Search">
text_area = driver.find_element_by_name('University Search')
text_area.send_keys("oxford university")
```
|
2020/12/19
|
[
"https://Stackoverflow.com/questions/65365486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11405347/"
] |
To send a *character sequence* to the element you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
* Using `css_selector`:
```
driver.find_element_by_css_selector("input.form-control[placeholder='University search']").send_keys("oxford university")
```
* Using `xpath`:
```
driver.find_element_by_xpath("//input[@class='form-control' and @placeholder='University search']").send_keys("oxford university")
```
---
Ideally, to send a *character sequence* to the element you need to induce [WebDriverWait](https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336) for the `element_to_be_clickable()` and you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
* Using `CSS_SELECTOR`:
```
driver.get("https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input.form-control[placeholder='University search']"))).send_keys("oxford university")
```
* Using `XPATH`:
```
driver.get("https://www.topuniversities.com/university-rankings/university-subject-rankings/2020/engineering-technology")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='form-control' and @placeholder='University search']"))).send_keys("oxford university")
```
* **Note**: You have to add the following imports :
```
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
```
* Browser Snapshot:
[](https://i.stack.imgur.com/wYBRf.png)
---
References
----------
You can find a couple of relevant discussions on [NoSuchElementException](https://stackoverflow.com/questions/47993443/selenium-selenium-common-exceptions-nosuchelementexception-when-using-chrome/47995294#47995294) in:
* [selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element while trying to click Next button with selenium](https://stackoverflow.com/questions/50315587/selenium-common-exceptions-nosuchelementexception-message-no-such-element-una/50315715#50315715)
* [selenium in python : NoSuchElementException: Message: no such element: Unable to locate element](https://stackoverflow.com/questions/53441658/selenium-in-python-nosuchelementexception-message-no-such-element-unable-to/53442511#53442511)
|
Very close, try the XPath when all else fails:
```
text_area = driver.find_element_by_xpath("//*[@id='qs-rankings']/thead/tr[3]/td[2]/div/input")
```
You can copy the full/relative XPath to clipboard if you're inspecting the webpage's html.
|
10,264,460
|
I am looking for some words in a file in Python. After I find each word I need to read the next two words from the file. I've looked for a solution but could not find how to read just the next words.
```
# offsetFile - file pointer
# searchTerms - list of words
for line in offsetFile:
for word in searchTerms:
if word in line:
# here get the next two terms after the word
```
Thank you for your time.
Update: Only the first appearance is necessary. Actually only one appearance of the word is possible in this case.
file:
```
accept 42 2820 access 183 3145 accid 1 4589 algebra 153 16272 algem 4 17439 algol 202 6530
```
word: ['access', 'algebra']
Searching the file when I encounter 'access' and 'algebra', I need the values of 183 3145 and 153 16272 respectively.
|
2012/04/22
|
[
"https://Stackoverflow.com/questions/10264460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776614/"
] |
An easy way to deal with this is to read the file using a generator that yields one word at a time from the file.
```
def words(fileobj):
for line in fileobj:
for word in line.split():
yield word
```
Then to find the word you're interested in and read the next two words:
```
with open("offsetfile.txt") as wordfile:
wordgen = words(wordfile)
for word in wordgen:
if word in searchterms: # searchterms should be a set() to make this fast
break
else:
word = None # makes sure word is None if the word wasn't found
foundwords = [word, next(wordgen, None), next(wordgen, None)]
```
Now `foundwords[0]` is the word you found, `foundwords[1]` is the word after that, and `foundwords[2]` is the second word after it. If there aren't enough words, then one or more elements of the list will be `None`.
It is a little more complex if you want to *force* this to match only within one line, but usually you can get away with considering the file as just a sequence of words.
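Applied to the sample data from the question, the generator approach can be exercised end-to-end with `io.StringIO` standing in for the file (a sketch collecting the two values after every search term, not just the first match):

```python
import io

def words(fileobj):
    # yield one whitespace-separated token at a time
    for line in fileobj:
        for word in line.split():
            yield word

data = io.StringIO(
    "accept 42 2820 access 183 3145 accid 1 4589 "
    "algebra 153 16272 algem 4 17439 algol 202 6530"
)
searchterms = {"access", "algebra"}

found = {}
wordgen = words(data)
for word in wordgen:
    if word in searchterms:
        # grab the next two tokens following the matched word
        found[word] = (next(wordgen, None), next(wordgen, None))

print(found)  # {'access': ('183', '3145'), 'algebra': ('153', '16272')}
```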
|
If you only need to retrieve the first two words of a line, just do:
```
offsetFile.readline().split()[:2]
```
|
10,264,460
|
I am looking for some words in a file in Python. After I find each word I need to read the next two words from the file. I've looked for a solution but could not find how to read just the next words.
```
# offsetFile - file pointer
# searchTerms - list of words
for line in offsetFile:
for word in searchTerms:
if word in line:
# here get the next two terms after the word
```
Thank you for your time.
Update: Only the first appearance is necessary. Actually only one appearance of the word is possible in this case.
file:
```
accept 42 2820 access 183 3145 accid 1 4589 algebra 153 16272 algem 4 17439 algol 202 6530
```
word: ['access', 'algebra']
Searching the file when I encounter 'access' and 'algebra', I need the values of 183 3145 and 153 16272 respectively.
|
2012/04/22
|
[
"https://Stackoverflow.com/questions/10264460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776614/"
] |
An easy way to deal with this is to read the file using a generator that yields one word at a time from the file.
```
def words(fileobj):
for line in fileobj:
for word in line.split():
yield word
```
Then to find the word you're interested in and read the next two words:
```
with open("offsetfile.txt") as wordfile:
wordgen = words(wordfile)
for word in wordgen:
if word in searchterms: # searchterms should be a set() to make this fast
break
else:
word = None # makes sure word is None if the word wasn't found
foundwords = [word, next(wordgen, None), next(wordgen, None)]
```
Now `foundwords[0]` is the word you found, `foundwords[1]` is the word after that, and `foundwords[2]` is the second word after it. If there aren't enough words, then one or more elements of the list will be `None`.
It is a little more complex if you want to *force* this to match only within one line, but usually you can get away with considering the file as just a sequence of words.
|
```
word = '3' #Your word
delim = ',' #Your delim
with open('test_file.txt') as f:
for line in f:
if word in line:
s_line = line.strip().split(delim)
two_words = (s_line[s_line.index(word) + 1],\
s_line[s_line.index(word) + 2])
break
```
|
10,264,460
|
I am looking for some words in a file in Python. After I find each word I need to read the next two words from the file. I've looked for a solution but could not find how to read just the next words.
```
# offsetFile - file pointer
# searchTerms - list of words
for line in offsetFile:
for word in searchTerms:
if word in line:
# here get the next two terms after the word
```
Thank you for your time.
Update: Only the first appearance is necessary. Actually only one appearance of the word is possible in this case.
file:
```
accept 42 2820 access 183 3145 accid 1 4589 algebra 153 16272 algem 4 17439 algol 202 6530
```
word: ['access', 'algebra']
Searching the file when I encounter 'access' and 'algebra', I need the values of 183 3145 and 153 16272 respectively.
|
2012/04/22
|
[
"https://Stackoverflow.com/questions/10264460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/776614/"
] |
An easy way to deal with this is to read the file using a generator that yields one word at a time from the file.
```
def words(fileobj):
for line in fileobj:
for word in line.split():
yield word
```
Then to find the word you're interested in and read the next two words:
```
with open("offsetfile.txt") as wordfile:
wordgen = words(wordfile)
for word in wordgen:
if word in searchterms: # searchterms should be a set() to make this fast
break
else:
word = None # makes sure word is None if the word wasn't found
foundwords = [word, next(wordgen, None), next(wordgen, None)]
```
Now `foundwords[0]` is the word you found, `foundwords[1]` is the word after that, and `foundwords[2]` is the second word after it. If there aren't enough words, then one or more elements of the list will be `None`.
It is a little more complex if you want to *force* this to match only within one line, but usually you can get away with considering the file as just a sequence of words.
|
```
def searchTerm(offsetFile, searchTerms):
# remove any found words from this list; if empty we can exit
searchThese = searchTerms[:]
for line in offsetFile:
words_in_line = line.split()
# Use this list comprehension if every word is always followed by two numbers.
# Otherwise iterate over words_in_line directly.
for word in [w for i, w in enumerate(words_in_line) if i % 3 == 0]:
# No more words to search.
if not searchThese:
return
# Search remaining words.
if word in searchThese:
searchThese.remove(word)
i = words_in_line.index(word)
print words_in_line[i:i+3]
```
For 'access', 'algebra' I get this result:
```
['access', '183', '3145']
['algebra', '153', '16272']
```
|
67,167,886
|
I have installed `TensorFlow` on an M1 (**ARM**) Mac according to [these instructions](https://github.com/apple/tensorflow_macos/issues/153). Everything works fine.
However, model training is happening on the `CPU`. How do I switch training to the `GPU`?
```
In: tensorflow.config.list_physical_devices()
Out: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
```
In the documentation of [Apple's TensorFlow distribution](https://github.com/apple/tensorflow_macos) I found the following slightly confusing [paragraph](https://github.com/apple/tensorflow_macos#additional-information):
>
> It is not necessary to make any changes to your existing TensorFlow scripts to use ML Compute as a backend for TensorFlow and TensorFlow Addons. There is an optional `mlcompute.set_mlc_device(device_name='any')` API for ML Compute device selection. The default value for device\_name is 'any', which means ML Compute will select the best available device on your system, including multiple GPUs on multi-GPU configurations. Other available options are `CPU` and `GPU`. Please note that in eager mode, ML Compute will use the CPU. For example, to choose the CPU device, you may do the following:
>
>
>
```
# Import mlcompute module to use the optional set_mlc_device API for device selection with ML Compute.
from tensorflow.python.compiler.mlcompute import mlcompute
# Select CPU device.
mlcompute.set_mlc_device(device_name='cpu') # Available options are 'cpu', 'gpu', and 'any'.
```
So I try to run:
```
from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='gpu')
```
and get:
```
WARNING:tensorflow: Eager mode uses the CPU. Switching to the CPU.
```
At this point I am stuck. How can I train `keras` models on the GPU to my MacBook Air?
TensorFlow version: `2.4.0-rc0`
|
2021/04/19
|
[
"https://Stackoverflow.com/questions/67167886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/626537/"
] |
Update
------
The [tensorflow\_macos tf 2.4](https://github.com/apple/tensorflow_macos) repository has been archived by the owner. For `tf 2.5`, refer to [here](https://developer.apple.com/metal/tensorflow-plugin/).
---
It's probably better not to disable eager execution entirely, but only for `tf.function`-compiled code. Try this and check your GPU usage; the warning message can be [misleading](https://github.com/apple/tensorflow_macos/issues/71#issuecomment-748444580).
```
import tensorflow as tf
tf.config.run_functions_eagerly(False)
```
---
The current release of [Mac-optimized TensorFlow](https://github.com/apple/tensorflow_macos/releases/tag/v0.1alpha3) has several issues that are not yet fixed (`TensorFlow 2.4rc0`). Eager mode is the default behavior in `TensorFlow 2.x`, and that is also unchanged in [TensorFlow-MacOS](https://github.com/apple/tensorflow_macos/releases/tag/v0.1alpha3). But unlike the official build, this optimized version forcibly uses the **CPU** for eager mode, as stated [here](https://github.com/apple/tensorflow_macos#device-selection-optional).
>
> ... in eager mode, **ML Compute** will use the **CPU**.
>
>
>
That's why, even if we explicitly set `device_name='gpu'`, it switches back to the CPU while eager mode is still on.
```
from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='gpu')
WARNING:tensorflow: Eager mode uses the CPU. Switching to the CPU.
```
[Disabling eager mode](https://stackoverflow.com/a/66768341/9215780) may let the program utilize the GPU, but it's not general behavior and can lead to [puzzling performance on both CPU/GPU](https://github.com/apple/tensorflow_macos/issues/88). For now, the most appropriate approach may be to choose `device_name='any'`; with that, **ML Compute** will query the available devices on the system and select the best device(s) for training the network.
|
Try turning off eager execution via the following:
```
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
```
Let me know if it works.
|
59,520,376
|
I came across a scenario where I need to run a function in parallel for a list of values in Python. I learned that `executor.map` from `concurrent.futures` will do the job, and I was able to parallelize the function using the syntax `executor.map(func, [values])`.
But now I've come across the same scenario (i.e. the function has to run in parallel), except that the function signature is different from the previous one; it's given below.
```
def func(search_id,**kwargs):
# somecode
return list
container = []
with concurrent.futures.ProcessPoolExecutor() as executor:
container.extend(executor.map(func, (searchid,sitesearch=site),[list of sites]))
```
I don't know how to achieve the above. Can someone guide me please?
|
2019/12/29
|
[
"https://Stackoverflow.com/questions/59520376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9452253/"
] |
If you have an iterable of `sites` that you want to `map` over, and you want to pass the same `search_term` and `pages` arguments to each call: note that `executor.map` accepts one iterable per positional parameter of the function. So you can pass the varying iterable first and repeat the fixed arguments with `itertools.repeat`:
```
from itertools import repeat

def func(site, search_term, pages):
    ...

executor.map(func, sites, repeat(search_term), repeat(pages))
```
|
Here is a useful way of passing kwargs through `executor.map`: wrap the call in a small function that unpacks each dict with the `**kwargs` notation. Note that a plain `lambda` cannot be pickled, so with `ProcessPoolExecutor` the wrapper must be a module-level function (a `lambda` would only work with `ThreadPoolExecutor`):
```py
from concurrent.futures import ProcessPoolExecutor

def func(arg1, arg2, ...):
    ....

def call_with_kwargs(kwargs):
    # module-level wrapper so ProcessPoolExecutor can pickle it
    return func(**kwargs)

items = [
    { 'arg1': 0, 'arg2': 3 },
    { 'arg1': 1, 'arg2': 4 },
    { 'arg1': 2, 'arg2': 5 }
]

with ProcessPoolExecutor() as executor:
    result = executor.map(call_with_kwargs, items)
```
I found this also useful when using Pandas DataFrames by creating the items with `to_dict` by typing `items = df.to_dict(orient='records')` or loading data from JSON files.
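The `repeat` idea can be put together in a runnable form; a `ThreadPoolExecutor` is used in this sketch so it also works with locally defined functions, and `search` is a hypothetical stand-in for the real work:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import repeat

def search(site, search_term, pages):
    # stand-in for real work; just records the arguments it received
    return f"{site}|{search_term}|{pages}"

sites = ["a.example", "b.example", "c.example"]

# executor.map takes one iterable per positional parameter of the function
with ThreadPoolExecutor() as executor:
    results = list(executor.map(search, sites, repeat("python"), repeat(3)))

print(results)  # ['a.example|python|3', 'b.example|python|3', 'c.example|python|3']
```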
|
1,713,015
|
I installed Yahoo BOSS (it's a Python installation that allows you to use their search features). I followed everything perfectly. However, when I run the example to confirm that it works, I get this:
```
$ python ex3.py
Traceback (most recent call last):
File "ex3.py", line 16, in ?
from yos.yql import db
File "/usr/lib/python2.4/site-packages/yos/yql/db.py", line 44, in ?
from yos.crawl import rest
File "/usr/lib/python2.4/site-packages/yos/crawl/rest.py", line 13, in ?
import xml2dict
File "/usr/lib/python2.4/site-packages/yos/crawl/xml2dict.py", line 6, in ?
import xml.etree.ElementTree as ET
ImportError: No module named etree.ElementTree
```
Is there any way to fix this? I did exactly as stated in the documentation and it was installed on a fresh box.
People have suggested that Python 2.5 should be used, but everything currently uses Python 2.4. What should I do to get this Yahoo BOSS to work?
```
Python 2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
```
|
2009/11/11
|
[
"https://Stackoverflow.com/questions/1713015",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179736/"
] |
Use Python 2.5 or above: xml.etree.ElementTree was added in 2.5.
<http://docs.python.org/library/xml.etree.elementtree.html>
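Once on 2.5+, a quick sanity check confirms the module is importable and working (the XML content here is made up for illustration):

```python
import xml.etree.ElementTree as ET  # added to the stdlib in Python 2.5

# parse a tiny in-memory document and read back an element and attribute
root = ET.fromstring("<results><result rank='1'>Yahoo BOSS</result></results>")
print(root.find("result").text)  # Yahoo BOSS
```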
|
A Google search reveals that you need to install the [effbot elementtree](http://effbot.org/downloads/#elementtree) Python module.
|
34,804,612
|
So I'm writing a login system with Python and I would like to know if I can search a text document for the username you put in, then have it output the line it was found on and search a password document. If the password you put in matches the string on that line, then it prints that you logged in. Any and all help is appreciated. In my previous code I have it search line one, and if it doesn't find the string it adds one to the line number and repeats until it finds it; then it checks the password file at the same line.
```
def checkuser(user,line): # scan the username file for the username
ulines = u.readlines(line)
if user != ulines:
line = line + 1
checkuser(user)
elif ulines == user:
password(user)
```
|
2016/01/15
|
[
"https://Stackoverflow.com/questions/34804612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5382035/"
] |
A function to get the line number. You can use this however you want:
```
def getLineNumber(fileName, searchString):
with open(fileName) as f:
for i,line in enumerate(f, start=1):
if searchString in line:
return i
raise Exception('string not found')
```
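A usage sketch of the same idea against a throwaway file (the file contents and names here are hypothetical, purely for demonstration):

```python
import os
import tempfile

def get_line_number(file_name, search_string):
    # return the 1-based line number of the first line containing search_string
    with open(file_name) as f:
        for i, line in enumerate(f, start=1):
            if search_string in line:
                return i
    raise ValueError(f"{search_string!r} not found")

# hypothetical username file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("alice\nbob\ncarol\n")

try:
    line_no = get_line_number(tmp.name, "bob")
    print(line_no)  # 2
finally:
    os.remove(tmp.name)
```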
|
A Pythonic way to get the answer:
```
f = open(filename)
# raises IndexError if the search string is not found
line_no = [num for num, line in enumerate(f, start=1) if 'searchstring' in line][0]
print(line_no)
```
|
45,049,312
|
How do I CONSOLIDATE the following using python COMPREHENSION
**FROM (list of dicts)**
```
[
{'server':'serv1','os':'Linux','archive':'/my/folder1'}
,{'server':'serv2','os':'Linux','archive':'/my/folder1'}
,{'server':'serv3','os':'Linux','archive':'/my/folder2'}
,{'server':'serv4','os':'AIX','archive':'/my/folder1'}
,{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
```
**TO (list of dicts with a tuple as key and a list of server names as value)**
```
[
{('Linux','/my/folder1'):['serv1','serv2']}
,{('Linux','/my/folder2'):['serv3']}
,{('AIX','/my/folder1'):['serv4','serv5']}
]
```
|
2017/07/12
|
[
"https://Stackoverflow.com/questions/45049312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2288573/"
] |
The need to set default values in your dictionary, and to append under the same key several times, can make a dict comprehension a bit clumsy. I'd prefer something like this:
a [`defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict) may help:
```
from collections import defaultdict
lst = [
{'server':'serv1','os':'Linux','archive':'/my/folder1'},
{'server':'serv2','os':'Linux','archive':'/my/folder1'},
{'server':'serv3','os':'Linux','archive':'/my/folder2'},
{'server':'serv4','os':'AIX','archive':'/my/folder1'},
{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
dct = defaultdict(list)
for d in lst:
key = d['os'], d['archive']
dct[key].append(d['server'])
```
If you prefer to have a standard dictionary in the end (actually I do not really see a good reason for that), you could use [`dict.setdefault`](https://docs.python.org/3/library/stdtypes.html?highlight=setdefault#dict.setdefault) in order to create an empty list where the key does not yet exist:
```
dct = {}
for d in lst:
key = d['os'], d['archive']
dct.setdefault(key, []).append(d['server'])
```
The [documentation on `defaultdict` (vs. `setdefault`)](https://docs.python.org/3/library/collections.html#defaultdict-examples) notes:
>
> This technique is simpler and faster than an equivalent technique
> using dict.setdefault()
>
>
>
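Running the `defaultdict` version end-to-end on the sample data gives:

```python
from collections import defaultdict

lst = [
    {'server': 'serv1', 'os': 'Linux', 'archive': '/my/folder1'},
    {'server': 'serv2', 'os': 'Linux', 'archive': '/my/folder1'},
    {'server': 'serv3', 'os': 'Linux', 'archive': '/my/folder2'},
    {'server': 'serv4', 'os': 'AIX', 'archive': '/my/folder1'},
    {'server': 'serv5', 'os': 'AIX', 'archive': '/my/folder1'},
]

dct = defaultdict(list)
for d in lst:
    # group servers under a (os, archive) tuple key
    dct[(d['os'], d['archive'])].append(d['server'])

print(dict(dct))
# {('Linux', '/my/folder1'): ['serv1', 'serv2'],
#  ('Linux', '/my/folder2'): ['serv3'],
#  ('AIX', '/my/folder1'): ['serv4', 'serv5']}
```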
|
It's difficult to achieve with list comprehension because of the accumulation effect. However, it's possible using `itertools.groupby` on the list sorted by your keys (use the same `key` function for both sorting and grouping).
Then extract the server info in a list comprehension and prefix by the group key. Pass the resulting (group key, server list) to dictionary comprehension and here you go.
```
import itertools
lst = [
{'server':'serv1','os':'Linux','archive':'/my/folder1'}
,{'server':'serv2','os':'Linux','archive':'/my/folder1'}
,{'server':'serv3','os':'Linux','archive':'/my/folder2'}
,{'server':'serv4','os':'AIX','archive':'/my/folder1'}
,{'server':'serv5','os':'AIX','archive':'/my/folder1'}
]
sortfunc = lambda x : (x['os'],x['archive'])
result = {k:[x['server'] for x in v] for k,v in itertools.groupby(sorted(lst,key=sortfunc),key = sortfunc)}
print(result)
```
I get:
```
{('Linux', '/my/folder1'): ['serv1', 'serv2'], ('AIX', '/my/folder1'): ['serv4', 'serv5'], ('Linux', '/my/folder2'): ['serv3']}
```
Keep in mind that it's not because it can be written in one line that it's more efficient. The `defaultdict` approach doesn't require sorting for instance.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
The book [Expert Python Programming](http://www.packtpub.com/expert-python-programming/book) has a related discussion in
Chapter 4, "Choosing Good Names": "Building the Namespace Tree" and "Splitting the Code".
My crude one-line summary: collecting related classes into one module (source file), and related modules into one package, helps with code maintenance.
There is no specific convention for this - do whatever makes your code the most readable and maintainable.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
There's a mantra, "flat is better than nested," that generally discourages an overuse of hierarchy. I'm not sure there's any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain).
Good [thread from the Python mailing list](http://mail.python.org/pipermail/python-list/2006-December/469439.html), and a quote by Fredrik Lundh:
>
> even more important is that in Python,
> you don't use classes for every-
> thing; if you need factories,
> singletons, multiple ways to create
> objects, polymorphic helpers, etc, you
> use plain functions, not classes or
> static methods.
>
>
> once you've gotten over the "it's all
> classes", use modules to organize
> things in a way that makes sense to
> the code that uses your components.
>
> make the import statements look good.
>
>
>
|
In Python, a class can also be used for small tasks (just for grouping, etc.). Maintaining a 1:1 relation would result in having too many files with small or little functionality.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
Here are some possible reasons:
1. Python is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file.
2. Python is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit.
An example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~ 2800 SLOC.
|
There's a mantra, "flat is better than nested," that generally discourages an overuse of hierarchy. I'm not sure there's any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain).
Good [thread from the Python mailing list](http://mail.python.org/pipermail/python-list/2006-December/469439.html), and a quote by Fredrik Lundh:
>
> even more important is that in Python,
> you don't use classes for every-
> thing; if you need factories,
> singletons, multiple ways to create
> objects, polymorphic helpers, etc, you
> use plain functions, not classes or
> static methods.
>
>
> once you've gotten over the "it's all
> classes", use modules to organize
> things in a way that makes sense to
> the code that uses your components.
>
> make the import statements look good.
>
>
>
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
Here are some possible reasons:
1. Python is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file.
2. Python is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit.
An example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~ 2800 SLOC.
|
the book [Expert Python Programming](http://www.packtpub.com/expert-python-programming/book) has a related discussion in
Chapter 4, "Choosing Good Names": see the sections "Building the Namespace Tree" and "Splitting the Code".
My one-line crude summary: collecting related classes into one module (source file), and
related modules into one package, helps with code maintenance.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
Here are some possible reasons:
1. Python is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file.
2. Python is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit.
An example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~ 2800 SLOC.
|
A good example of not having separate files for each class is the models.py file within a Django app. Each Django app may have a handful of classes that are related to that app, and putting them into individual files would just make more work.
Similarly, having each view in a different file is likely to be counterproductive.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
Here are some possible reasons:
1. Python is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file.
2. Python is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit.
An example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~ 2800 SLOC.
|
There is no specific convention for this - do whatever makes your code the most readable and maintainable.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
There's a mantra, "flat is better than nested," that generally discourages an overuse of hierarchy. I'm not sure there's any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain).
Good [thread from the Python mailing list](http://mail.python.org/pipermail/python-list/2006-December/469439.html), and a quote by Fredrik Lundh:
>
> even more important is that in Python,
> you don't use classes for every-
> thing; if you need factories,
> singletons, multiple ways to create
> objects, polymorphic helpers, etc, you
> use plain functions, not classes or
> static methods.
>
>
> once you've gotten over the "it's all
> classes", use modules to organize
> things in a way that makes sense to
> the code that uses your components.
>
> make the import statements look good.
>
>
>
|
There is no specific convention for this - do whatever makes your code the most readable and maintainable.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
There's a mantra, "flat is better than nested," that generally discourages an overuse of hierarchy. I'm not sure there's any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain).
Good [thread from the Python mailing list](http://mail.python.org/pipermail/python-list/2006-December/469439.html), and a quote by Fredrik Lundh:
>
> even more important is that in Python,
> you don't use classes for every-
> thing; if you need factories,
> singletons, multiple ways to create
> objects, polymorphic helpers, etc, you
> use plain functions, not classes or
> static methods.
>
>
> once you've gotten over the "it's all
> classes", use modules to organize
> things in a way that makes sense to
> the code that uses your components.
>
> make the import statements look good.
>
>
>
|
the book [Expert Python Programming](http://www.packtpub.com/expert-python-programming/book) has a related discussion in
Chapter 4, "Choosing Good Names": see the sections "Building the Namespace Tree" and "Splitting the Code".
My one-line crude summary: collecting related classes into one module (source file), and
related modules into one package, helps with code maintenance.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
In Python, a class can also be used for small tasks (just for grouping, etc.). Maintaining a 1:1 relation would result in having too many files with small or little functionality.
|
A good example of not having separate files for each class is the models.py file within a Django app. Each Django app may have a handful of classes that are related to that app, and putting them into individual files would just make more work.
Similarly, having each view in a different file is likely to be counterproductive.
|
1,091,756
|
>
> **Possible Duplicate:**
>
> [How many Python classes should I put in one file?](https://stackoverflow.com/questions/106896/how-many-python-classes-should-i-put-in-one-file)
>
>
>
Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why?
Am I missing this convention in the [PEP8](http://www.python.org/dev/peps/pep-0008/)?
|
2009/07/07
|
[
"https://Stackoverflow.com/questions/1091756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2494/"
] |
the book [Expert Python Programming](http://www.packtpub.com/expert-python-programming/book) has a related discussion in
Chapter 4, "Choosing Good Names": see the sections "Building the Namespace Tree" and "Splitting the Code".
My one-line crude summary: collecting related classes into one module (source file), and
related modules into one package, helps with code maintenance.
|
A good example of not having separate files for each class is the models.py file within a Django app. Each Django app may have a handful of classes that are related to that app, and putting them into individual files would just make more work.
Similarly, having each view in a different file is likely to be counterproductive.
|
45,043,961
|
I'm getting the following error when trying to run
`$ bazel build object_detection/...`
And I'm getting ~20 of the same error (1 for each time it attempts to build that). I think it's something with the way I need to configure bazel to recognize the py\_proto\_library, but I don't know where, or how I would do this.
`/src/github.com/tensorflow/tensorflow_models/object_detection/protos/BUILD:325:1: name 'py_proto_library' is not defined (did you mean 'cc_proto_library'?).`
I also think it could be an issue with the fact that initially I had installed the cpp version of tensorflow, and then I built it for python.
|
2017/07/11
|
[
"https://Stackoverflow.com/questions/45043961",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2544449/"
] |
The solution ended up being running this command, like the instructions say:
```
$ protoc object_detection/protos/*.proto --python_out=.
```
and then running this command, like the instructions say.:
```
$ export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
```
|
Are you using `load` in the `BUILD` file you're building?
`load("@protobuf//:protobuf.bzl", "py_proto_library")`?
The error seems to indicate that the symbol `py_proto_library` isn't loaded into Skylark.
|
68,630,769
|
I have a list of strings in python and want to run a recursive grep on each string in the list. Am using the following code,
```
import subprocess as sp
for python_file in python_files:
out = sp.getoutput("grep -r python_file . | wc -l")
print(out)
```
The output I am getting is the grep of the string "python\_file". What mistake am I committing and what should I do to correct this??
|
2021/08/03
|
[
"https://Stackoverflow.com/questions/68630769",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16582585/"
] |
Your code has several issues. The immediate answer to what you seem to be asking was given in a comment, but there are more things to fix here.
If you want to pass in a variable instead of a static string, you have to use some sort of string interpolation.
`grep` already knows how to report how many lines matched; use `grep -c`. Or just ask Python to count the number of output lines. Trimming off the pipe to `wc -l` allows you to also avoid invoking a shell, which is a good thing; see also [Actual meaning of `shell=True` in subprocess](https://stackoverflow.com/questions/3172470/actual-meaning-of-shell-true-in-subprocess).
`grep` already knows how to search for multiple expressions. Try passing in the whole list as an input file with `grep -f -`.
```py
import subprocess as sp
out = sp.check_output(
["grep", "-r", "-f", "-", "."],
input="\n".join(python_files), text=True)
print(len(out.splitlines()))
```
If you want to speed up your processing and the patterns are all static strings, try also adding the `-F` option to `grep`.
Of course, all of this is relatively easy to do natively in Python, too. You should easily be able to find examples with `os.walk()`.
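For instance, a rough `os.walk()` equivalent of `grep -rF -c` could look like this (the function name, the fixed-string matching, and the skip-unreadable-files policy are all my own choices here, not part of the original answer):

```python
import os

def count_matches(patterns, root="."):
    """Count lines under `root` that contain any of the fixed strings in `patterns`."""
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="replace") as fh:
                    for line in fh:
                        # plain substring test, like grep -F with multiple patterns
                        if any(p in line for p in patterns):
                            count += 1
            except OSError:
                continue  # unreadable file: skip it, like grep -s
    return count
```

Call it as, e.g., `count_matches(python_files, root=".")` to get a single combined count.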
|
Your intent isn't totally clear from the way you've written your question, but the first argument to `grep` is the **pattern** (`python_file` in your example), and the second is the **file(s)** (`.` in your example).
You could write this in native Python or just use grep directly, which is probably easier than using both!
`grep` args
* `--count` will report just the number of matching lines
* `--file` *Read one or more newline separated patterns from file.* (manpage)
```sh
grep --count --file patterns.txt -r .
```
```py
import re
from pathlib import Path

for pattern in patterns:
    count = 0
    # rglob("*") recurses like grep -r; skip directories before opening
    for path_file in Path(".").rglob("*"):
        if not path_file.is_file():
            continue
        with open(path_file) as fh:
            for line in fh:
                # re.search matches anywhere in the line, like grep
                if re.search(pattern, line):
                    count += 1
    print(count)
```
NOTE that the behavior in your question would get a separate word count for each pattern, while you may really want a single count
|
62,295,148
|
I am new to lambda functions. I am trying to get the sum of elements in a list, but facing this issue repeatedly.
[](https://i.stack.imgur.com/9k7gu.png)
When following up with tutorials online([Tutorial-link](https://realpython.com/python-lambda/)). The following code is working fine for them. But, I am facing the same problem.
[](https://i.stack.imgur.com/A6syb.png)
**Can someone help me to understand why is this happening?**
|
2020/06/10
|
[
"https://Stackoverflow.com/questions/62295148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2663091/"
] |
The [findOne](https://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html#findOne) function in the node.js driver has a slightly different definition than the one in the shell.
Try
```
db.collection("inventory").findOne({ _id: 1 }, {projection:{ "size": 0 }}, (err, result) => {
```
|
Maybe you should try this:
```
db.collection("inventory").findOne({ _id: 1 }).select("-size").exec((err, result) => { /* body of the fn */ });
```
|
67,047,424
|
In this code, I am trying to insert a code block using [react-quilljs](https://github.com/gtgalone/react-quilljs#usage)
```
import React, { useState } from 'react';
import hljs from 'highlight.js';
import { useQuill } from 'react-quilljs';
import 'quill/dist/quill.snow.css'; // Add css for snow theme
export default () => {
hljs.configure({
languages: ['javascript', 'ruby', 'python', 'rust'],
});
const theme = 'snow';
const modules = {
toolbar: [['code-block']],
syntax: {
highlight: (text) => hljs.highlightAuto(text).value,
},
};
const placeholder = 'Compose an epic...';
const formats = ['code-block'];
const { quill, quillRef } = useQuill({
theme,
modules,
formats,
placeholder,
});
const [content, setContent] = useState('');
React.useEffect(() => {
if (quill) {
quill.on('text-change', () => {
setContent(quill.root.innerHTML);
});
}
}, [quill]);
const submitHandler = (e) => {};
return (
<div style={{ width: 500, height: 300 }}>
<div ref={quillRef} />
<form onSubmit={submitHandler}>
<button type='submit'>Submit</button>
</form>
{quill && (
<div
className='ql-editor'
dangerouslySetInnerHTML={{ __html: content }}
/>
)}
</div>
);
};
```
Using the above code, I get the following preview of the editor's content
[](https://i.stack.imgur.com/7tvmX.png)
There are two problems with this:
1. There is no code syntax highlighting, as I want to achieve this using the `highlihgt.js` package, inside the code block inside the editor, and
2. The code block is not displayed (with the black background and highlighting syntax when it's working) in the previewing div outside the editor.
How can I fix these two issues?
|
2021/04/11
|
[
"https://Stackoverflow.com/questions/67047424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10134463/"
] |
Your code *is* getting marked up by `highlight.js` with CSS classes:
```
<span class="hljs-keyword">const</span>
```
You are not seeing the impact of those CSS classes because you don't have a stylesheet loaded to handle them. You need to choose the theme that you want from [the available styles](https://highlightjs.org/static/demo/) and import the corresponding stylesheet.
```
import 'highlight.js/styles/darcula.css';
```
|
Look at the CSS in the editor mode. It depends on two class names, `ql-snow` and `ql-editor`.
You can fix this issue by wrapping the preview in one more div with className `ql-snow` (note that a div using `dangerouslySetInnerHTML` must not have children, so it is self-closing):
```
<div className='ql-snow'>
  <div className='ql-editor' dangerouslySetInnerHTML={{ __html: content }} />
</div>
```
This should work.
|
67,047,424
|
In this code, I am trying to insert a code block using [react-quilljs](https://github.com/gtgalone/react-quilljs#usage)
```
import React, { useState } from 'react';
import hljs from 'highlight.js';
import { useQuill } from 'react-quilljs';
import 'quill/dist/quill.snow.css'; // Add css for snow theme
export default () => {
hljs.configure({
languages: ['javascript', 'ruby', 'python', 'rust'],
});
const theme = 'snow';
const modules = {
toolbar: [['code-block']],
syntax: {
highlight: (text) => hljs.highlightAuto(text).value,
},
};
const placeholder = 'Compose an epic...';
const formats = ['code-block'];
const { quill, quillRef } = useQuill({
theme,
modules,
formats,
placeholder,
});
const [content, setContent] = useState('');
React.useEffect(() => {
if (quill) {
quill.on('text-change', () => {
setContent(quill.root.innerHTML);
});
}
}, [quill]);
const submitHandler = (e) => {};
return (
<div style={{ width: 500, height: 300 }}>
<div ref={quillRef} />
<form onSubmit={submitHandler}>
<button type='submit'>Submit</button>
</form>
{quill && (
<div
className='ql-editor'
dangerouslySetInnerHTML={{ __html: content }}
/>
)}
</div>
);
};
```
Using the above code, I get the following preview of the editor's content
[](https://i.stack.imgur.com/7tvmX.png)
There are two problems with this:
1. There is no code syntax highlighting, as I want to achieve this using the `highlihgt.js` package, inside the code block inside the editor, and
2. The code block is not displayed (with the black background and highlighting syntax when it's working) in the previewing div outside the editor.
How can I fix these two issues?
|
2021/04/11
|
[
"https://Stackoverflow.com/questions/67047424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10134463/"
] |
Your code *is* getting marked up by `highlight.js` with CSS classes:
```
<span class="hljs-keyword">const</span>
```
You are not seeing the impact of those CSS classes because you don't have a stylesheet loaded to handle them. You need to choose the theme that you want from [the available styles](https://highlightjs.org/static/demo/) and import the corresponding stylesheet.
```
import 'highlight.js/styles/darcula.css';
```
|
I got the same issue and when I used `hjls` what happened was that I got syntax highlighting in the editor but not in the value.
If you noticed the syntax gets highlighted some seconds after you write the code block, this means that the value gets set before the syntax gets highlighted.
So, I just set the value after 2 seconds using `setTimeout` and this solved my problem
Like this:
```
<ReactQuill
theme="snow"
value={value}
onChange={(content) => {
setTimeout(() => {
setValue(content)
}, 2000)
}}
modules={modules}
formats={formats}
bounds="#editor"
placeholder="Write something..."
className="text-black dark:text-white"
/>
```
|
10,509,293
|
>
> **Possible Duplicate:**
>
> [Why aren't Python's superclass **init** methods automatically invoked?](https://stackoverflow.com/questions/3782827/why-arent-pythons-superclass-init-methods-automatically-invoked)
>
>
>
For example:
```
class Pet(object):
def __init__(self, name, species):
self.name = name
self.species = species
def getName(self):
return self.name
def getSpecies(self):
return self.species
def __str__(self):
return "%s is a %s" % (self.name, self.species)
class Dog(Pet):
def __init__(self, name, chases_cats):
Pet.__init__(self, name, "Dog")
self.chases_cats = chases_cats
def chasesCats(self):
return self.chases_cats
```
As you can see Dog inherits from pet. I understand the code perfectly fine. But why must we call the init for pet in the Dog class? Why isn't just calling it as in the first line of the dog class enough ( class Dog(Pet) )? It seems to only create messier code. It kind of kills the point of inheritance in Python to me.
|
2012/05/09
|
[
"https://Stackoverflow.com/questions/10509293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1241100/"
] |
Having the language force the super class to initialize before or after means that you lose functionality. A subclass may depend on a superclass's initialization to be run first, or vice versa.
In addition, it wouldn't have any way of knowing what arguments to pass -- subclasses decide what values are passed to an initializer, which gives more flexibility over automatically passing all arguments.
An alternative method of initializing the super-class is by using [super](http://docs.python.org/library/functions.html#super), though this will actually initialize the super-class of `self` dynamically, by looking it up in the object's `__mro__` (method resolution order)
In Python 2:
```
super(Dog, self).__init__(name, "Dog")
```
In Python 3, you can further reduce the syntax and avoid repeating yourself:
```
super().__init__(name, "Dog")
```
**Edit**:
Since you're not actually using the `name` argument passed to dog, you can further reduce some syntax, by accepting arbitrary keyword and positional arguments in the `Dog` initializer, and passing them up the initialization chain:
```
class Pet(object):
    def __init__(self, name, species, *args, **kwargs):
        self.name = name
        self.species = species

class Dog(Pet):
    def __init__(self, chases_cats, *args, **kwargs):
        # chases_cats is consumed here; everything else flows up to Pet
        Pet.__init__(self, *args, species="Dog", **kwargs)
        self.chases_cats = chases_cats
```
In your scenario, the initializer might be simple enough to not need this, but it's useful to know for when you have more arguments, and/or a deeper class hierarchy.
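A quick usage sketch of that argument-forwarding pattern (the classes are repeated here so the snippet runs on its own):

```python
class Pet(object):
    def __init__(self, name, species, *args, **kwargs):
        self.name = name
        self.species = species

class Dog(Pet):
    def __init__(self, chases_cats, *args, **kwargs):
        # chases_cats is consumed here; everything else flows up to Pet
        Pet.__init__(self, *args, species="Dog", **kwargs)
        self.chases_cats = chases_cats

rex = Dog(True, "Rex")
print(rex.name, rex.species, rex.chases_cats)  # Rex Dog True
```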
|
You can make species a class attribute like this
```
class Pet(object):
species = None
def __init__(self, name):
self.name = name
def getName(self):
return self.name
def getSpecies(self):
return self.species
def __str__(self):
return "%s is a %s" % (self.name, self.species)
class Dog(Pet):
species = "Dog"
def __init__(self, name, chases_cats):
Pet.__init__(self, name)
self.chases_cats = chases_cats
def chasesCats(self):
return self.chases_cats
```
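To see the class-attribute lookup in action, here is a self-contained, trimmed copy of the classes above: `self.species` falls back to the attribute defined on the class.

```python
class Pet(object):
    species = None

    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "%s is a %s" % (self.name, self.species)

class Dog(Pet):
    species = "Dog"  # shadows Pet.species for all Dog instances

    def __init__(self, name, chases_cats):
        Pet.__init__(self, name)
        self.chases_cats = chases_cats

rex = Dog("Rex", True)
print(rex)  # Rex is a Dog
```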
|
67,377,043
|
What is the reason why I cannot access a specific line number in the already split string?
```
a = "ABCDEFGHIJKLJMNOPRSTCUFSC"
barcode = "2"
import textwrap
prazno = textwrap.fill(a,width=5)
podeli = prazno.splitlines()
```
Here the output is correct:
```
print(podeli)
ABCDE
FGHIJ
KLJMN
OPRST
CUFSC
```
However, when I want to split one of the lines e.g podeli[2] by 3 characters the python just ignores that and gives the same output like that split of podeli[2] (line 2) has not occured.
```
if barcode[0] == '1':
podeli[1] += ' MATA'
elif barcode[0] == '2':
podeli[1] += ' MATA'
for podeli[2] in podeli:
textwrap.fill(podeli[2], width=3)
podeli[2].splitlines()
podeli[2] += ' MATA'
```
The expected output would be:
```
ABCDE MATA
FGH MATA
IJ
KLJMN
OPRST
CUFSC
```
Is there a way to split the line by a certain length and its order number?
Thank you, guys!
|
2021/05/03
|
[
"https://Stackoverflow.com/questions/67377043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15827381/"
] |
You can solve your immediate problem by rebuilding the list, but I fear you have a more general problem that you haven't told us.
```
if barcode[0] == '1':
podeli[1] += ' MATA'
elif barcode[0] == '2':
podeli[1] += ' MATA'
line2 = textwrap.fill(podeli[2], width=3).splitlines()
podeli = podeli[0:2] + line2 + podeli[3:]
podeli[2] += ' MATA'
```
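Tracing that rebuild with the `podeli` list from the question (a self-contained sketch of the barcode `'2'` branch; note the result follows the answer's code, not the asker's expected output):

```python
import textwrap

podeli = ['ABCDE', 'FGHIJ', 'KLJMN', 'OPRST', 'CUFSC']

podeli[1] += ' MATA'
# split line 2 into width-3 chunks, then splice them back into the list
line2 = textwrap.fill(podeli[2], width=3).splitlines()
podeli = podeli[0:2] + line2 + podeli[3:]
podeli[2] += ' MATA'

print(podeli)  # ['ABCDE', 'FGHIJ MATA', 'KLJ MATA', 'MN', 'OPRST', 'CUFSC']
```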
|
Try this, it should split a string you specified by a width of 3:
```
n = 3
[((podeli[2])[i:i+n]) for i in range(0, len(podeli[2]), n)]
```
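For example, applied to the question's third line `podeli[2]`:

```python
s = "KLJMN"  # podeli[2] from the question
n = 3
# slice the string in steps of n; the last chunk may be shorter
chunks = [s[i:i + n] for i in range(0, len(s), n)]
print(chunks)  # -> ['KLJ', 'MN']
```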
|
40,882,899
|
I am using python+bs4+pyside in the code; please look at the part of the code below:
```
#coding:gb2312
import urllib2
import sys
import urllib
import urlparse
import random
import time
from datetime import datetime, timedelta
import socket
from bs4 import BeautifulSoup
import lxml.html
from PySide.QtGui import *
from PySide.QtCore import *
from PySide.QtWebKit import *
def download(self, url, headers, proxy, num_retries, data=None):
print 'Downloading:', url
request = urllib2.Request(url, data, headers or {})
opener = self.opener or urllib2.build_opener()
if proxy:
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
try:
response = opener.open(request)
html = response.read()
code = response.code
except Exception as e:
print 'Download error:', str(e)
html = ''
if hasattr(e, 'code'):
code = e.code
if num_retries > 0 and 500 <= code < 600:
# retry 5XX HTTP errors
return self._get(url, headers, proxy, num_retries-1, data)
else:
code = None
return {'html': html, 'code': code}
def crawling_hdf(openfile):
filename = open(openfile,'r')
namelist = filename.readlines()
app = QApplication(sys.argv)
for name in namelist:
url = "http://so.haodf.com/index/search?type=doctor&kw="+ urllib.quote(name)
#get doctor's home page
D = Downloader(delay=DEFAULT_DELAY, user_agent=DEFAULT_AGENT, proxies=None, num_retries=DEFAULT_RETRIES, cache=None)
html = D(url)
soup = BeautifulSoup(html)
tr = soup.find(attrs={'class':'docInfo'})
td = tr.find(attrs={'class':'docName font_16'}).get('href')
print td
#get doctor's detail information page
loadPage_bs4(td)
filename.close()
if __name__ == '__main__':
crawling_hdf("name_list.txt")
```
After I run the program, a warning message is shown:
**Warning (from warnings module):
File "C:\Python27\lib\site-packages\bs4\dammit.py", line 231
"Some characters could not be decoded, and were "
UnicodeWarning: Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.**
I have used ***print str(html)*** and found that all the Chinese text inside the tags is garbled.
I have tried the "decode/encode" and "gzip" solutions found on this website, but they don't work in my case.
Thank you very much for your help!
|
2016/11/30
|
[
"https://Stackoverflow.com/questions/40882899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229399/"
] |
Try changing your foreach loop to this:
```
foreach ($list as $key => $value) {
echo $value->title." || ";
echo $value->link." ";
echo nl2br("\n");
}
```
Hope this works for you.
|
What you did was loop over the keys and values separately, and then you tried to get the values from the keys of the stdClass; what you need to do is loop over it as an object. I also used `json_decode($json_str, true)` to get the result as an array instead of an stdClass.
```
$json_str = '[{"title":"root","link":"one"},{"title":"branch","link":"two"},{"title":"leaf","link":"three"}]';
$json_decoded = json_decode($json_str, true);
foreach($json_decoded as $object)
{
echo $object['title'];
echo $object['link'];
}
```
|
40,882,899
|
I am using python+bs4+pyside in the code; please look at the part of the code below:
```
#coding:gb2312
import urllib2
import sys
import urllib
import urlparse
import random
import time
from datetime import datetime, timedelta
import socket
from bs4 import BeautifulSoup
import lxml.html
from PySide.QtGui import *
from PySide.QtCore import *
from PySide.QtWebKit import *
def download(self, url, headers, proxy, num_retries, data=None):
print 'Downloading:', url
request = urllib2.Request(url, data, headers or {})
opener = self.opener or urllib2.build_opener()
if proxy:
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
try:
response = opener.open(request)
html = response.read()
code = response.code
except Exception as e:
print 'Download error:', str(e)
html = ''
if hasattr(e, 'code'):
code = e.code
if num_retries > 0 and 500 <= code < 600:
# retry 5XX HTTP errors
return self._get(url, headers, proxy, num_retries-1, data)
else:
code = None
return {'html': html, 'code': code}
def crawling_hdf(openfile):
filename = open(openfile,'r')
namelist = filename.readlines()
app = QApplication(sys.argv)
for name in namelist:
url = "http://so.haodf.com/index/search?type=doctor&kw="+ urllib.quote(name)
#get doctor's home page
D = Downloader(delay=DEFAULT_DELAY, user_agent=DEFAULT_AGENT, proxies=None, num_retries=DEFAULT_RETRIES, cache=None)
html = D(url)
soup = BeautifulSoup(html)
tr = soup.find(attrs={'class':'docInfo'})
td = tr.find(attrs={'class':'docName font_16'}).get('href')
print td
#get doctor's detail information page
loadPage_bs4(td)
filename.close()
if __name__ == '__main__':
crawling_hdf("name_list.txt")
```
After I run the program, a warning message is shown:
**Warning (from warnings module):
File "C:\Python27\lib\site-packages\bs4\dammit.py", line 231
"Some characters could not be decoded, and were "
UnicodeWarning: Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.**
I have used ***print str(html)*** and found that all the Chinese text inside the tags is garbled.
I have tried the "decode/encode" and "gzip" solutions found on this website, but they don't work in my case.
Thank you very much for your help!
|
2016/11/30
|
[
"https://Stackoverflow.com/questions/40882899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229399/"
] |
Try changing your foreach loop to this:
```
foreach ($list as $key => $value) {
echo $value->title." || ";
echo $value->link." ";
echo nl2br("\n");
}
```
Hope this works for you.
|
**Code :**
```
$list = json_decode($json);
foreach ($list as $item) {
echo $item->title . ' || ' . $item->link . '<br>';
}
```
|
40,882,899
|
I am using python+bs4+pyside in the code; please look at the part of the code below:
```
#coding:gb2312
import urllib2
import sys
import urllib
import urlparse
import random
import time
from datetime import datetime, timedelta
import socket
from bs4 import BeautifulSoup
import lxml.html
from PySide.QtGui import *
from PySide.QtCore import *
from PySide.QtWebKit import *
def download(self, url, headers, proxy, num_retries, data=None):
print 'Downloading:', url
request = urllib2.Request(url, data, headers or {})
opener = self.opener or urllib2.build_opener()
if proxy:
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
try:
response = opener.open(request)
html = response.read()
code = response.code
except Exception as e:
print 'Download error:', str(e)
html = ''
if hasattr(e, 'code'):
code = e.code
if num_retries > 0 and 500 <= code < 600:
# retry 5XX HTTP errors
return self._get(url, headers, proxy, num_retries-1, data)
else:
code = None
return {'html': html, 'code': code}
def crawling_hdf(openfile):
filename = open(openfile,'r')
namelist = filename.readlines()
app = QApplication(sys.argv)
for name in namelist:
url = "http://so.haodf.com/index/search?type=doctor&kw="+ urllib.quote(name)
#get doctor's home page
D = Downloader(delay=DEFAULT_DELAY, user_agent=DEFAULT_AGENT, proxies=None, num_retries=DEFAULT_RETRIES, cache=None)
html = D(url)
soup = BeautifulSoup(html)
tr = soup.find(attrs={'class':'docInfo'})
td = tr.find(attrs={'class':'docName font_16'}).get('href')
print td
#get doctor's detail information page
loadPage_bs4(td)
filename.close()
if __name__ == '__main__':
crawling_hdf("name_list.txt")
```
After I run the program, a warning message is shown:
**Warning (from warnings module):
File "C:\Python27\lib\site-packages\bs4\dammit.py", line 231
"Some characters could not be decoded, and were "
UnicodeWarning: Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.**
I have used ***print str(html)*** and found that all the Chinese text inside the tags is garbled.
I have tried the "decode/encode" and "gzip" solutions found on this website, but they don't work in my case.
Thank you very much for your help!
|
2016/11/30
|
[
"https://Stackoverflow.com/questions/40882899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7229399/"
] |
What you did was loop over the keys and values separately, and then you tried to get the values from the keys of the stdClass; what you need to do is loop over it as an object. I also used `json_decode($json_str, true)` to get the result as an array instead of an stdClass.
```
$json_str = '[{"title":"root","link":"one"},{"title":"branch","link":"two"},{"title":"leaf","link":"three"}]';
$json_decoded = json_decode($json_str, true);
foreach($json_decoded as $object)
{
echo $object['title'];
echo $object['link'];
}
```
|
**Code :**
```
$list = json_decode($json);
foreach ($list as $item) {
echo $item->title . ' || ' . $item->link . '<br>';
}
```
|
66,012,040
|
Running this DAG in Airflow gives the error "Task exited with return code Negsignal.SIGABRT".
I am not sure what I have done wrong.
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**Getting the following logs in Airflow:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to any help with this. Let me know if I need to provide any more details regarding the code or Airflow.
|
2021/02/02
|
[
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] |
I ran into the same error on Mac OSX - which looks like the OS you are using based on the local file path.
Adding the following to my Airflow *scheduler* session fixed the problem:
```
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```
The cause is described in the [Airflow docs](https://airflow.apache.org/blog/airflow-1.10.10/#running-airflow-on-macos):
>
> This error occurs because of added security to restrict
> multiprocessing & multithreading in Mac OS High Sierra and above.
>
>
>
The doc refers to version 1.10.x - but based on local test environments this also resolves the error for v2.x.x.
**kaxil**'s comment in the following issue actually guided me to the relevant Airflow page:
<https://github.com/apache/airflow/issues/12808#issuecomment-738854764>
|
You are executing:
```
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
```
This tries to run the `sql` in `SQL_INSERT_STATEMENT`.
So it executes:
```
f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
```
which gives:
```
INSERT INTO sample_table VALUES ('name', %(id)s)
```
As shown in your own log:
```
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
```
This is not a valid SQL statement.
I can't really tell what SQL you wanted to execute. Based on `SQL_LIST` I can assume that `%(id)s` is supposed to be an id of integer type.
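For reference, a minimal sketch (using the question's own names) of how the question's `SQL_LIST` resolves the `%(id)s` placeholder through Python %-formatting, producing a literal value rather than the unresolved placeholder seen in the log:

```python
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"

# %-formatting substitutes the placeholder, exactly as SQL_LIST does
rendered = SQL_INSERT_STATEMENT % {"id": 56}
print(rendered)  # -> INSERT INTO sample_table VALUES ('name', 56)
```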
|
66,012,040
|
Running this DAG in Airflow gives the error "Task exited with return code Negsignal.SIGABRT".
I am not sure what I have done wrong.
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**Getting the following logs in Airflow:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to any help with this. Let me know if I need to provide any more details regarding the code or Airflow.
|
2021/02/02
|
[
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] |
This helped me too on Mac OSX:
1. Open a terminal and open .zshrc (or .bash\_profile) with `nano .zshrc`
2. Add this line to the profile `export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`
3. Restart scheduler & web server.
|
You are executing:
```
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
```
This tries to run the `sql` in `SQL_INSERT_STATEMENT`.
So it executes:
```
f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
```
which gives:
```
INSERT INTO sample_table VALUES ('name', %(id)s)
```
As shown in your own log:
```
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
```
This is not a valid SQL statement.
I can't really tell what SQL you wanted to execute. Based on `SQL_LIST` I can assume that `%(id)s` is supposed to be an id of integer type.
|
66,012,040
|
Running this DAG in Airflow gives the error "Task exited with return code Negsignal.SIGABRT".
I am not sure what I have done wrong.
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**Getting the following logs in Airflow:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to any help with this. Let me know if I need to provide any more details regarding the code or Airflow.
|
2021/02/02
|
[
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] |
You are executing:
```
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
```
This tries to run the `sql` in `SQL_INSERT_STATEMENT`.
So it executes:
```
f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
```
which gives:
```
INSERT INTO sample_table VALUES ('name', %(id)s)
```
As shown in your own log:
```
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
```
This is not a valid SQL statement.
I can't really tell what SQL you wanted to execute. Based on `SQL_LIST` I can assume that `%(id)s` is supposed to be an id of integer type.
|
I think the issue you are facing has to do with OCSP checking.
Add "insecure\_mode=True" to the Snowflake connector to disable OCSP checking:
```py
snowflake.connector.connect(
insecure_mode=True,
)
```
But "insecure\_mode=True" is needed only when running inside Airflow (Running as a pyscript works fine) . I don't know why is that.
Note: export `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` didn't work for me because i didn't get this error
`may have been in progress in another thread when fork() was called.` which will usually throw `Negsignal.SIGABRT` exception
|
66,012,040
|
Running this DAG in Airflow gives the error "Task exited with return code Negsignal.SIGABRT".
I am not sure what I have done wrong.
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**Getting the following logs in Airflow:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to help with this. Let me know if I need to provide any more details w.r.t. the code or Airflow.
|
2021/02/02
|
[
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] |
I ran into the same error on Mac OSX - which looks like the OS you are using based on the local file path.
Adding the following to my Airflow *scheduler* session fixed the problem:
```
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```
The cause is described in the [Airflow docs](https://airflow.apache.org/blog/airflow-1.10.10/#running-airflow-on-macos):
>
> This error occurs because of added security to restrict
> multiprocessing & multithreading in Mac OS High Sierra and above.
>
>
>
The doc refers to version 1.10.x - but based on local test environments this also resolves the error for v2.x.x.
**kaxil** comment in the following issue actually guided me to the relevant Airflow page:
<https://github.com/apache/airflow/issues/12808#issuecomment-738854764>
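For reference, the fix is just an environment variable exported in the same shell session before starting the scheduler (a minimal sketch; the `airflow scheduler` invocation is shown commented out):

```shell
# Disable macOS fork-safety checks for the Objective-C runtime,
# which otherwise abort forked Airflow task processes with SIGABRT.
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
# then start the scheduler in this same session:
# airflow scheduler
```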
|
I think the issue you are facing has to do with OCSP checking.
Add "insecure\_mode=True" in the Snowflake connector to disable OCSP checking:
```py
snowflake.connector.connect(
insecure_mode=True,
)
```
But "insecure\_mode=True" is needed only when running inside Airflow (Running as a pyscript works fine) . I don't know why is that.
Note: export `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` didn't work for me because i didn't get this error
`may have been in progress in another thread when fork() was called.` which will usually throw `Negsignal.SIGABRT` exception
|
66,012,040
|
Running this DAG in Airflow gives the error `Task exited with return code Negsignal.SIGABRT`.
I am not sure what I have done wrong.
```
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.utils.dates import days_ago
SNOWFLAKE_CONN_ID = 'snowflake_conn'
# TODO: should be able to rely on connection's schema, but currently param required by S3ToSnowflakeTransfer
# SNOWFLAKE_SCHEMA = 'schema_name'
#SNOWFLAKE_STAGE = 'stage_name'
SNOWFLAKE_WAREHOUSE = 'SF_TUTS_WH'
SNOWFLAKE_DATABASE = 'KAFKA_DB'
SNOWFLAKE_ROLE = 'sysadmin'
SNOWFLAKE_SAMPLE_TABLE = 'sample_table'
CREATE_TABLE_SQL_STRING = (
f"CREATE OR REPLACE TRANSIENT TABLE {SNOWFLAKE_SAMPLE_TABLE} (name VARCHAR(250), id INT);"
)
SQL_INSERT_STATEMENT = f"INSERT INTO {SNOWFLAKE_SAMPLE_TABLE} VALUES ('name', %(id)s)"
SQL_LIST = [SQL_INSERT_STATEMENT % {"id": n} for n in range(0, 10)]
default_args = {
'owner': 'airflow',
}
dag = DAG(
'example_snowflake',
default_args=default_args,
start_date=days_ago(2),
tags=['example'],
)
snowflake_op_sql_str = SnowflakeOperator(
task_id='snowflake_op_sql_str',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=CREATE_TABLE_SQL_STRING,
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_with_params = SnowflakeOperator(
task_id='snowflake_op_with_params',
dag=dag,
snowflake_conn_id=SNOWFLAKE_CONN_ID,
sql=SQL_INSERT_STATEMENT,
parameters={"id": 56},
warehouse=SNOWFLAKE_WAREHOUSE,
database=SNOWFLAKE_DATABASE,
# schema=SNOWFLAKE_SCHEMA,
role=SNOWFLAKE_ROLE,
)
snowflake_op_sql_list = SnowflakeOperator(
task_id='snowflake_op_sql_list', dag=dag, snowflake_conn_id=SNOWFLAKE_CONN_ID, sql=SQL_LIST
)
snowflake_op_sql_str >> [
snowflake_op_with_params,
snowflake_op_sql_list,]
```
**Getting logs in Airflow as below:**
```
Reading local file: /Users/aashayjain/airflow/logs/snowflake_test/snowflake_op_with_params/2021-02-02T13:51:18.229233+00:00/1.log
[2021-02-02 19:21:38,880] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:826} INFO - Dependencies all met for <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [queued]>
[2021-02-02 19:21:38,887] {taskinstance.py:1017} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,887] {taskinstance.py:1018} INFO - Starting attempt 1 of 1
[2021-02-02 19:21:38,887] {taskinstance.py:1019} INFO -
--------------------------------------------------------------------------------
[2021-02-02 19:21:38,892] {taskinstance.py:1038} INFO - Executing <Task(SnowflakeOperator): snowflake_op_with_params> on 2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,895] {standard_task_runner.py:51} INFO - Started process 16510 to run task
[2021-02-02 19:21:38,901] {standard_task_runner.py:75} INFO - Running: ['airflow', 'tasks', 'run', 'snowflake_test', 'snowflake_op_with_params', '2021-02-02T13:51:18.229233+00:00', '--job-id', '7', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/snowflake_test.py', '--cfg-path', '/var/folders/6h/1pzt4pbx6h32h6p5v503wws00000gp/T/tmp1w61m38s']
[2021-02-02 19:21:38,903] {standard_task_runner.py:76} INFO - Job 7: Subtask snowflake_op_with_params
[2021-02-02 19:21:38,933] {logging_mixin.py:103} INFO - Running <TaskInstance: snowflake_test.snowflake_op_with_params 2021-02-02T13:51:18.229233+00:00 [running]> on host 1.0.0.127.in-addr.arpa
[2021-02-02 19:21:38,954] {taskinstance.py:1232} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=snowflake_test
AIRFLOW_CTX_TASK_ID=snowflake_op_with_params
AIRFLOW_CTX_EXECUTION_DATE=2021-02-02T13:51:18.229233+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-02-02T13:51:18.229233+00:00
[2021-02-02 19:21:38,955] {snowflake.py:119} INFO - Executing: INSERT INTO TEST_TABLE VALUES ('name', %(id)s)
[2021-02-02 19:21:38,961] {base.py:74} INFO - Using connection to: id: snowflake_conn. Host: uva00063.us-east-1.snowflakecomputing.com, Port: None, Schema: , Login: aashay, Password: XXXXXXXX, extra: XXXXXXXX
[2021-02-02 19:21:38,963] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.3.7, Python Version: 3.7.3, Platform: Darwin-19.5.0-x86_64-i386-64bit
[2021-02-02 19:21:38,964] {connection.py:769} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-02-02 19:21:38,964] {connection.py:785} INFO - Setting use_openssl_only mode to False
[2021-02-02 19:21:38,996] {local_task_job.py:118} INFO - Task exited with return code Negsignal.SIGABRT
```
apache-airflow==2.0.0
python 3.7.3
Looking forward to help with this. Let me know if I need to provide any more details w.r.t. the code or Airflow.
|
2021/02/02
|
[
"https://Stackoverflow.com/questions/66012040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10110307/"
] |
This helped me too on Mac OSX:
1. Open a terminal and open .zshrc (or .bash\_profile) with `nano .zshrc`
2. Add this line to the profile `export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`
3. Restart scheduler & web server.
|
I think the issue you are facing has to do with OCSP checking.
Add "insecure\_mode=True" in the Snowflake connector to disable OCSP checking:
```py
snowflake.connector.connect(
insecure_mode=True,
)
```
But "insecure\_mode=True" is needed only when running inside Airflow (Running as a pyscript works fine) . I don't know why is that.
Note: export `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` didn't work for me because i didn't get this error
`may have been in progress in another thread when fork() was called.` which will usually throw `Negsignal.SIGABRT` exception
|
32,103,424
|
Coming from [Python recursively appending list function](https://stackoverflow.com/questions/32102420/python-recursively-appending-list-function)
Trying to recursively get a list of permissions associated with a file structure.
I have this function:
```
def get_child_perms(self, folder, request, perm_list):
# Folder contains other folders
if folder.get_children():
# For every sub-folder
return [self.get_child_perms(subfolder, request, perm_list) for subfolder in folder.get_children()]
return folder.has_read_permission(request)
```
That returns all the results except the folders that contain other folders.
```
folder <- Missing (allowed)
subfolder <- Missing (restricted)
subsubfolder <- Get this (restricted)
files
```
Output from function would be
[True, False, False]
another case would be, where A = allowed, R = restricted
```
folder A
subfolder A
subsubfolder R
files
files
subfolder R
files
subfolder A
subsubfolder A
files
files
subfolder A
files
files
```
Output would be
[True,True,False,False,True,True,True]
|
2015/08/19
|
[
"https://Stackoverflow.com/questions/32103424",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4941891/"
] |
The basic issue is that you only return the folder's permission when the folder does not have any children; when it has children, you do not include `folder.has_read_permission(request)` in your return result, which is most probably what is causing your issue. You need to do -
```
def get_child_perms(self, folder, request, perm_list):
# Folder contains other folders
if folder.get_children():
# For every sub-folder
return [folder.has_read_permission(request)] + [self.get_child_perms(subfolder, request, perm_list) for subfolder in folder.get_children()]
return [folder.has_read_permission(request)]
```
This should result in (not tested) -
```
[folderperm, [subfolderperm, [subsubfolderperm]]]
```
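If a flat list like `[True, False, False]` is wanted (as in the question), the same recursion can use `extend` instead of nested lists. A sketch with a minimal stand-in `Folder` class, since the real folder API is not shown:

```python
def get_child_perms_flat(folder, request):
    # Start with this folder's own permission, then append all descendants'.
    perms = [folder.has_read_permission(request)]
    for sub in folder.get_children():
        perms.extend(get_child_perms_flat(sub, request))
    return perms

# Minimal stand-in objects for demonstration (not the real API):
class Folder:
    def __init__(self, allowed, children=()):
        self.allowed, self.children = allowed, list(children)
    def get_children(self):
        return self.children
    def has_read_permission(self, request):
        return self.allowed

tree = Folder(True, [Folder(False, [Folder(False)])])
print(get_child_perms_flat(tree, request=None))  # [True, False, False]
```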
---
|
Why not use [os.walk](https://docs.python.org/3/library/os.html?highlight=walk#os.walk)?
>
> When topdown is True, the caller can modify the dirnames list in-place
> (perhaps using del or slice assignment), and walk() will only recurse
> into the subdirectories whose names remain in dirnames; this can be
> used to prune the search, impose a specific order of visiting, or even
> to inform walk() about directories the caller creates or renames
> before it resumes walk() again. Modifying dirnames when topdown is
> False is ineffective, because in bottom-up mode the directories in
> dirnames are generated before dirpath itself is generated.
>
>
>
For example, you can build a generator (lazy list) that yields only non-restricted directories:
```
def walk_allowed(top):
    for (dirpath, dirnames, filenames) in os.walk(top):
        if restricted(dirpath):
            dirnames[:] = []  # prune in-place; `del dirnames` would not stop the walk
            continue
        yield (dirpath, tuple(filenames))
```
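A runnable version of that idea, with a stand-in `restricted()` check and a throwaway directory tree (the names here are illustrative, not from the question):

```python
import os
import tempfile

def walk_unrestricted(top, restricted_names):
    for dirpath, dirnames, filenames in os.walk(top):
        if os.path.basename(dirpath) in restricted_names:
            dirnames[:] = []  # clear in-place so walk() does not descend further
            continue
        yield dirpath, tuple(filenames)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "open", "secret"))
open(os.path.join(root, "open", "a.txt"), "w").close()

paths = [p for p, _ in walk_unrestricted(root, {"secret"})]
# yields root and root/open, but nothing named "secret"
```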
|
26,643,903
|
I'm trying to use python requests to PUT a .pmml model to a local openscoring server.
This works (from directory containing DecisionTreeIris.pmml):
```
curl -X PUT --data-binary @DecisionTreeIris.pmml -H "Content-type: text/xml" http://localhost:8080/openscoring/model/DecisionTreeIris
```
This doesn't:
```
import requests
file = '/Users/weitzenfeld/IntelliJProjects/openscoring/openscoring-server/etc/DecisionTreeIris.pmml'
r = requests.put('http://localhost:8080/openscoring/model/DecisionTreeIris', files={'file': open(file, 'rb')})
r.text
```
returns:
```
u'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>\n<title>Error 415 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 415</h2>\n<p>Problem accessing /openscoring/model/DecisionTreeIris. Reason:\n<pre> Unsupported Media Type</pre></p>\n<hr /><i><small>Powered by Jetty://</small></i>\n</body>\n</html>\n'
```
I also tried:
```
r = requests.put('http://localhost:8080/openscoring/model/DecisionTreeIris', files={'file': open(file, 'rb')}, headers={'Content-type': 'text/xml', 'Accept': 'text/xml'})
r.text
```
which returns:
```
u'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>\n<title>Error 406 </title>\n</head>\n<body>\n<h2>HTTP ERROR: 406</h2>\n<p>Problem accessing /openscoring/model/DecisionTreeIris. Reason:\n<pre> Not Acceptable</pre></p>\n<hr /><i><small>Powered by Jetty://</small></i>\n</body>\n</html>\n'
```
Note that my python attempt is the same as in the accepted answer to this question: [Using Python to PUT PMML](https://stackoverflow.com/questions/24320021/using-python-to-put-pmml).
Also, someone with >1500 rep should consider making an 'openscoring' tag.
|
2014/10/30
|
[
"https://Stackoverflow.com/questions/26643903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/829332/"
] |
You are trying to store Components in the same variable. This is wrong:
```
var enemy1Clone : Transform = Instantiate(enemy1, transform.position, transform.rotation);
enemy1Clone.GetComponent("enemyscript");
enemy1Clone.GetComponent(Animator);
```
`enemy1Clone` is of type `Transform`, don't try putting `enemyscript` or `Animator` Components inside of it.
You don't need those two `GetComponent` calls, so just discard them.
---
I have completely changed the logic for your game. It works now.
**Player.js**
```
function Awake()
{
GUIcombo.text = "COMBO: ";
GUIlives.text = "LIVES: 3";
}
function Start ()
{
anim = GetComponent(Animator);
}
function Update ()
{
if(lives <= 0)
Destroy(Player);
Player.transform.position.x = -4.325;
if(!jumped)
anim.SetFloat("hf",0.0);
if(Input.GetButtonDown("Fire1") && !jumped)
{
jumpup();
jumped = true;
anim.SetFloat("hf",1);
}
}
function OnCollisionEnter2D(coll: Collision2D)
{
var g : GameObject = coll.gameObject;
if(g.CompareTag("ground"))
{
anim.SetFloat("hf",0.0);
jumped=false;
combo = 0;
GUIcombo.text = "COMBO: " + combo;
}
else if(g.CompareTag("enemy") && jumped)
{
// Notify the Enemy to die.
g.SendMessage("die");
jumpup();
jumped=true;
combo += 1;
GUIcombo.text = "COMBO: " + combo;
}
else if(g.CompareTag("enemy") && !jumped)
{
lives -=1;
GUIlives.text = "LIVES: " + lives;
}
}
function slam()
{
Player.rigidbody2D.AddForce(new Vector2(0,-3000), ForceMode2D.Force);
}
function glide()
{
Player.rigidbody2D.AddForce(Vector2(0,600), ForceMode2D.Force);
}
function jumpup()
{
Player.transform.Translate(Vector3(Input.GetAxis("Vertical") * speed * Time.deltaTime, 0, 0));
Player.rigidbody2D.velocity = Vector2(0,10);
if(jumplevel2)
Player.rigidbody2D.velocity = Vector2(0,13);
if(jumplevel3)
Player.rigidbody2D.velocity = Vector2(0,16);
}
```
**Enemy.js**
```
#pragma strict
var enemy : GameObject;
var speed : float = 1.0;
var anim : Animator;
var isDead : boolean = false;
function Start()
{
anim = GetComponent(Animator);
anim.SetFloat("EnemyDie", 0);
enemy.transform.position.x = 8.325;
enemy.transform.position.y = -1.2;
}
function Update()
{
if(!isDead)
{
enemy.transform.Translate(Vector3(Input.GetAxis("Horizontal") * speed * Time.deltaTime, 0, 0));
enemy.rigidbody2D.velocity = Vector2(-5, 0);
}
}
// Call here when enemy should die.
function die()
{
if(!isDead)
{
isDead = true;
anim.SetFloat("EnemyDie", 1);
yield WaitForSeconds(0.5f);
Destroy(gameObject);
}
}
```
|
If the **Animator** is not initialized, you have to re-order the components: put the **Animator** component right below the **Transform** component and you should be fine.
|
56,496,458
|
The [logging docs](https://docs.python.org/2/library/logging.html) don't mention what the default logger obtained from [`basicConfig`](https://docs.python.org/2/library/logging.html#logging.basicConfig) writes to: stdout or stderr.
What is the default behavior?
|
2019/06/07
|
[
"https://Stackoverflow.com/questions/56496458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1804173/"
] |
If no `filename` argument is passed to [logging.basicConfig](https://docs.python.org/2/library/logging.html#logging.basicConfig) it will configure a `StreamHandler`. If a `stream` argument is passed to `logging.basicConfig` it will pass this on to `StreamHandler`; otherwise `StreamHandler` defaults to using `sys.stderr`, as can be seen from the [StreamHandler docs](https://docs.python.org/2/library/logging.handlers.html#logging.StreamHandler)
>
> class logging.StreamHandler(stream=None)
>
>
> Returns a new instance of the StreamHandler class. If stream is specified, the instance will use it for logging output; otherwise, sys.stderr will be used.
>
>
>
and the [source code](https://github.com/python/cpython/blob/2bfc2dc214445550521074f428245b502d215eac/Lib/logging/__init__.py#L827):
```
class StreamHandler(Handler):
"""
A handler class which writes logging records, appropriately formatted,
to a stream. Note that this class does not close the stream, as
sys.stdout or sys.stderr may be used.
"""
def __init__(self, stream=None):
"""
Initialize the handler.
If stream is not specified, sys.stderr is used.
"""
Handler.__init__(self)
if stream is None:
stream = sys.stderr
self.stream = stream
```
|
Apparently the default is stderr.
A quick check: using a minimal example
```py
import logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
logger.info("test")
```
and running it with `python test.py 1> /tmp/log_stdout 2> /tmp/log_stderr` results in an empty stdout file, but a non-empty stderr file.
|
9,222,129
|
I have three python functions:
```
def decorator_function(func):
    def wrapper(..):
        return func(*args, **kwargs)
    return wrapper

def plain_func(...):
    ...

@decorator_function
def wrapped_func(....):
    ...
```
inside a module A.
Now I want to get all the functions inside this module A, for which I do:
```
for fname, func in inspect.getmembers(A, inspect.isfunction):
# My code
```
The problem here is that the value of func is not what I want it to be.
It would be decorator\_function, plain\_func and wrapper (instead of wrapped\_func).
How can I make sure that wrapped\_func is returned instead of wrapper?
|
2012/02/10
|
[
"https://Stackoverflow.com/questions/9222129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334909/"
] |
If all you want is to keep the original function name visible from "the outside" I think that you can do that with
```
@functools.wraps
```
as a decorator to your decorator
Here's the example from the standard docs:
```
>>> from functools import wraps
>>> def my_decorator(f):
... @wraps(f)
... def wrapper(*args, **kwds):
... print('Calling decorated function')
... return f(*args, **kwds)
... return wrapper
...
>>> @my_decorator
... def example():
... """Docstring"""
... print('Called example function')
...
>>> example()
Calling decorated function
Called example function
>>> example.__name__
'example'
>>> example.__doc__
'Docstring'
```
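Relevant to the original question: on Python 3, `functools.wraps` also sets a `__wrapped__` attribute on the wrapper, and `inspect.unwrap` follows it, so the pre-decorated function found via `inspect.getmembers` can be recovered. A small sketch:

```python
import functools
import inspect

def decorator_function(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator_function
def wrapped_func():
    return "original"

# wraps preserves the name, and unwrap follows __wrapped__ back:
print(wrapped_func.__name__)                      # wrapped_func
original = inspect.unwrap(wrapped_func)
print(original() , original is wrapped_func.__wrapped__)
```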
|
You can access the pre-decorated function with:
```
wrapped_func.func_closure[0].cell_contents()
```
For example,
```
def decorator_function(func):
def wrapper(*args, **kwargs):
print('Bar')
return func(*args, **kwargs)
return wrapper
@decorator_function
def wrapped_func():
print('Foo')
wrapped_func.func_closure[0].cell_contents()
```
prints
```
Foo # Note, `Bar` was not also printed
```
But really, if you know you want to access the pre-decorated function, then it would be a whole lot cleaner to define
```
def wrapped_func():
print('Foo')
deco_wrapped_func = decorator_function(wrapped_func)
```
So `wrapped_func` will be the pre-decorated function, and
`deco_wrapped_func` will be the decorated version.
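Note that `func_closure` is the Python 2 spelling; on Python 3 the attribute is `__closure__`. The same trick, sketched:

```python
def decorator_function(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator_function
def wrapped_func():
    return "Foo"

# Python 3 renamed func_closure to __closure__; the cell holds the
# original, pre-decorated function object:
original = wrapped_func.__closure__[0].cell_contents
print(original.__name__, original())  # wrapped_func Foo
```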
|
13,400,876
|
>
> **Possible Duplicate:**
>
> [Python’s most efficient way to choose longest string in list?](https://stackoverflow.com/questions/873327/pythons-most-efficient-way-to-choose-longest-string-in-list)
>
>
>
I have a list L
```
L = [[1,2,3],[5,7],[1,3],[77]]
```
I want to return the length of the longest sublist without needing to loop through them, in this case 3 because [1,2,3] is length 3 and it is the longest of the four sublists. I tried len(max(L)) but this doesn't do what I want. Any way to do this or is a loop my only way?
|
2012/11/15
|
[
"https://Stackoverflow.com/questions/13400876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1538364/"
] |
`max(L,key=len)` will give you the object with the longest length (`[1,2,3]` in your example) -- To actually get the length (if that's all you care about), you can do `len(max(L,key=len))` which is a bit ugly -- I'd break it up onto 2 lines. Or you can use the version supplied by ecatamur.
All of these answers have loops -- in my case, the loops are *implicit*, which usually means they'll be executed in optimized native machine code. If you think about it, how could you know which element is the longest without looking at each one?
---
Finally, note that `key=function` isn't a feature that is specific to `max`. A lot of the python builtins (`max`,`min`,`sorted`,`itertools.groupby`,...) use this particular keyword argument. It's definitely worth investing a little time to understand how it works and what it typically does.
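A quick check of both approaches on the list from the question:

```python
L = [[1, 2, 3], [5, 7], [1, 3], [77]]

longest = max(L, key=len)          # the longest sublist itself
print(longest, len(longest))       # [1, 2, 3] 3
print(max(len(l) for l in L))      # 3 -- length only, via a generator expression
```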
|
Try a generator expression:
```
max(len(l) for l in L)
```
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
This is supported from version `@angular/cli@6.1.0-beta.2`.
---
**Update** (November 2019):
**[The `fileReplacements` solution does not work with
Angular CLI 8.0.0 anymore](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237)**
Change `angular.json` to set your production `index.html` instead:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
I'm using Angular 8 and I don't know why it doesn't work on my side.
But I can specify the location of the `index.html` for any build configuration (and it looks like also for `serve`): <https://stackoverflow.com/a/57274333/3473303>
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
This is supported from version `@angular/cli@6.1.0-beta.2`.
---
**Update** (November 2019):
**[The `fileReplacements` solution does not work with
Angular CLI 8.0.0 anymore](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237)**
Change `angular.json` to set your production `index.html` instead:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
In **Angular 8** "fileReplacements" did not work for replacing index.html. This was my solution.
**Current Working Solution**
```
...
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
...
```
***Full Build Configuration***
```
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/bandbutter-angular",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "src/tsconfig.app.json",
"assets": [
"src/favicon.ico",
"src/assets",
"src/manifest.webmanifest"
],
"styles": [
"./animate.min.css",
"src/styles.css"
],
"scripts": [
{
"input": "node_modules/document-register-element/build/document-register-element.js"
}
]
},
"configurations": {
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
],
"serviceWorker": true,
"ngswConfigPath": "ngsw-config.json"
},
"dev": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.dev.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
]
}
}
},
```
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
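The traceback's `missing ), unterminated subpattern` points at the verbose pattern itself: the hundreds group `(CM|CD|D?C{0,3}` is never closed. A minimal sketch of that pattern with the closing parenthesis restored (the tens/ones groups are generalized back to `{0,3}` to match the earlier one-line pattern; that generalization is my assumption):

```python
import re

# Verbose pattern with the previously missing ')' after the hundreds group.
ptn = '''
    ^                   # beginning of the string
    M{0,3}              # thousands - 0 to 3 M's
    (CM|CD|D?C{0,3})    # hundreds - the closing ) was missing here
    (XC|XL|L?X{0,3})    # tens
    (IX|IV|V?I{0,3})    # ones
    $                   # end of the string
'''
print(re.search(ptn, "MMMCDLXXIX", re.VERBOSE))
```

With the group closed, the pattern compiles and `re.search` returns a match object for valid numerals.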
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
This is supported from version `@angular/cli@6.1.0-beta.2`.
---
**Update** (November 2019):
**[The `fileReplacements` solution does not work with
Angular CLI 8.0.0 anymore](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237)**
Change `angular.json` to set your production `index.html` instead:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
As explained [in this github issue](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237), change `angular.json` to set your production `index.html`:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
This is supported from version `@angular/cli@6.1.0-beta.2`.
---
**Update** (November 2019):
**[The `fileReplacements` solution does not work with
Angular CLI 8.0.0 anymore](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237)**
Change `angular.json` to set your production `index.html` instead:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
[](https://i.stack.imgur.com/LXtQG.png)
You need to update your angular.json file for the production configuration; see the image above.
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
In **Angular 8** "fileReplacements" did not work for replacing index.html. This was my solution.
**Current Working Solution**
```
...
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
...
```
***Full Build Configuration***
```
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/bandbutter-angular",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "src/tsconfig.app.json",
"assets": [
"src/favicon.ico",
"src/assets",
"src/manifest.webmanifest"
],
"styles": [
"./animate.min.css",
"src/styles.css"
],
"scripts": [
{
"input": "node_modules/document-register-element/build/document-register-element.js"
}
]
},
"configurations": {
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
],
"serviceWorker": true,
"ngswConfigPath": "ngsw-config.json"
},
"dev": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.dev.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
]
}
}
},
```
|
I'm using Angular 8 and I don't know why it doesn't work on my side.
But I can specify the location for the `index.html` for any build configuration (and looks like also `serve`) <https://stackoverflow.com/a/57274333/3473303>
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
As explained [in this github issue](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237), change `angular.json` to set your production `index.html`:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
I'm using Angular 8 and I don't know why it doesn't work on my side.
But I can specify the location for the `index.html` for any build configuration (and looks like also `serve`) <https://stackoverflow.com/a/57274333/3473303>
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
I'm using Angular 8 and I don't know why it doesn't work on my side.
But I can specify the location for the `index.html` for any build configuration (and looks like also `serve`) <https://stackoverflow.com/a/57274333/3473303>
|
[](https://i.stack.imgur.com/LXtQG.png)
You need to update your angular.json file for the production configuration; see the image above.
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
In **Angular 8** "fileReplacements" did not work for replacing index.html. This was my solution.
**Current Working Solution**
```
...
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
...
```
***Full Build Configuration***
```
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist/bandbutter-angular",
"index": "src/index.html",
"main": "src/main.ts",
"polyfills": "src/polyfills.ts",
"tsConfig": "src/tsconfig.app.json",
"assets": [
"src/favicon.ico",
"src/assets",
"src/manifest.webmanifest"
],
"styles": [
"./animate.min.css",
"src/styles.css"
],
"scripts": [
{
"input": "node_modules/document-register-element/build/document-register-element.js"
}
]
},
"configurations": {
"production": {
"index": {
"input": "src/index.prod.html",
"output": "index.html"
},
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
],
"serviceWorker": true,
"ngswConfigPath": "ngsw-config.json"
},
"dev": {
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.dev.ts"
}
],
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
]
}
}
},
```
|
[](https://i.stack.imgur.com/LXtQG.png)
You need to update your angular.json file for the production configuration; see the image above.
|
50,506,478
|
I am Gaurav and I am learning programming. I was reading about regular expressions in Dive Into Python 3, so I thought to try something myself. I wrote this code in Eclipse, but I got a lot of errors. Can anyone please help me?
```
import re
def add_shtner(add):
return re.sub(r"\bROAD\b","RD",add)
print(add_shtner("100,BROAD ROAD"))
# a code to check valid roman no.
ptn=r"^(M{0,3})(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3}$)"
def romancheck(num):
num=num.upper()
if re.search(ptn,num):
return "VALID"
else:
return "INVALID"
print(romancheck("MMMLXXVIII"))
print(romancheck("MMMLxvviii"))
mul_line_str='''adding third argument
re.VERBOSE in re.search()
will ignore whitespace
and comments'''
print(re.search("re.search()will",mul_line_str,re.VERBOSE))
print(re.search("re.search() will",mul_line_str,re.VERBOSE))
print(re.search("ignore",mul_line_str,re.VERBOSE))
ptn='''
^ #beginning of the string
M{0,3} #thousands-0 to 3 M's
(CM|CD|D?C{0,3} #hundreds
(XC|XL|L?XXX) #tens
(IX|IV|V?III) #ones
$ #end of the string
'''
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
def romanCheck(num):
num=num.upper()
if re.search(ptn,num,re.VERBOSE):
return "VALID"
else:
return "INVALID"
print(romanCheck("mmCLXXXIV"))
print(romanCheck("MMMCCLXXXiv"))
```
I wrote this code and ran it, but I got this:
```
100,BROAD RD
VALID
INVALID
None
None
<_sre.SRE_Match object; span=(120, 126), match='ignore'>
Traceback (most recent call last):
File "G:\pydev\xyz\rts\regular_expressions.py", line 46, in <module>
print(re.search(ptn,"MMMCDLXXIX",re.VERBOSE))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 182, in search
return _compile(pattern, flags).search(string)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\re.py", line 301, in _compile
p = sre_compile.compile(pattern, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_compile.py", line 562, in compile
p = sre_parse.parse(p, flags)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 856, in parse
p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 416, in _parse_sub
not nested and not items))
File "C:\Users\Owner\AppData\Local\Programs\Python\Python36\lib\sre_parse.py", line 768, in _parse
source.tell() - start)
sre_constants.error: missing ), unterminated subpattern at position 113 (line 4, column 6)
```
What are these errors? Can anyone help me?
I have understood all the output, but I am not able to understand these errors.
|
2018/05/24
|
[
"https://Stackoverflow.com/questions/50506478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5925439/"
] |
As explained [in this github issue](https://github.com/angular/angular-cli/issues/14599#issuecomment-527131237), change `angular.json` to set your production `index.html`:
```
"architect":{
"build": {
"options": {
"index": "src/index.html", //non-prod
},
"configurations": {
"production":{
"index": { //prod index replacement
"input": "src/index.prod.html",
"output": "index.html
}
}
}
}
```
|
[](https://i.stack.imgur.com/LXtQG.png)
You need to update your angular.json file for the production configuration; see the image above.
|
63,997,745
|
```py
string = "This is a test string. It has 44 characters." #Line1
for i in range(len(string) // 10): #Line2
result= string[10 * i:10 * i + 10] #Line3
print(result) #Line4
```
I want to understand the above code so that I can achieve the same thing in C#.
According to my understanding, in Line2 `len(string)` counts the length of the above string, which is 44; dividing by 10 with `//` (integer division) gives 4, and [range](https://www.w3schools.com/python/trypython.asp?filename=demo_ref_range)(4) returns 0, 1, 2, 3, so the `for` loop runs 4 times and prints `result`.
I confirmed how Line3 works by adding the below statements to the Python code:
```py
print(string[0:10])
print(string[10:20])
print(string[20:30])
print(string[30:40])
```
The output of both was:
```
This is a
test strin
g. It has
44 charact
```
**I tried the below code in C# to achieve the same, but it didn't print anything:**
```cs
string str = "This is a test string. It has 44 characters.";
foreach (int i in Enumerable.Range(0, str.Length / 10))
{
string result = str[(10*i)..(10*i+10)];
Console.WriteLine(result);
}
```
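The slicing logic described above can be checked directly in Python before porting it; each iteration takes the ten characters starting at offset `10 * i`:

```python
string = "This is a test string. It has 44 characters."

# Collect the four 10-character chunks that the loop prints.
chunks = [string[10 * i:10 * i + 10] for i in range(len(string) // 10)]
for chunk in chunks:
    print(chunk)
```

Note that `len(string) // 10` is 4, so the trailing `"ers."` (characters 40-43) is never printed; a C# port needs the same integer division and the same half-open bounds.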
|
2020/09/21
|
[
"https://Stackoverflow.com/questions/63997745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14311349/"
] |
You can do:
```
colSums(df) != 0
A B C
TRUE TRUE FALSE
```
|
Maybe `apply()` can be useful:
```
#Data
df = data.frame("A" = c("TRUE","FALSE","FALSE","FALSE"),
"B" = c("FALSE","TRUE","TRUE","FALSE"),
"C" = c("FALSE","FALSE","FALSE","FALSE"))
#Apply
apply(df,2,function(x) any(x=='TRUE'))
```
Output:
```
A B C
TRUE TRUE FALSE
```
Or setting logical values:
```
#Data 2
df = data.frame("A" = c(TRUE,FALSE,FALSE,FALSE),
"B" = c(FALSE,TRUE,TRUE,FALSE),
"C" = c(FALSE,FALSE,FALSE,FALSE))
#Apply
apply(df,2,function(x) any(x==TRUE))
```
Output:
```
A B C
TRUE TRUE FALSE
```
|
63,997,745
|
```py
string = "This is a test string. It has 44 characters." #Line1
for i in range(len(string) // 10): #Line2
result= string[10 * i:10 * i + 10] #Line3
print(result) #Line4
```
I want to understand the above code so that I can achieve the same thing in C#.
According to my understanding, in Line2 `len(string)` counts the length of the above string, which is 44; dividing by 10 with `//` (integer division) gives 4, and [range](https://www.w3schools.com/python/trypython.asp?filename=demo_ref_range)(4) returns 0, 1, 2, 3, so the `for` loop runs 4 times and prints `result`.
I confirmed how Line3 works by adding the below statements to the Python code:
```py
print(string[0:10])
print(string[10:20])
print(string[20:30])
print(string[30:40])
```
The output of both was:
```
This is a
test strin
g. It has
44 charact
```
**I tried the below code in C# to achieve the same, but it didn't print anything:**
```cs
string str = "This is a test string. It has 44 characters.";
foreach (int i in Enumerable.Range(0, str.Length / 10))
{
string result = str[(10*i)..(10*i+10)];
Console.WriteLine(result);
}
```
|
2020/09/21
|
[
"https://Stackoverflow.com/questions/63997745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14311349/"
] |
You can do:
```
colSums(df) != 0
A B C
TRUE TRUE FALSE
```
|
Using `dplyr`
```
library(dplyr)
df %>%
summarise(across(everything(), any))
# A B C
#1 TRUE TRUE FALSE
```
|
63,997,745
|
```py
string = "This is a test string. It has 44 characters." #Line1
for i in range(len(string) // 10): #Line2
result= string[10 * i:10 * i + 10] #Line3
print(result) #Line4
```
I want to understand the above code so that I can achieve the same thing in C#.
According to my understanding, in Line2 `len(string)` counts the length of the above string, which is 44; dividing by 10 with `//` (integer division) gives 4, and [range](https://www.w3schools.com/python/trypython.asp?filename=demo_ref_range)(4) returns 0, 1, 2, 3, so the `for` loop runs 4 times and prints `result`.
I confirmed how Line3 works by adding the below statements to the Python code:
```py
print(string[0:10])
print(string[10:20])
print(string[20:30])
print(string[30:40])
```
The output of both was:
```
This is a
test strin
g. It has
44 charact
```
**I tried the below code in C# to achieve the same, but it didn't print anything:**
```cs
string str = "This is a test string. It has 44 characters.";
foreach (int i in Enumerable.Range(0, str.Length / 10))
{
string result = str[(10*i)..(10*i+10)];
Console.WriteLine(result);
}
```
|
2020/09/21
|
[
"https://Stackoverflow.com/questions/63997745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14311349/"
] |
Maybe `apply()` can be useful:
```
#Data
df = data.frame("A" = c("TRUE","FALSE","FALSE","FALSE"),
"B" = c("FALSE","TRUE","TRUE","FALSE"),
"C" = c("FALSE","FALSE","FALSE","FALSE"))
#Apply
apply(df,2,function(x) any(x=='TRUE'))
```
Output:
```
A B C
TRUE TRUE FALSE
```
Or setting logical values:
```
#Data 2
df = data.frame("A" = c(TRUE,FALSE,FALSE,FALSE),
"B" = c(FALSE,TRUE,TRUE,FALSE),
"C" = c(FALSE,FALSE,FALSE,FALSE))
#Apply
apply(df,2,function(x) any(x==TRUE))
```
Output:
```
A B C
TRUE TRUE FALSE
```
|
Using `dplyr`
```
library(dplyr)
df %>%
summarise(across(everything(), any))
# A B C
#1 TRUE TRUE FALSE
```
|
1,621,521
|
Is there a program which I can run like this:
```
py2py.py < orig.py > smaller.py
```
Where orig.py contains python source code with comments and doc strings, and smaller.py contains identical, runnable source code but without the comments and doc strings?
Code which originally looked like this:
```
#/usr/bin/python
"""Do something
blah blah...
"""
# Beware the frubnitz!
def foo(it):
"""Foo it!"""
print it # hmm?
```
Would then look like this:
```
def foo(it):
print it
```
|
2009/10/25
|
[
"https://Stackoverflow.com/questions/1621521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/196255/"
] |
[This Python minifier](https://pypi.python.org/pypi/pyminifier) looks like it does what you need.
|
I recommend [minipy](https://github.com/gareth-rees/minipy). The most compelling reason is that it does proper analysis of the source code abstract syntax tree so the minified code is much more accurate. I've found that the more well known [pyminifier](https://pypi.python.org/pypi/pyminifier) tends to generate code with undefined symbol errors, misinterpreted tuples, etc. I also got a few percent better compression results with minipy. A minor benefit of minipy is that it's less than half the code size of pyminifier. It's also easier to manage and integrate into a build pipeline because it's a single standalone python file.
|
29,853,907
|
I was working with django-1.8.1 and everything was good, but when I tried again to run my server with the command below, I got some errors.
Command: `python manage.py runserver`
Errors that appeared on the command line:
```
Traceback (most recent call last):
  File "C:\Python34\lib\wsgiref\handlers.py", line 137, in run
    self.result = application(self.environ, self.start_response)
  File "C:\Python34\lib\site-packages\django\contrib\staticfiles\handlers.py", line 63, in __call__
    return self.application(environ, start_response)
  File "C:\Python34\lib\site-packages\django\core\handlers\wsgi.py", line 170, in __call__
    self.load_middleware()
  File "C:\Python34\lib\site-packages\django\core\handlers\base.py", line 50, in load_middleware
    mw_class = import_string(middleware_path)
  File "C:\Python34\lib\site-packages\django\utils\module_loading.py", line 24, in import_string
    six.reraise(ImportError, ImportError(msg), sys.exc_info()[2])
  File "C:\Python34\lib\site-packages\django\utils\six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "C:\Python34\lib\site-packages\django\utils\module_loading.py", line 21, in import_string
    module_path, class_name = dotted_path.rsplit('.', 1)
ImportError: polls doesn't look like a module path
[24/Apr/2015 21:50:25]"GET / HTTP/1.1" 500 59
```
Error that appeared in the browser:
>
> A server error occurred. Please contact the administrator.
>
>
>
Also, I searched the web for this issue and I found something but they were not working for me.
This is my settings.py:
```
"""
Django settings for havij project.
Generated by 'django-admin startproject' using Django 1.8.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '9va4(dd_pf#!(2-efc-ipiz@7fgb8y!d(=5gdmie0cces*9lih'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls' ,
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'polls',
)
ROOT_URLCONF = 'havij.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'havij.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
```
And here is the urls.py :
```
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
# Examples:
# url(r'^$', 'havij.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
    url(r'^admin/', include(admin.site.urls)),
]
```
Thanks for your answers.
|
2015/04/24
|
[
"https://Stackoverflow.com/questions/29853907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4701816/"
] |
Looks like a mixed-up solution to me.
Typically `TcpListener listener = new TcpListener(8888)` waits for a connection on the given port.
Then, when it accepts a connection from a client, it establishes the connection on a different socket, `Socket socket = listener.AcceptSocket()`, so that the listening port remains available for other connections (other clients).
Then we can read what the connected client sent from the stream: `Stream stream = new NetworkStream(socket)`.
`TcpClient` is used to connect to such a server, so it should be used in the client application, not the server one.
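The same listener-versus-connection split is easy to see with Python's standard `socket` module (a sketch of the concept, not the C# API above): the listening socket only accepts, and each accepted client talks over its own new socket.

```python
import socket
import threading

def run_server(ready, out):
    # The listening socket: it only accepts connections.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
    listener.listen()
    out.append(listener.getsockname()[1])   # publish the chosen port
    ready.set()
    conn, _addr = listener.accept()         # a NEW socket for this one client
    conn.sendall(b"hello")
    conn.close()
    listener.close()                        # until here, it could keep accepting

ready = threading.Event()
out = []
threading.Thread(target=run_server, args=(ready, out)).start()
ready.wait()

# The client side (the role TcpClient plays in the C# world).
client = socket.create_connection(("127.0.0.1", out[0]))
data = client.recv(5)
client.close()
print(data)  # b'hello'
```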
|
The term client has two definitions. At the application level you have a client and server application. The client is the master and the server is the slave. At the socket level, both the client application and server application have a client (also called a socket).
The server socket listens at the loopback address 127.0.0.1 (or IPAny), while the client socket connects to the server's IP address.
|
16,251,016
|
Hi all, I want a web-based GUI testing tool. I found that dogtail is written in Python, but I did not find any good tutorials or examples to move further. Please guide me: is dogtail a good fit, or is there something better in Python? If there is, please share docs and examples.
My requirement:
A DVR continuously shows live video in a tile layout (4 x 4), and the GUI is web-based (Mozilla). I should be able to swap videos, check the log, and compare the actual result with the expected one.
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16251016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2017556/"
] |
[Selenium](https://pypi.python.org/pypi/selenium) is designed exactly for this, it allows you to control the browser in Python, and check if things are as expected (e.g check if a specific element exists, submit a form etc)
There's [some more examples in the documentation](http://selenium-python.readthedocs.org/en/latest/getting-started.html)
[Project Sikuli](http://www.sikuli.org/) is a similar tool, but is more general than just web-browsers
|
Selenium provides a Python interface rather than just recording your mouse movements; see <http://selenium-python.readthedocs.org/en/latest/api.html>
If you need to check your video frames, you can record them locally and OCR the frames, looking for some expected text or timecode.
|
16,251,016
|
Hi all, I want a web-based GUI testing tool. I found that dogtail is written in Python, but I did not find any good tutorials or examples to move further. Please guide me: is dogtail a good fit, or is there something better in Python? If there is, please share docs and examples.
My requirement:
A DVR continuously shows live video in a tile layout (4 x 4), and the GUI is web-based (Mozilla). I should be able to swap videos, check the log, and compare the actual result with the expected one.
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16251016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2017556/"
] |
Selenium provides a Python interface rather than just recording your mouse movements; see <http://selenium-python.readthedocs.org/en/latest/api.html>
If you need to check your video frames, you can record them locally and OCR the frames, looking for some expected text or timecode.
|
For simple form-based UI testing, I have created a framework using Python/Selenium/PhantomJS, although it can do complex stuff too. I am yet to document it. (If you don't need to run Firefox, you don't need to install PhantomJS.)
<https://github.com/manav148/PyUIT>
|
16,251,016
|
Hi all, I want a web-based GUI testing tool. I found that dogtail is written in Python, but I did not find any good tutorials or examples to move further. Please guide me: is dogtail a good fit, or is there something better in Python? If there is, please share docs and examples.
My requirement:
A DVR continuously shows live video in a tile layout (4 x 4), and the GUI is web-based (Mozilla). I should be able to swap videos, check the log, and compare the actual result with the expected one.
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16251016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2017556/"
] |
[Selenium](https://pypi.python.org/pypi/selenium) is designed exactly for this, it allows you to control the browser in Python, and check if things are as expected (e.g check if a specific element exists, submit a form etc)
There's [some more examples in the documentation](http://selenium-python.readthedocs.org/en/latest/getting-started.html)
[Project Sikuli](http://www.sikuli.org/) is a similar tool, but is more general than just web-browsers
|
For simple form-based UI testing, I have created a framework using Python/Selenium/PhantomJS, although it can do complex stuff too. I am yet to document it. (If you don't need to run Firefox, you don't need to install PhantomJS.)
<https://github.com/manav148/PyUIT>
|
52,928,809
|
Hi, I am a beginner in Python coding!
This is my code:
```
while True:
    try:
        x = raw_input("Please enter a word: ")
        break
    except ValueError:
        print("Sorry it is not a word. try again")
```
The main aim of this code is to check the input: if the input is a string, that's OK, but if the input is an integer it should be an error. My problem is that the code accepts input in integer format too, and I don't get the error message. Can you help me find the mistake?
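For what it's worth, the reason the `except` branch never runs is that `raw_input` (and `input` in Python 3) always returns a string, so typing `42` produces the string `"42"` and no `ValueError` is raised. A sketch of explicit validation, using `str.isalpha` as one possible definition of "a word":

```python
def read_word(text):
    """Return text if it is alphabetic; raise ValueError otherwise."""
    if not text.isalpha():
        raise ValueError("not a word")
    return text

# In the real script, text would come from input("Please enter a word: ")
print(read_word("hello"))   # hello
try:
    read_word("42")         # digits are rejected explicitly
except ValueError:
    print("Sorry it is not a word. try again")
```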
|
2018/10/22
|
[
"https://Stackoverflow.com/questions/52928809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10540371/"
] |
Try using KVO (key-value observing); it works for me:
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [[AVAudioSession sharedInstance] setActive:YES error:nil];
    [[AVAudioSession sharedInstance] addObserver:self forKeyPath:@"outputVolume" options:0 context:nil];
    return YES;
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    NSLog(@"Volume changed");
}
```
Note that you need the `[[AVAudioSession sharedInstance] setActive:YES error:nil];` line. If it is omitted, you will not get notifications.
Also, import AVFoundation.
As stated here:
<https://developer.apple.com/documentation/avfoundation/avaudiosession/1616533-outputvolume?language=objc>
**"You can observe changes to the value of this property by using Key-value observing."**
|
Apparently, my code worked OK. The problem was with my laptop: it had some volume issue, so when I was changing the volume on the simulator, the volume didn't really change. When I switched to a real device, everything worked properly.
|
57,745,554
|
I am trying to write a script in Python so I can find, in one second, the COM number of the USB serial adapter plugged into my laptop.
What I need is to isolate the COMx port so I can display the result and open PuTTY with that specific port. Can you help me with that?
Until now I have written a script in batch/PowerShell and I am getting this information, but I haven't been able to separate the text of the COMx port so I can call PuTTY with the serial parameter.
I have also been able to find the port via Python, but I can't isolate it from the string.
```
import re   # Used for regular expressions (unused)
import os   # To check that the paths defined in the config file exist (unused)
import sys  # To leave the script early (unused)
import numpy as np
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
print('Type of Devs: ', type(devs))

myarray = ([])
myarray = np.array(devs)
print('Type of thing: ', type(myarray))

match = '<USB Serial Port (COM6)>'  # custom match; the ideal would be "USB Serial Port"
i = 0
#print(myarray, '\n')
while i != len(devs):
    if match == myarray[i]:
        print('Found it!')
        break
    print('array: ', i, " : ", myarray[i])
    i = i + 1

print('array 49: ', myarray[49])  # here I was checking the "element" inside the array
print('match : ', match)          # and what I submitted
print('end')
```
I was expecting `if match == myarray[i]` to match the two elements, but for some reason it doesn't. It's telling me that the two are not the same.
Thank you for any help in advance!
=== UPDATE ===
Full script can be found here
<https://github.com/elessargr/k9-serial>
|
2019/09/01
|
[
"https://Stackoverflow.com/questions/57745554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4964403/"
] |
This is a follow-up answer from @MacrosG.
I tried a minimal example with properties from Device:
```
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
for d in devs:
    if "USB" in d.description:
        print(d.description)
```
|
If Python says the strings are not the same I dare say it's quite likely they are not.
You can compare with:
```
if "USB Serial Port" in devs[i]:
```
Then you should be able to find not a complete letter by letter match but one that contains a USB port.
There is no need to use numpy, `devs` is already a list and hence iterable.
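A runnable sketch of both points, using hypothetical device strings in place of the real `devs` list: substring matching finds the device, and a regular expression then pulls out the COM number.

```python
import re

# Hypothetical stand-ins for str(dev) values from infi.devicemanager.
devs = ["<Intel(R) USB 3.0 Root Hub>", "<USB Serial Port (COM6)>"]

com_port = None
for dev in devs:
    if "USB Serial Port" in dev:             # substring match, not equality
        m = re.search(r"\(COM(\d+)\)", dev)  # extract the COM number
        if m:
            com_port = "COM" + m.group(1)
            break

print(com_port)  # COM6
```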
|
57,745,554
|
I am trying to write a script in Python so I can find, in one second, the COM number of the USB serial adapter plugged into my laptop.
What I need is to isolate the COMx port so I can display the result and open PuTTY with that specific port. Can you help me with that?
Until now I have written a script in batch/PowerShell and I am getting this information, but I haven't been able to separate the text of the COMx port so I can call PuTTY with the serial parameter.
I have also been able to find the port via Python, but I can't isolate it from the string.
```
import re   # Used for regular expressions (unused)
import os   # To check that the paths defined in the config file exist (unused)
import sys  # To leave the script early (unused)
import numpy as np
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
print('Type of Devs: ', type(devs))

myarray = ([])
myarray = np.array(devs)
print('Type of thing: ', type(myarray))

match = '<USB Serial Port (COM6)>'  # custom match; the ideal would be "USB Serial Port"
i = 0
#print(myarray, '\n')
while i != len(devs):
    if match == myarray[i]:
        print('Found it!')
        break
    print('array: ', i, " : ", myarray[i])
    i = i + 1

print('array 49: ', myarray[49])  # here I was checking the "element" inside the array
print('match : ', match)          # and what I submitted
print('end')
```
I was expecting `if match == myarray[i]` to match the two elements, but for some reason it doesn't. It's telling me that the two are not the same.
Thank you for any help in advance!
=== UPDATE ===
Full script can be found here
<https://github.com/elessargr/k9-serial>
|
2019/09/01
|
[
"https://Stackoverflow.com/questions/57745554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4964403/"
] |
This is a follow-up answer from @MacrosG.
I tried a minimal example with properties from Device:
```
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
for d in devs:
    if "USB" in d.description:
        print(d.description)
```
|
If you want to do this with regular expressions:
```
def main():
    from infi.devicemanager import DeviceManager
    import re

    device_manager = DeviceManager()
    device_manager.root.rescan()

    pattern = r"USB Serial Port \(COM(\d+)\)"

    for device in device_manager.all_devices:
        try:
            match = re.fullmatch(pattern, device.friendly_name)
        except KeyError:
            continue
        if match is None:
            continue
        com_number = match.group(1)
        print(f"Found device \"{device.friendly_name}\" -> com_number: {com_number}")

    return 0


if __name__ == "__main__":
    import sys
    sys.exit(main())
```
Output:
```
Found device "USB Serial Port (COM3)" -> com_number: 3
```
|
57,745,554
|
I am trying to write a script in Python so I can find, in one second, the COM number of the USB serial adapter plugged into my laptop.
What I need is to isolate the COMx port so I can display the result and open PuTTY with that specific port. Can you help me with that?
Until now I have written a script in batch/PowerShell and I am getting this information, but I haven't been able to separate the text of the COMx port so I can call PuTTY with the serial parameter.
I have also been able to find the port via Python, but I can't isolate it from the string.
```
import re   # Used for regular expressions (unused)
import os   # To check that the paths defined in the config file exist (unused)
import sys  # To leave the script early (unused)
import numpy as np
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
print('Type of Devs: ', type(devs))

myarray = ([])
myarray = np.array(devs)
print('Type of thing: ', type(myarray))

match = '<USB Serial Port (COM6)>'  # custom match; the ideal would be "USB Serial Port"
i = 0
#print(myarray, '\n')
while i != len(devs):
    if match == myarray[i]:
        print('Found it!')
        break
    print('array: ', i, " : ", myarray[i])
    i = i + 1

print('array 49: ', myarray[49])  # here I was checking the "element" inside the array
print('match : ', match)          # and what I submitted
print('end')
```
I was expecting `if match == myarray[i]` to match the two elements, but for some reason it doesn't. It's telling me that the two are not the same.
Thank you for any help in advance!
=== UPDATE ===
Full script can be found here
<https://github.com/elessargr/k9-serial>
|
2019/09/01
|
[
"https://Stackoverflow.com/questions/57745554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4964403/"
] |
This is a follow-up answer from @MacrosG.
I tried a minimal example with properties from Device:
```
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devs = dm.all_devices

print('Size of Devs: ', len(devs))
for d in devs:
    if "USB" in d.description:
        print(d.description)
```
|
Huge thanks to everyone, and especially to bigdataolddriver, since I went with his solution.
One last thing!
```
for d in devs:
    if "USB Serial Port" in d.description:
        str = d.__str__()
        COMport = str.split('(', 1)[1].split(')')[0]
        i = 1
        break
    else:
        i = 0

if i == 1:
    print("I found it!")
    print(d.description, "found on : ", COMport)
    subprocess.Popen(r'"C:\Tools\putty.exe" -serial ', COMport)
elif i == 0:
    print("USB Serial Not found \nPlease check physical connection.")
else:
    print("Error")
```
Any ideas how to pass the COMport to putty.exe as a parameter?
===== UPDATE =====
```
if i == 1:
    print("I found it!")
    print(d.description, "found on : ", COMport)
    command = '"C:\MyTools\putty.exe" -serial ' + COMport
    #print(command)
    subprocess.Popen(command)
```
Thank you everyone!
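One note on the final `subprocess` call: passing the program and its arguments as a list sidesteps quoting and concatenation problems entirely. A sketch (the PuTTY path is hypothetical, and the `Popen` call is commented out since PuTTY may not be installed):

```python
import subprocess

com_port = "COM6"                   # value found by the device scan
putty_path = r"C:\Tools\putty.exe"  # hypothetical install location

# Each argument is its own list element; no manual quoting needed.
command = [putty_path, "-serial", com_port]
# subprocess.Popen(command)        # disabled: PuTTY may not be present

print(command)
```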
|
55,007,820
|
I have the following problem: I'm making a script for an oVirt server to automatically delete virtual machines, which includes unregistering them from the DNS. But for some very specific virtual machines there are multiple FQDNs for one IP address, for example:
```
myfirstfqdn.com IN A 10.10.10.10
mysecondfqdn.com IN A 10.10.10.10
```
I've tried to do it with socket in Python, but it returns only one answer. I've also tried dnspython, but I failed.
The goal is to count the number of type A records on the DNS server.
Does anyone have an idea how to do something like this?
|
2019/03/05
|
[
"https://Stackoverflow.com/questions/55007820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8919169/"
] |
That's outright impossible. If I am in the right mood, I could add an entry to my DNS server pointing to your IP address. Generally, you cannot find it out (except for some hints in some protocols like http(s)).
|
Given a zone file in the above format, you could do something like...
```
from collections import defaultdict

zone_file = """myfirstfqdn.com IN A 10.10.10.10
mysecondfqdn.com IN A 10.10.10.10"""

# Build mapping for lookups
ip_fqdn_mapping = defaultdict(list)
for record in zone_file.split("\n"):
    fqdn, record_class, record_type, ip_address = record.split()
    ip_fqdn_mapping[ip_address].append(fqdn)

# Lookup
ip_address_to_lookup = "10.10.10.10"
fqdns = ip_fqdn_mapping[ip_address_to_lookup]
print(fqdns)
```
Note: Using socket can be done like so - [Python lookup hostname from IP with 1 second timeout](https://stackoverflow.com/questions/2575760/python-lookup-hostname-from-ip-with-1-second-timeout)
However, this requires that the DNS server you are querying has correctly configured PTR reverse records.
<https://www.cloudns.net/wiki/article/40/>
|
52,588,535
|
I was trying to pass some arguments via PyCharm when I noticed that it's behaving differently than my console. When I pass arguments with no spaces in between, all works fine, but when my arguments contain spaces, the behavior diverges.
```
def main():
"""
Main function
"""
for i, arg in enumerate(sys.argv):
print('Arg#{}: {}'.format(i, arg))
```
If I run the same function:
```
python3 argumnents_tester.py 'argument 1' argument2
```
Run in **PyCharm**:
>
> Arg#0: /home/gorfanidis/PycharmProjects/test1/argparse\_test.py
>
> Arg#1: 'argument
>
> Arg#2: 1'
>
> Arg#3: argument2
>
>
>
Run in **Console**:
>
> Arg#0: argparse\_test.py
>
> Arg#1: argument 1
>
> Arg#2: argument2
>
>
>
So, PyCharm tends to ignore quotes altogether and splits the arguments using the spaces regardless of any quotes. Also, arguments with quotes are treated differently than the same arguments without quotes.
The question is why this is happening and, at a practical level, how am I supposed to pass an argument that contains spaces using PyCharm, for example?
I am using Ubuntu 16.04 by the way.
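For reference, the splitting the console applies follows POSIX shell rules, which the standard library's `shlex` module reproduces (a sketch of the shell's behavior, not of PyCharm's internals):

```python
import shlex

# How a POSIX shell splits the command line from the question:
# the single quotes group "argument 1" into one argument.
args = shlex.split("'argument 1' argument2")
print(args)  # ['argument 1', 'argument2']
```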
|
2018/10/01
|
[
"https://Stackoverflow.com/questions/52588535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3584765/"
] |
You can use [`String.prototype.trim()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim):
```js
const title1 = "DisneyLand-Paris";
const title2 = "DisneyLand-Paris ";
const trimmed = title2.trim();
console.log(title1 === title2); // false
console.log(title1 === trimmed); // true
```
So you could do:
```js
// Trim and compare both strings, and report the result.
function compareTitles(title1, title2) {
if (title1.trim() === title2.trim()) {
console.log("It's the same titles!");
}
}
compareTitles("DisneyLand-Paris", "DisneyLand-Paris "); // It's the same titles!
compareTitles("DisneyLand-Paris ", "DisneyLand-Paris"); // It's the same titles!
```
|
You need to use the `trim()` method:
```
const string1 = "DisneyLand-Paris";
const string2 = "DisneyLand-Paris ";
let string3 = "";

if (string1 === string2.trim()) {
  string3 = `it's the same titles`; // template literal avoids escaping the apostrophe
  console.log(string3);
}
```
|
11,329,212
|
I have started to look into Python and am trying to grasp new things in little chunks. The latest goal I set for myself was to read a tab-separated file of floats into memory, compare values in the list, and print the values if the difference was as large as the user specified. I have written the following code for it so far:
```
#! /usr/bin/env python

value = raw_input('Please enter a mass difference:')

fh = open("values")
x = []
for line in fh.readlines():
    y = [float for float in line.split()]
    x.append(y)
fh.close()

for i in range(0, len(x) - 1):
    for j in range(i, len(x)):
        if x[j][0] - x[i][0] == value:
            print x[i][0], x[j][0]
```
The interpreter complains that I am not allowed to subtract strings from strings (logically), but my question is: why are they strings? Shouldn't the nested list be a list of floats, since I wrote `float for float`?
Literal error:
```
TypeError: unsupported operand type(s) for -: 'str' and 'str'
```
I would greatly appreciate it if someone could tell me where my reasoning goes wrong ;)
|
2012/07/04
|
[
"https://Stackoverflow.com/questions/11329212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1093485/"
] |
Try this in place of your list comprehension:
```
y = [float(i) for i in line.split()]
```
*Explanation*:
The data you read from the file are strings, to convert them to other types you need to cast them. So in your case you want to cast your values to float via `float()` .. which you tried, but not quite correctly/successfully. This should give you the results you were looking for.
If you have other values to convert, this syntax will work:
```
float_val = float(string_val)
```
assuming that `string_val` contains valid characters for a float, it will convert, otherwise you'll get an exception.
```
>>> float('3.5')
3.5
>>> float('apple')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): apple
```
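Putting both fixes together (casting each token *and* casting the user's input before comparing), here is a self-contained sketch that swaps the `open("values")` call for an in-memory file with made-up numbers:

```python
import io

# In-memory stand-in for open("values"); same parsing logic applies.
fh = io.StringIO("1.0\t2.5\n3.0\t4.5\n6.0\t1.0\n")

value = float("3.0")   # cast the user's input once, up front
rows = [[float(tok) for tok in line.split()] for line in fh]

pairs = []
for i in range(len(rows) - 1):
    for j in range(i + 1, len(rows)):
        if rows[j][0] - rows[i][0] == value:
            pairs.append((rows[i][0], rows[j][0]))

print(pairs)  # [(3.0, 6.0)]
```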
|
The list comprehension isn't doing what you think it's doing. It's simply assigning each string to the variable `float`, and returning it. Instead you actually want to use another name and call float on it:
```
y = [float(x) for x in line.split()]
```
|
11,329,212
|
I have started to look into Python and am trying to grasp new things in little chunks. The latest goal I set for myself was to read a tab-separated file of floats into memory, compare values in the list, and print the values if the difference was as large as the user specified. I have written the following code for it so far:
```
#! /usr/bin/env python

value = raw_input('Please enter a mass difference:')

fh = open("values")
x = []
for line in fh.readlines():
    y = [float for float in line.split()]
    x.append(y)
fh.close()

for i in range(0, len(x) - 1):
    for j in range(i, len(x)):
        if x[j][0] - x[i][0] == value:
            print x[i][0], x[j][0]
```
The interpreter complains that I am not allowed to subtract strings from strings (logically), but my question is: why are they strings? Shouldn't the nested list be a list of floats, since I wrote `float for float`?
Literal error:
```
TypeError: unsupported operand type(s) for -: 'str' and 'str'
```
I would greatly appreciate it if someone could tell me where my reasoning goes wrong ;)
|
2012/07/04
|
[
"https://Stackoverflow.com/questions/11329212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1093485/"
] |
Try this in place of your list comprehension:
```
y = [float(i) for i in line.split()]
```
*Explanation*:
The data you read from the file are strings, to convert them to other types you need to cast them. So in your case you want to cast your values to float via `float()` .. which you tried, but not quite correctly/successfully. This should give you the results you were looking for.
If you have other values to convert, this syntax will work:
```
float_val = float(string_val)
```
assuming that `string_val` contains valid characters for a float, it will convert, otherwise you'll get an exception.
```
>>> float('3.5')
3.5
>>> float('apple')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for float(): apple
```
|
**Error 1**: `y = [float(x) for x in line.split()]` or simply `map(float, line.split())`
**Error 2**: `if x[j][0] - x[i][0] == float(value):  # you didn't convert value to a float`
|
46,824,700
|
With Rebol pick I can only get one element:
```
list: [1 2 3 4 5 6 7 8 9]
pick list 3
```
In Python one can get a whole sub-list with
```
list[3:7]
```
|
2017/10/19
|
[
"https://Stackoverflow.com/questions/46824700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/310291/"
] |
* [**AT**](http://www.rebol.com/docs/words/wat.html) can seek a position at a list.
* [**COPY**](http://www.rebol.com/r3/docs/functions/copy.html) will copy from a position to the end of list, by default
* the **/PART** refinement of COPY lets you add a limit to copying
Passing an integer to /PART assumes how many things you want to copy:
```
>> list: [1 2 3 4 5 6 7 8 9]
>> copy/part (at list 3) 5
== [3 4 5 6 7]
```
If you provide a series *position* to be the end, then it will copy *up to* that point, so you'd have to be past it if your range means to be inclusive.
```
>> copy/part (at list 3) (next at list 7)
== [3 4 5 6 7]
```
There have been some proposals for range dialects; I can't find any offhand. Simple code to give an idea:
```
range: func [list [series!] spec [block!] /local start end] [
    if not parse spec [
        set start integer! '.. set end integer!
    ][
        do make error! "Bad range spec, expected e.g. [3 .. 7]"
    ]
    copy/part (at list start) (next at list end)
]
>> list: [1 2 3 4 5 6 7 8 9]
>> range list [3 .. 7]
== [3 4 5 6 7]
```
|
```
>> list: [1 2 3 4 5 6 7 8 9]
== [1 2 3 4 5 6 7 8 9]
>> copy/part skip list 2 5
== [3 4 5 6 7]
```
So, you can skip to the right location in the list, and then copy as many consecutive members as you need.
If you want an equivalent function, you can write your own.
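For comparison with the Python slice in the question: Python indexes from 0 and excludes the end position, so the Rebol `copy/part (at list 3) 5` above corresponds to `list[2:7]`, not `list[3:7]`.

```python
lst = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(lst[2:7])  # [3, 4, 5, 6, 7]  (same elements as copy/part (at list 3) 5)
```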
|