| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
14,970,952
|
I'm kinda new to Python and don't really understand my issue; I'd really appreciate the help. Anyway, this is the line of code:
```
def Banker(warrior):
gold = open(chairs[warrior-1], "strength")
return gold
```
This is the error I got:
```
line 22, in Banker
gold = open(chairs[warrior-1], "strength")
TypeError: 'file' object is unsubscriptable
```
<http://pastebin.com/1wMbaSYY>
|
2013/02/20
|
[
"https://Stackoverflow.com/questions/14970952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089394/"
] |
On a UNIX machine, use the [`pwent`](http://www.kernel.org/doc/man-pages/online/pages/man3/getpwent.3.html) series of functions:
```
#include <stdio.h>
#include <sys/types.h>
#include <pwd.h>

int main() {
    struct passwd *p;
    /* getpwent() walks the user database one entry at a time */
    while ((p = getpwent())) {
        printf("name: %s\n", p->pw_name);
    }
    return 0;
}
```
This will consult the system's authoritative database of users, which may not necessarily be `/etc/passwd`.
|
The users of a machine are listed in /etc/passwd. A good way to filter all 'human' users is to do
```
cat /etc/passwd | grep "/home" | cut -d: -f1
```
as the human users usually have a home directory.
Now, for calling it inside C, you may use popen. Take a look at
```
man popen
```
| 8,426
|
74,618,168
|
I have just started learning Python, and I created this program which asks the user to input two numbers and then adds them together using a simple `if-elif-else` statement. However, the `else` part of the code just does not work if the user types out a number such as six in words instead of digits.
```
num_1 = int(input("Enter the first number: "))
num_2 = int(input("Enter the second number: "))
Total = num_1 + num_2
print("The total is: ",Total)
if num_1 > num_2:
print("num_1 is greater then num_2")
elif num_2 > num_1:
print("num_2 is greater then num_1")
elif num_1 == num_2:
print("Equal")
else:
if num_1 == str:
if num_2 == str:
print("invalid")
```
|
2022/11/29
|
[
"https://Stackoverflow.com/questions/74618168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17851996/"
] |
This should be what you are looking for:
```
try:
num_1 = int(input("Enter the first number: "))
num_2 = int(input("Enter the second number: "))
except ValueError:
print("invalid")
exit()
Total = num_1 + num_2
print("The total is: ", Total)
if num_1 > num_2:
print("num_1 is greater then num_2")
elif num_2 > num_1:
print("num_2 is greater then num_1")
else:
print("Equal")
```
|
In your first two lines you're calling int() on a string in the situation you're describing. This won't work, and your code will stop running here. What you want is probably something called a try-except statement.
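A minimal sketch of that idea, assuming you just want to report bad input and stop (Python spells it try/except rather than try-catch):
```
try:
    num_1 = int(input("Enter the first number: "))
except ValueError:
    # int() raises ValueError for input like "six" that is not a number
    print("invalid")
```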
| 8,431
|
36,862,589
|
I'm attempting to Dockerise a Python application, which depends on OpenCV. I've tried several different ways, but I keep getting... `ImportError: No module named cv2` when I attempt to run the application.
Here's my current Dockerfile.
```
FROM python:2.7
MAINTAINER Ewan Valentine <ewan@theladbible.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Various Python and C/build deps
RUN apt-get update && apt-get install -y \
wget \
build-essential \
cmake \
git \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libav-tools \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv
# Install Open CV - Warning, this takes absolutely forever
RUN cd ~ && git clone https://github.com/Itseez/opencv.git && \
cd opencv && \
git checkout 3.0.0 && \
cd ~ && git clone https://github.com/Itseez/opencv_contrib.git && \
cd opencv_contrib && \
git checkout 3.0.0 && \
cd ~/opencv && mkdir -p build && cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=OFF .. && \
make -j4 && \
make install && \
ldconfig
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
```
And my requirements.txt file
```
Flask==0.8
gunicorn==0.14.2
requests==0.11.1
bs4==0.0.1
nltk==3.2.1
pymysql==0.7.2
xlsxwriter==0.8.5
numpy==1.11
Pillow==3.2.0
cv2==1.0
pytesseract==0.1
```
|
2016/04/26
|
[
"https://Stackoverflow.com/questions/36862589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541609/"
] |
Here's an [image](https://hub.docker.com/r/chennavarri/ubuntu_opencv_python/) that is built on Ubuntu 16.04 with Python2 + Python3 + OpenCV. You can pull it using
`docker pull chennavarri/ubuntu_opencv_python`
Here's the Dockerfile (provided in the same dockerhub repo mentioned above) that will install opencv for both python2 and python3 on Ubuntu 16.04 and also sets the appropriate raw1394 link. Copied from <https://github.com/chennavarri/docker-ubuntu-python-opencv>
```
FROM ubuntu:16.04
MAINTAINER Chenna Varri
RUN apt-get update
RUN apt-get install -y build-essential apt-utils
RUN apt-get install -y cmake git libgtk2.0-dev pkg-config libavcodec-dev \
libavformat-dev libswscale-dev
RUN apt-get update && apt-get install -y python-dev python-numpy \
python3 python3-pip python3-dev libtbb2 libtbb-dev \
libjpeg-dev libjasper-dev libdc1394-22-dev \
python-opencv libopencv-dev libav-tools python-pycurl \
libatlas-base-dev gfortran webp qt5-default libvtk6-dev zlib1g-dev
RUN pip3 install numpy
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
RUN cd ~/ &&\
git clone https://github.com/Itseez/opencv.git &&\
git clone https://github.com/Itseez/opencv_contrib.git &&\
cd opencv && mkdir build && cd build && cmake -DWITH_QT=ON -DWITH_OPENGL=ON -DFORCE_VTK=ON -DWITH_TBB=ON -DWITH_GDAL=ON -DWITH_XINE=ON -DBUILD_EXAMPLES=ON .. && \
make -j4 && make install && ldconfig
# Set the appropriate link
RUN ln /dev/null /dev/raw1394
RUN cd ~/opencv
```
Some additional instructions for people newly starting with Docker:
* In the directory where you put this Dockerfile, build the docker image as `docker build -t ubuntu_cv .`
* Once the image is built, you can check by doing `docker images`
```
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu_cv latest 6210ddd6346b 24 minutes ago 2.192 GB
```
* You can start a docker container as `docker run -t -i ubuntu_cv:latest`
|
If you want to use OpenCV DNN with CUDA, and torch with GPU (optionally), I recommend this:
```
FROM nvidia/cuda:10.2-base-ubuntu18.04
WORKDIR /home
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Minsk
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y \
keyboard-configuration \
nvidia-driver-440\
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
cmake \
g++ \
wget \
build-essential \
cmake \
git \
unzip \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv \
qt5-default \
libvtk6-dev \
zlib1g-dev \
libcudnn7=7.6.5.32-1+cuda10.2 \
libcudnn7-dev=7.6.5.32-1+cuda10.2 \
python3-pip \
python3-venv \
nano
RUN alias python='/usr/bin/python3'
RUN pip3 install numpy
RUN pip3 install torch
#RUN echo ############ && python --version && ##############
# Install Open CV - Warning, this takes absolutely forever
RUN git clone https://github.com/opencv/opencv_contrib && \
cd opencv_contrib && \
git fetch --all --tags && \
git checkout tags/4.3.0 && \
cd .. && \
git clone https://github.com/opencv/opencv.git && \
cd opencv && \
git checkout tags/4.3.0
#RUN pip3 freeze && which python3 && python3 --version
################################################################
#################### OPENCV CPU ################################
#RUN pwd &&\
# cd opencv && \
# pwd &&\
# mkdir build && cd build && \
# pwd &&\
# cmake -DCMAKE_BUILD_TYPE=Release \
# -DENABLE_CXX14=ON \
# -DBUILD_PERF_TESTS=OFF \
# -DOPENCV_GENERATE_PKGCONFIG=ON \
# -DWITH_XINE=ON \
# -DBUILD_TESTS=OFF \
# -DENABLE_PRECOMPILED_HEADERS=OFF \
# -DCMAKE_SKIP_RPATH=ON \
# -DBUILD_WITH_DEBUG_INFO=OFF \
# -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
#
# -Dopencv_dnn_superres=ON /usr/bin/ .. && \
# make -j$(nproc) && \
# make install
################################################################
#################### OPENCV GPU ################################
RUN cd opencv && mkdir build && cd build && \
cmake -DCMAKE_BUILD_TYPE=Release \
-D CMAKE_CXX_COMPILER=/usr/bin/g++ \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=ON \
-D HAVE_opencv_python3=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D CUDA_BIN_PATH=/usr/local/cuda-10.2 \
-D CUDNN_INCLUDE_DIR=/usr/include/cudnn.h \
-D WITH_CUDNN=ON \
-D CUDA_ARCH_BIN=6.1 \
-D OPENCV_DNN_CUDA=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D WITH_GTK=ON \
-D CMAKE_BUILD_TYPE=RELEASE \
-D CUDA_HOST_COMPILER:FILEPATH=/usr/bin/gcc-7 \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D WITH_TBB=ON \
-D WITH_OPENMP=ON \
-D WITH_IPP=ON \
-D BUILD_EXAMPLES=OFF \
-D BUILD_DOCS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_TESTS=OFF \
-D WITH_CSTRIPES=ON \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D CMAKE_INSTALL_PREFIX=/usr/local/ \
-DBUILD_opencv_python3=ON \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
-D PYTHON3_EXECUTABLE=$(which python3) \
-D PYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
-D PYTHON3_LIBRARY=$(python3 -c "from distutils.sysconfig import get_config_var;from os.path import dirname,join ; print(join(dirname(get_config_var('LIBPC')),get_config_var('LDLIBRARY')))") \
-D PYTHON3_NUMPY_INCLUDE_DIRS=$(python3 -c "import numpy; print(numpy.get_include())") \
-D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
-D OPENCV_GENERATE_PKGCONFIG=ON .. \
-Dopencv_dnn_superres=ON /usr/bin/ .. && \
make -j$(nproc) && \
make install
RUN pip3 install opencv/build/python_loader
```
Then you can run:
```
import torch
import os
print('available:', torch.cuda.is_available())
print('devices available', torch.cuda.device_count())
print('device id:',torch.cuda.current_device() )
print('device address', torch.cuda.device(0))
print('gpu model',torch.cuda.get_device_name(0))
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
#Additional Info when using cuda
if device.type == 'cuda':
print(torch.cuda.get_device_name(0))
print('Memory Usage:')
print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB')
print('Cached: ', round(torch.cuda.memory_reserved(0)/1024**3,1), 'GB')
import cv2
print("DNN_BACKEND_CUDA",cv2.dnn.DNN_BACKEND_CUDA)
print("DNN_BACKEND_CUDA",cv2.dnn.DNN_TARGET_CUDA)
```
and you will get something like:
```
Using device: cuda
available: True
devices available 1
device id: 0
device address <torch.cuda.device object at 0x7f5a0a392550>
gpu model GeForce GTX 1050 Ti
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB
DNN_BACKEND_CUDA 5
DNN_TARGET_CUDA 6
```
| 8,432
|
49,768,187
|
When doing some simple calculation from dataframe object (python 3.5, pandas 0.20.1), pandas is not behaving consistently when the calculated result doesn't fit the current numeric type. Why?
Please see code below, creating a dataframe with numeric type-int16 :
```
import pandas as pd
import numpy as np
d = {'col1': [313], 'col2': [5]}
df = pd.DataFrame(data=d,dtype=np.int16)
print(df.dtypes)
#col1 int16
#col2 int16
#dtype: object
df['col1'] *= 1000000
df['col2'] *= 10000
print(df.dtypes)
#col1 int32
#col2 int16
#dtype: object
```
As you can see, since the upper limit of int16 is 32767, the result of both 313*1000000 and 5*10000 would exceed the upper limit. However, it seems like pandas only automatically converted the result of the first calculation to int32 (which makes sense and is ideal for me) but still kept the result of the second calculation as int16 (which made the result weird and not ideal for me).
Is there a way to always make pandas automatically convert the numeric type when needed?
|
2018/04/11
|
[
"https://Stackoverflow.com/questions/49768187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6217667/"
] |
You have to apply the :hover effect in a:hover because you have already applied the background-color to the a element. Try adding this code:
```
.tabs-nav a:hover {
background-color: red;
}
```
|
The code below works for me:
```
.tabs-nav li :hover {
color: white;
background: red;
}
```
If I am not wrong, the space is added to apply the hover to the child of the li, in this case the anchor tag.
| 8,442
|
46,247,732
|
So I am learning Python and am trying to count the number of vowels in a sentence. I figured out how to do it both using the count() function and an iteration, but now I am trying to do it using recursion. When I try the following method I get the error "IndexError: string index out of range". Here is my code.
```
sentence = input(": ")
def count_vowels_recursive(sentence):
total = 0
if sentence[0] == "a" or sentence[0] == "e" or sentence[0] == "i" or sentence[0] == "o" or sentence[0] == "u":
total = total + 1 + count_vowels_recursive(sentence[1:])
else:
total = total + count_vowels_recursive(sentence[1:])
return the_sum
print(count_vowels_recursive(sentence))
```
Here are my previous two solutions.
```
def count_vowels(sentence):
a = sentence.count("a")
b = sentence.count("e")
c = sentence.count("i")
d = sentence.count("o")
e = sentence.count("i")
return (a+b+c+d+e)
def count_vowels_iterative(sentence):
a_ = 0
e_ = 0
i_ = 0
o_ = 0
u_ = 0
for i in range(len(sentence)):
if "a" == sentence[i]:
a_ = a_ + 1
elif "e" == sentence[i]:
e_ = e_ + 1
elif "i" == sentence[i]:
i_ = i_ + 1
elif "o" == sentence[i]:
o_ = o_ + 1
elif "u" == sentence[i]:
u_ = u_ + 1
else:
continue
return (a_ + e_ + i_ + o_ + u_)
```
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46247732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8616651/"
] |
You have no base case. The function will keep recursing until `sentence` is empty, in which case your first if statement will cause that index error.
You should first of all check whether `sentence` is empty, and if so return 0, as in the sketch below.
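A minimal sketch of that fix, keeping the rest of the question's approach:
```
def count_vowels_recursive(sentence):
    # base case: an empty string has no vowels and stops the recursion
    if not sentence:
        return 0
    total = 1 if sentence[0] in "aeiou" else 0
    return total + count_vowels_recursive(sentence[1:])
```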
|
You can shorten things up quite a bit:
```
def count_vowels_recursive(sentence):
# this base case is needed to stop the recursion
if not sentence:
return 0
# otherwise, sentence[0] will raise an exception for the empty string
return (sentence[0] in "aeiou") + count_vowels_recursive(sentence[1:])
# the boolean expression `sentence[0] in "aeiou"` is cast to an int for the addition
```
| 8,445
|
49,490,803
|
I'm working through a python workbook, and I have to turn the following dictionary into a list:
```
lexicon = {
'north': 'direction',
'south': 'direction',
'east': 'direction',
'west': 'direction',
'down': 'direction',
'up': 'direction',
'left': 'direction',
'right': 'direction',
'back': 'direction',
'go': 'verb',
'stop': 'verb',
'kill': 'verb',
'eat': 'verb',
'the': 'stop',
'in': 'stop',
'of': 'stop',
'from': 'stop',
'at': 'stop',
'it': 'stop',
'door': 'noun',
'bear': 'noun',
'princess': 'noun',
'cabinet': 'noun'}
```
But I can't find anything on the internet that's helped me do so. How would I go about turning this into a list? Help is appreciated!
|
2018/03/26
|
[
"https://Stackoverflow.com/questions/49490803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9429075/"
] |
You can use `.keys()` or `.values()`.
```
>>> list(lexicon.keys())
['princess', 'down', 'east', 'north', 'cabinet', 'at', 'right', 'door', 'left', 'up', 'from', 'bear', 'of', 'the', 'south', 'in', 'kill', 'eat', 'back', 'west', 'it', 'go', 'stop']
>>> list(lexicon.values())
['noun', 'direction', 'direction', 'direction', 'noun', 'stop', 'direction', 'noun', 'direction', 'direction', 'stop', 'noun', 'stop', 'stop', 'direction', 'stop', 'verb', 'verb', 'direction', 'direction', 'stop', 'verb', 'verb']
```
You could use `.items()` to get the key-value pairs as a list of tuples:
```
>>> list(lexicon.items())
[('princess', 'noun'), ('down', 'direction'), ('east', 'direction'), ('north', 'direction'), ('cabinet', 'noun'), ('at', 'stop'), ('right', 'direction'), ('door', 'noun'), ('left', 'direction'), ('up', 'direction'), ('from', 'stop'), ('bear', 'noun'), ('of', 'stop'), ('the', 'stop'), ('south', 'direction'), ('in', 'stop'), ('kill', 'verb'), ('eat', 'verb'), ('back', 'direction'), ('west', 'direction'), ('it', 'stop'), ('go', 'verb'), ('stop', 'verb')]
```
|
If you just want values, you can use:
`lexicon.values()`; it will return the values saved against each key.
But if you want to have a list of key-value pairs, then you can use the following:
```
>>lexicon.items()
output :
[('right', 'direction'), ('it', 'stop'), ('down', 'direction'), ('kill', 'verb'), ('at', 'stop'), ('in', 'stop'), ('go', 'verb'), ('door', 'noun'), ('from', 'stop'), ('west', 'direction'), ('eat', 'verb'), ('east', 'direction'), ('north', 'direction'), ('stop', 'verb'), ('bear', 'noun'), ('back', 'direction'), ('cabinet', 'noun'), ('princess', 'noun'), ('of', 'stop'), ('up', 'direction'), ('the', 'stop'), ('south', 'direction'), ('left', 'direction')]
```
Hope this helps.
| 8,447
|
38,471,306
|
As I understand it, in Python, when you pass a variable as a function parameter it is a reference to the original variable. In my implementation, when I try to assign to a variable that I passed into the function, the result is an empty list.
This is my code:
```
#on the main -------------
temp_obj = []
obj = [
{'name':'a1', 'level':0},
{'name':'a2', 'level':0},
{'name':'a3', 'level':1},
{'name':'a4', 'level':1},
{'name':'a5', 'level':2},
{'name':'a6', 'level':2},
]
the_result = myFunction(obj, temp_obj)
print(temp_obj)
#above print would result to an empty list
#this is my problem
#end of main body -------------
def myFunction(obj, new_temp_obj):
inside_list = []
for x in obj[:]:
if x['level'] == 0:
inside_list.append(x)
obj.remove(x) #removing the element that was added to the inside_list
new_temp_obj = obj[:] #copying the remaining element
print(new_temp_obj)
# the above print would result to
#[{'name': 'a3', 'level': 1}, {'name': 'a4', 'level': 1}, {'name': 'a5', 'level': 2}, {'name': 'a6', 'level': 2}]
return inside_list
```
Am I missing something or did I misunderstand the idea of python call by reference?
|
2016/07/20
|
[
"https://Stackoverflow.com/questions/38471306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6099766/"
] |
If all you want to do is switch from design view to code view then use the F7 key. In older versions of VS, F7 would switch back again too but in later versions you use Shift+F7 to switch from code view to design view.
When in design view, you can select the form or a control/component, open the Properties window, click the Events button and then create or select a handler for any event and/or jump to it in code view. When in code view, you can use the drop-down lists at the top to select the form or a control/component or any field declared `WithEvents` and to create and/or jump to the handler for that event.
|
Already resolved. I was able to do it by creating another project and choosing the Windows Forms Application as Visual Basic, not C#.
| 8,448
|
69,528,110
|
I have the following code
```
name = "testyaml"
version = "2.5"
os = "Linux"
sources = [
{
'source': 'news',
'target': 'industry'
},
{
'source': 'testing',
'target': 'computer'
}
]
```
And I want to produce this YAML with Python 3:
```
services:
name: name,
version: version,
operating_system: os,
sources:
-
source: news
target: industry
-
source: testing
target: computer
```
I especially need help with the sources part: how can I add my list of dictionaries there?
|
2021/10/11
|
[
"https://Stackoverflow.com/questions/69528110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/878280/"
] |
```py
import yaml
name = "testyaml"
version = "2.5"
os = "Linux"
sources = [
{"source": "news", "target": "industry"},
{"source": "testing", "target": "computer"},
]
yaml.dump(
{"services": {"name": name, "version": version, "os": os, "sources": sources}}
)
```
|
Python's `yaml` module allows you to dump dictionary data into yaml format:
```py
import yaml
# Create a dictionary with your data
tmp_data = dict(
services=dict(
name=name,
version=version,
os=os,
sources=sources
)
)
if __name__ == '__main__':
with open('my_yaml.yaml', 'w') as f:
yaml.dump(tmp_data, f)
```
Contents from the yaml:
```yaml
services:
name: testyaml
os: Linux
sources:
- source: news
target: industry
- source: testing
target: computer
version: '2.5'
```
| 8,449
|
26,810,892
|
I am trying to write the output of a python code in an excel sheet.
Here's my attempt:
```
import xlwt
wbk = xlwt.Workbook()
sheet = wbk.add_sheet('pyt')
row =0 # row counter
col=0 # col counter
inputdata = [(1,2,3,4),(2,3,4,5)]
for c in inputdata:
for d in c:
sheet.write(row,col,d)
col +=1
row +=1
wbk.save('pyt.xls')
Result obtained:
1 2 3 4
2 3 4 5
Desired result
row1: 1 2 3 4
row2: 2 3 4 5
```
Any ideas on how to get the desired result? Thanks.
|
2014/11/07
|
[
"https://Stackoverflow.com/questions/26810892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2274879/"
] |
You're seeing that behaviour because you're not setting `col` back to zero at the end of the row.
Instead, though, you should use the built-in [`enumerate()`](https://docs.python.org/2/library/functions.html#enumerate) which handles the incrementing for you.
```
for row, c in enumerate(inputdata):
for col, d in enumerate(c):
sheet.write(row,col,d)
```
|
Add `col = 0` on the next line after `row +=1`, as in the sketch below.
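A minimal sketch of the question's loop with that fix applied:
```
for c in inputdata:
    for d in c:
        sheet.write(row, col, d)
        col += 1
    row += 1
    col = 0  # reset the column counter for the next row
```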
| 8,450
|
1,650,095
|
I am reading the book [Think Python](http://www.greenteapress.com/thinkpython/) by Allen Downey. For chapter 4, one has to use a suite of modules called [Swampy](http://www.greenteapress.com/thinkpython/swampy/). I have downloaded and installed it.
The problem is that the modules were written in Python 2 and I have Python 3 (in Windows 7 RC1). When I ran the TurtleWorld module from Swampy, I got error messages about the print and exec statements, which are now functions in Python 3. I fixed those errors by including parentheses with print and exec in the code of the GUI and World modules. I also got an error that the Tkinter module could not be found. It turned out that in Python 3, the module name is spelled with a lower case t.
The third error is more difficult: ImportError: No module named tkFont.
Does anyone have any idea, please? Thank you.
|
2009/10/30
|
[
"https://Stackoverflow.com/questions/1650095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115139/"
] |
Many important third-party libraries have not yet been rewritten for Python 3; you'll have to stick to Python 2.x for now. There is no way around it. As it says on the [official Python download page](http://www.python.org/download/),
>
> If you don't know which version to
> use, start with Python 2.6.4; more
> existing third party software is
> compatible with Python 2 than Python 3
> right now.
>
>
>
|
There is a conversion tool for converting Python 2 code to work with Python 3: <http://svn.python.org/view/sandbox/trunk/2to3/>
I'm not sure how this extends to third-party libraries, but it might be worth running it over the Swampy code.
| 8,451
|
54,140,796
|
I have a very large string consisting of a series of numbers separated by one or more spaces. Some of the numbers are equal to -123, and the rest can be any random number.
```
example_string = "102.3 42.89 98 812.7 374 5 -123 8 -123 13 -123 21..."
```
I would like to replace the values that are not equal to -123 with 456 in the most efficient way possible.
```
updated_example_string = "456 456 456 456 456 456 -123 456 -123 456 -123 456..."
```
I know that python's regular expression library has a sub method that will replace matching values quite efficiently. Is there a way to replace values that DO NOT match?
As I mentioned, this is a rather large string, coming from a source file around 100MB. Assuming there's a way to use re.sub to accomplish this task, is that even the correct/most efficient way of handling such problem?
|
2019/01/11
|
[
"https://Stackoverflow.com/questions/54140796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/691928/"
] |
You can use this regex:
```
(^|\s)(?!-123(\s|$))-?[0-9.]+(?=\s|$)
```
It looks for the start of the string or a space, not followed by -123 and a space or end of string (using a negative lookahead), then some number of digits or a `.`, followed by either a space or the end of the string.
Then you can replace with `\g<1>456` to turn all those numbers into 456. The `\g<1>` in the replacement preserves any space captured by the first group.
[Demo on regex101](https://regex101.com/r/9Hu5wu/1)
In Python:
```
import re
string = "102.3 42.89 -1234 98 -812.7 374 5 -123 8 -123 13 -123 21 -123"
print re.sub(r'(^|\s)(?!-123(\s|$))-?[0-9.]+(?=\s|$)', '\g<1>456', string)
```
Output
```
456 456 456 456 456 456 456 -123 456 -123 456 -123 456 -123
```
[Demo on rextester](https://rextester.com/RIGDMV75452)
|
You could match only the numbers between whitespace boundaries and then use re.sub with a callback function to check whether the match is not `-123`. If it is not, replace it with `456`.
```
(?<!\S)-?\d+(?:\.\d+)?(?!\S)
```
**Explanation**
* `(?<!\S)` Negative lookbehind to assert what is on the left is not a non-whitespace character
* `-?` Optional `-`
* `\d+(?:\.\d+)?` Match 1+ digits with an optional part that matches a `.` and 1+ digits
* `(?!\S)` Negative lookahead to assert what is on the right is not a non-whitespace character
Example
```
import re
pattern = r"(?<!\S)-?\d+(?:\.\d+)?(?!\S)"
s = "102.3 42.89 98 812.7 374 5 -123 8 -123 13 -123 21"
print(re.sub(pattern, lambda m: "456" if m.group() != "-123" else m.group(), s))
```
Result
```
456 456 456 456 456 456 -123 456 -123 456 -123 456
```
See the [Regex demo](https://regex101.com/r/aReSQ7/1) | [Python demo](https://ideone.com/5JQSTf)
| 8,456
|
21,704,149
|
I am trying to configure Chronos to use the custom mesos-docker executor present at <https://github.com/mesosphere/mesos-docker/>. Every time I try to run the command it fails.
I created the task using the command below:
```
echo '{"schedule":"R/2014-02-14T00:52:00Z/PT90M", "name":"testing_docker_executor", "command":"docker_ubuntu_test /root/docker_test.sh", "epsilon":"PT15M", "executor":"/var/lib/mesos/executors/docker" }' | http POST localhost:8080/scheduler/iso8601
```
I also configured logging in the executor, and below are the logs I get when it fails:
```
Feb 11 13:51:36 ip6-localhost docker[13895]: Ready to serve!
Feb 11 13:51:36 ip6-localhost docker[13895]: Registered with Mesos slave
Feb 11 13:51:36 ip6-localhost docker[13895]: Task is: ct:1392126755612:2:testing_docker_executor
Feb 11 13:51:36 ip6-localhost docker[13895]: JSON from framework is rubbish
Feb 11 13:51:36 ip6-localhost docker[13895]: No JSON object could be decoded
Feb 11 13:51:36 ip6-localhost docker[13895]: Traceback (most recent call last):
Feb 11 13:51:36 ip6-localhost docker[13895]: File "/var/lib/mesos/executors/docker", line 120, in launchTask
Feb 11 13:51:36 ip6-localhost docker[13895]: self.data = json.loads(task.data) if task.data else {}
Feb 11 13:51:36 ip6-localhost docker[13895]: File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
Feb 11 13:51:36 ip6-localhost docker[13895]: return _default_decoder.decode(s)
Feb 11 13:51:36 ip6-localhost docker[13895]: File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
Feb 11 13:51:36 ip6-localhost docker[13895]: obj, end = self.raw_decode(s, idx=_w(s, 0).end())
Feb 11 13:51:36 ip6-localhost docker[13895]: File "/usr/lib/python2.7/json/decoder.py", line 383, in raw_decode
Feb 11 13:51:36 ip6-localhost docker[13895]: raise ValueError("No JSON object could be decoded")
Feb 11 13:51:36 ip6-localhost docker[13895]: ValueError: No JSON object could be decoded
Feb 11 13:51:36 ip6-localhost docker[13895]: []
Feb 11 13:51:36 ip6-localhost docker[13895]: Traceback (most recent call last):
Feb 11 13:51:36 ip6-localhost docker[13895]: File "/var/lib/mesos/executors/docker", line 67, in run
Feb 11 13:51:36 ip6-localhost docker[13895]: img = self.args[0]
Feb 11 13:51:36 ip6-localhost docker[13895]: IndexError: list index out of range
```
Is there something I am missing? Do I need to provide JSON in the command?
|
2014/02/11
|
[
"https://Stackoverflow.com/questions/21704149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1213542/"
] |
the problem is here:
```
size_t file;
```
size_t is unsigned, so it will always be >= 0
it should have been:
```
int file;
```
|
>
> the open call returns something greater than 0
>
>
>
`open` returns `int`, but you put it in an unsigned variable (`size_t` is usually unsigned), so you fail to detect when it is `< 0`.
| 8,457
|
65,732,046
|
Here is my code:
```
import numpy
a = numpy.arange(0.5, 1.5, 0.1, dtype=numpy.float64)
print(a)
print(a.tolist())
>>>[0.5 0.6 0.7 0.8 0.9 1. 1.1 1.2 1.3 1.4]
>>>[0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999, 1.0999999999999999, 1.1999999999999997, 1.2999999999999998, 1.4]
```
When trying to convert the **numpy array** to a **list**, I get values like ***0.7999999999999999*** in place of ***0.8***.
Please help me convert the **numpy array** to a normal Python **list** without losing the **decimal value**.
|
2021/01/15
|
[
"https://Stackoverflow.com/questions/65732046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14922407/"
] |
%%writefile is an IPython [cell magic](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cell-magics), not a magic method. Cell magics differ from line magics in that they are identified by a double %.
IPython cell and line magics are specific to IPython. See [here](https://ipython.readthedocs.io/en/stable/interactive/python-ipython-diff.html#magics) for further information.
Some constructs you can use in IPython do not exist in Python, and you get an error if you try to run them as Python commands, as stated in the IPython guide:
>
> Unless expressed otherwise all of the construct you will see here will raise a SyntaxError if run in a pure Python shell, or if executing in a Python script.
>
>
>
This is because IPython is a set of tools aimed at improving the experience of using Python interactively, easing some repetitive tasks.
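A minimal sketch of the distinction (hypothetical file name; the programmatic call assumes an IPython/Jupyter session):
```
# In a plain Python script the cell-magic line below is a SyntaxError:
# %%writefile out.txt
# Under IPython, the equivalent programmatic call is:
from IPython import get_ipython

ip = get_ipython()  # returns None when not running under IPython
if ip is not None:
    ip.run_cell_magic("writefile", "out.txt", "hello\n")
```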
|
If you mean this [magic command in iPython](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-writefile) (note: *command*, not *function*), then that's your answer; it is a specific iPython extension, not part of the Python language itself.
| 8,458
|
55,454,569
|
I am calling a Java binary in a Unix environment, wrapped inside a Python script.
When I call the script from bash, the output comes out clean and is stored in the desired variable. However, when I run the same script from cron, the output stored (in a variable) is incomplete.
my code:
```
command = '/opt/HP/BSM/PMDB/bin/abcAdminUtil -abort -streamId ETL_' \
'SystemManagement_PA@Fact_SCOPE_OVPAGlobal'
proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
(output, err) = proc.communicate() # Storing Output in output variable
```
Value of output variable when running from shell:
```
Abort cmd output:PID:8717
Executing abort function
hibernateConfigurationFile = /OBRHA/HPE-OBR/PMDB/lib/hibernate-core-4.3.8.Final.jar
Starting to Abort Stream ETL_SystemManagement_PA@Fact_SCOPE_OVPAGlobal
Aborting StreamETL_SystemManagement_PA@Fact_SCOPE_OVPAGlobal
```
Value of output variable when running from cron:
```
PID:830
```
It seems the output after creating the new process is not being stored in the variable, and I don't know why.
|
2019/04/01
|
[
"https://Stackoverflow.com/questions/55454569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8848689/"
] |
```
alias py=python3.7
py filename.py
```
Add the alias to your `.bash_aliases` to get it in every terminal.
|
If you're using linux, you can shorten it to nothing by adding the line
```py
#!/usr/bin/env python3.7
```
to the top of your python file. Then `chmod 755 <filename.py>` and run it like any other executable.
| 8,459
|
59,544,848
|
I have captcha image as attached in this question.
[](https://i.stack.imgur.com/wVbyF.png)
I am trying to extract the text in the image. The following code is able to turn all areas except the text and lines white:
```
import cv2
from PIL import Image
import numpy as np
image1 = Image.open("E:\\python\\downloaded\\captcha.png").convert('L')
image2 = Image.open("E:\\python\\downloaded\\captcha.png").convert('L')
pix = image1.load()
for column in range(0, image1.height):
for row in range(0, image1.width):
if pix[row, column] >= 90:
pix[row, column] = 255
cv2.imshow("1", np.array(image2))
cv2.imshow("2", np.array(image1))
cv2.waitKey(0)
```
But when I try to remove the line crossing the text, it does not seem to work. I tried the portion of code below, which was posted in another StackOverflow question, but it does not work.
[](https://i.stack.imgur.com/b9iNk.jpg)
```
def eliminate_zeros(self,vector):
return [(dex,v) for (dex,v) in enumerate(vector) if v!=0 ]
def get_line_position(self,img):
sumx=img.sum(axis=0)
list_without_zeros=self.eliminate_zeros(sumx)
min1,min2=heapq.nsmallest(2,list_without_zeros,key=itemgetter(1))
l=[dex for [dex,val] in enumerate(sumx) if val==min1[1] or val==min2[1]]
mindex=[l[0],l[len(l)-1]]
cols=img[:,mindex[:]]
col1=cols[:,0]
col2=cols[:,1]
col1_without_0=self.eliminate_zeros(col1)
col2_without_0=self.eliminate_zeros(col2)
line_length=len(col1_without_0)
dex1=col1_without_0[round(len(col1_without_0)/2)][0]
dex2=col2_without_0[round(len(col2_without_0)/2)][0]
p1=[dex1,mindex[0]]
p2=[dex2,mindex[1]]
return p1,p2,line_length
def remove_line(self,p1,p2,LL,img):
m=(p2[0]-p1[0])/(p2[1]-p1[1]) if p2[1]!=p1[1] else np.inf
w,h=len(img),len(img[0])
x=list(range(h))
y=list(map(lambda z : int(np.round(p1[0]+m*(z-p1[1]))),x))
img_removed_line=list(img)
for dex in range(h):
i,j=y[dex],x[dex]
i=int(i)
j=int(j)
rlist=[]
while i>=0 and i<len(img_removed_line)-1:
f1=i
if img_removed_line[i][j]==0 and img_removed_line[i-1][j]==0:
break
rlist.append(i)
i=i-1
i,j=y[dex],x[dex]
i=int(i)
j=int(j)
while i>=0 and i<len(img_removed_line)-1:
f2=i
if img_removed_line[i][j]==0 and img_removed_line[i+1][j]==0:
break
rlist.append(i)
i=i+1
if np.abs(f2-f1) in [LL+1,LL,LL-1]:
rlist=list(set(rlist))
for k in rlist:
img_removed_line[k][j]=0
return img_removed_line
```
I am new to CV; can someone help and suggest the way here? The original and partially processed image files are attached here.
|
2019/12/31
|
[
"https://Stackoverflow.com/questions/59544848",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8340105/"
] |
My approach is based on the fact that the line is thinner than the characters. In this example I used blurring, threshold and morphology to get rid of the line between the characters. The result is this:
[](https://i.stack.imgur.com/VwXMR.png)
```py
import cv2
import numpy as np
image = cv2.imread('captcha.png')
image = cv2.blur(image, (3, 3))
ret, image = cv2.threshold(image, 90, 255, cv2.THRESH_BINARY)
image = cv2.dilate(image, np.ones((3, 1), np.uint8))
image = cv2.erode(image, np.ones((2, 2), np.uint8))
cv2.imshow("1", np.array(image))
cv2.waitKey(0)
```
|
You can use OpenCV functions like threshold, dilate, bitwise_and and bitwise_not for removing unwanted lines from the captcha:
```
import numpy as np
import cv2
img = cv2.imread('captcha.jpg',0)
horizontal_inv = cv2.bitwise_not(img)
masked_img = cv2.bitwise_and(img, img, mask=horizontal_inv)
masked_img_inv = cv2.bitwise_not(masked_img)
kernel = np.ones((5,5),np.uint8)
dilation = cv2.dilate(masked_img_inv,kernel,iterations = 3)
ret,thresh2 = cv2.threshold(dilation,254,255,cv2.THRESH_BINARY_INV)
thresh2=cv2.bitwise_not(thresh2)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
| 8,460
|
3,106,994
|
I've been researching to find an efficient solution to this. I've looked into diffing engines (Google's diff-match-patch, Python's diff) and some longest common chain algorithms.
I was hoping on getting you guys suggestions on how to solve this issue. Any algorithm or library in particular you would like to recommend?
|
2010/06/24
|
[
"https://Stackoverflow.com/questions/3106994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/245968/"
] |
In addition to `difflib` and other common subsequence libraries, if it's natural language text, you might look into stemming, which normalizes words to their root form. You can find several implementations in the Natural Language Toolkit ( <http://www.nltk.org/> ) library. You can also compare blobs of natural language text more semantically by using N-Grams ( <http://en.wikipedia.org/wiki/N-gram> ).
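For instance, a minimal sketch with NLTK's Porter stemmer (assuming the `nltk` package is installed):
```
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
# Inflected forms reduce to a common root, so they compare as equal
print(stemmer.stem("comparing"))  # compar
print(stemmer.stem("compared"))   # compar
```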
|
Longest common chain? Perhaps this will help then: <http://en.wikipedia.org/wiki/Longest_common_subsequence_problem>
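As a minimal sketch, Python's standard `difflib` already exposes a longest-common-block search:
```
import difflib

a = "the quick brown fox"
b = "a quick brown dog"
m = difflib.SequenceMatcher(None, a, b)
match = m.find_longest_match(0, len(a), 0, len(b))
print(a[match.a:match.a + match.size])  # longest common run of characters
```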
| 8,461
|
36,732,614
|
I am getting the errors below when I follow **step 4** of the instructions from [Getting Started with ARC Open Source on Linux](https://chromium.googlesource.com/arc/arc/+/release-39.4410.148.0/docs/getting-started-open-source.md). The OS is Ubuntu 14.04 LTS running in Hyper-V.
>
> UBUNTU14:~/arc$ ./configure
>
> ERROR:root:While running
> ['third_party/tools/depot_tools/third_party/gsutil/gsutil', 'cp',
> 'gs://arc-build/naclports/builds/pepper_40/python.zip',
> '/tmp/tmpUZ0IoK/naclports-python'] ERROR:root:GSResponseError:
> status=403, code=None, reason=Forbidden.
>
>
> ERROR:root:Try prodaccess, and if it does not solve the problem try rm
> ~/.devstore_token @@@STEP_WARNINGS@@@ ERROR:root:Retrying after 9 s
> sleeping Traceback (most recent call last): File
> "/home/fkiller/arc/src/build/build_common.py", line 938, in wrapper
> return func(*args, **kwargs) File "/home/fkiller/arc/src/build/util/download_package_util.py", line 243,
> in _download_package_with_retries
> self._download_method(url, download_package_path) File "/home/fkiller/arc/src/build/util/download_package_util.py", line 119,
> in _download
> build_common.get_gsutil_executable(), 'cp', url, destination_path]) File
> "/home/fkiller/arc/src/build/util/download_package_util.py", line 97,
> in execute_subprocess
> output = subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT) File "/usr/lib/python2.7/subprocess.py",
> line 573, in check_output
> raise CalledProcessError(retcode, cmd, output=output) CalledProcessError: Command
> '['third_party/tools/depot_tools/third_party/gsutil/gsutil', 'cp',
> 'gs://arc-build/naclports/builds/pepper_40/python.zip',
> '/tmp/tmpUZ0IoK/naclports-python']' returned non-zero exit status 1
>
>
>
Any idea how to resolve this without changing the build script? I could manually point python.zip at another source such as <https://naclports.storage.googleapis.com/builds/pepper_40/trunk-147-g49eb4c9/publish/python/pnacl/python.zip>, but I want to build it as-is without changing scripts.
I've already tried to set up gsutil and its authenticator, but that didn't fix the issue.
**EDIT:** After @elijah-taylor fixed the ACL, I'm now getting the errors below:
>
> Traceback (most recent call last): File "src/build/configure.py",
> line 365, in
> sys.exit(main()) File "src/build/configure.py", line 347, in main
> _gclient_sync_third_party() File "src/build/configure.py", line 132, in _gclient_sync_third_party
> subprocess.check_output(cmd, cwd=os.path.dirname(gclient_filename)) File
> "/usr/lib/python2.7/subprocess.py", line 566, in check_output
> process = Popen(stdout=PIPE, *popenargs, **kwargs) File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
> raise child_exception OSError: [Errno 2] No such file or directory
>
>
>
In line 132,
```
File "src/build/configure.py", line 132, in _gclient_sync_third_party
subprocess.check_output(cmd, cwd=os.path.dirname(gclient_filename))
```
gclient_filename is "third_party/.gclient" and os.path.dirname(gclient_filename) is "third_party".
|
2016/04/20
|
[
"https://Stackoverflow.com/questions/36732614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633210/"
] |
The problem was bad ACLs on the files. I reached out to @elijah-taylor for a fix, it should now work!
|
I faced the same issue; it was fixed after running the following:
```
apt-get install gsutil
apt-get install libwww-perl
chmod +x ./third_party/tools/depot_tools/third_party/gsutil/gsutil
```
| 8,467
|
51,432,473
|
the problem
-----------
I'm trying to use the `concurrent.futures` library to run a function on a list of "things". The code looks something like this.
```
import concurrent.futures
import logging
logger = logging.getLogger(__name__)
def process_thing(thing, count):
logger.info(f'starting processing for thing {count}')
# Do some io related stuff
logger.info(f'finished processing for thing {count}')
def process_things_concurrently(things):
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for count, thing in enumerate(things):
futures.append(executor.submit(process_thing, thing, count))
for future in concurrent.futures.as_completed(futures):
future.result()
```
As the code is now, the logging can happen in any order.
For example:
```
starting processing for thing 2
starting processing for thing 1
finished processing for thing 2
finished processing for thing 1
```
I want to change the code so that the records for a particular call of `process_thing()` are buffered until the future finishes.
In other words, all of the records for a particular call stick together. These 'groups' of records are ordered by when the call finished.
So from the example above the log output above would instead look like
```
starting processing for thing 2
finished processing for thing 2
starting processing for thing 1
finished processing for thing 1
```
what I've tried
---------------
I tried making a logger for each call that would have its own custom handler, possibly subclassing [BufferingHandler](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.BufferingHandler). But eventually there will be lots of "things" and I read that making a lot of loggers is bad.
I'm open to anything that works! Thanks.
|
2018/07/19
|
[
"https://Stackoverflow.com/questions/51432473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7830612/"
] |
Here's a little recipe for a `DelayedLogger` class that puts all calls to the `logger`'s methods into a list instead of actually performing them, until you finally do a `flush`, where they are all fired off.
```
from functools import partial
class DelayedLogger:
def __init__(self, logger):
self.logger = logger
self._call_stack = [] # list of (method, *args, **kwargs) tuples
self._delayed_methods = {
name : partial(self._delayed_method_proxy, getattr(logger, name))
for name in ["info", "debug", "warning", "error", "critical"]
}
def __getattr__(self, name):
""" Proxy getattr to self.logger, except for self._delayed_methods. """
return self._delayed_methods.get(name, getattr(self.logger, name))
def _delayed_method_proxy(self, method, *args, **kwargs):
self._call_stack.append((method, args, kwargs))
def flush(self):
""" Flush self._call_stack to the real logger. """
for method, args, kwargs in self._call_stack:
method(*args, **kwargs)
self._call_stack = []
```
In your example, you could use it like so:
```
import logging
logger = logging.getLogger(__name__)
def process_thing(thing, count):
dlogger = DelayedLogger(logger)
dlogger.info(f'starting processing for thing {count}')
# Do some io related stuff
dlogger.info(f'finished processing for thing {count}')
dlogger.flush()
process_thing(None, 10)
```
There may be ways to beautify this or make it more compact, but it should get the job done if that's what you really want.
|
First I modified @Jeronimo's answer to come up with this
```
import typing
from functools import partial

class DelayedLogger:
class ThreadLogger:
"""to be logged from a single thread"""
def __init__(self, logger):
self._call_stack = [] # list of (method, *args, **kwargs) tuples
self.logger = logger
self._delayed_methods = {
name: partial(self._delayed_method_proxy, getattr(logger, name))
for name in ["info", "debug", "warning", "error", "critical"]
}
def __getattr__(self, name):
""" Proxy getattr to self.logger, except for self._delayed_methods. """
return self._delayed_methods.get(name, getattr(self.logger, name))
def _delayed_method_proxy(self, method, *args, **kwargs):
self._call_stack.append((method, args, kwargs))
def flush(self):
""" Flush self._call_stack to the real logger. """
for method, args, kwargs in self._call_stack:
method(*args, **kwargs)
self._call_stack = []
def __init__(self, logger):
self.logger = logger
        self._thread_loggers: typing.Dict[int, "DelayedLogger.ThreadLogger"] = {}
def new_thread(self, count):
"""Make a new sub-logger class that writes to the call stack in its slot"""
new_logger = self.ThreadLogger(self.logger)
self._thread_loggers[count] = new_logger
return new_logger
def get_thread(self, count):
return self._thread_loggers[count]
delayed_logger = DelayedLogger(logger)
```
Which can be used like this
```
delayed_logger = DelayedLogger(logger)
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for count, thing in enumerate(things):
futures.append(executor.submit(process_thing,
count,
thing,
logger=delayed_logger.new_thread(count)))
for future in concurrent.futures.as_completed(futures):
count = future.result()
delayed_logger.get_thread(count).flush()
```
The problem here is that `process_thing()` now needs to take the logger as an argument, and the logger is limited in scope. If `process_thing()` calls subroutines, then their logging won't be delayed.
Probably the solution is just not to try to do this at all. Instead, threads can use a log filter or some other way to distinguish their messages, as in the sketch below.
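A minimal sketch of that filter idea, assuming tagging each record with its thread is enough (the class name is hypothetical):
```
import logging
import threading

class ThreadTagFilter(logging.Filter):
    """Annotate every record with the name of the thread that emitted it."""
    def filter(self, record):
        record.thread_tag = threading.current_thread().name
        return True  # keep the record, just annotated

handler = logging.StreamHandler()
handler.addFilter(ThreadTagFilter())
handler.setFormatter(logging.Formatter("[%(thread_tag)s] %(message)s"))
logging.getLogger(__name__).addHandler(handler)
```
Note that `logging` already records `threadName` on every record, so `%(threadName)s` in a format string achieves much the same without a custom filter.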
| 8,468
|
46,158,930
|
I have questions concerning the output of the `apply()` method of a Python `pandas.DataFrame`.
### Q1 -
Why does this function return a `pandas.DataFrame` **with the same format** as the input (`pandas.DataFrame`) when the `apply` function returns an `array` with the same shape as the input?
For instance
```
foo = pd.DataFrame([[1,2],[3,4]],columns=['a','b'])
foo.apply(lambda x: [np.min(x)/2,np.max(x)/2], axis='index')
```
code will return:
```
a b
0 min(a)/2 min(b)/2
1 max(a)/2 max(b)/2
```
### Q2 -
For some reason I would like to output a `pandas.Series` of arrays instead:
```
0 [min(a)/2, max(a)/2]
1 [min(b)/2, max(b)/2]
...
```
I have tried `reduce=True` without success.
Then, **how should I do this?**
Thank you in advance.
|
2017/09/11
|
[
"https://Stackoverflow.com/questions/46158930",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3941704/"
] |
You can do it in one line by composing a regular expression pattern `"(item1|item2|item3)"`
```
let array = ["dee", "kamal"]
let str = "Hello all how are you, I m here for deepak."
let success = str.range(of: "(" + array.joined(separator: "|") + ")", options: .regularExpression) != nil
```
|
You should iterate over the array and for each element, call `str.contains`.
```
for word in array {
if str.contains(word) {
print("\(word) is part of the string")
} else {
print("Word not found")
}
}
```
| 8,469
|
46,606,947
|
I'm trying to INSERT into MySQL from a CSV file; the first 'column' in the file is a date in this format:
```
31/08/2017;
```
and my column in the table is set as YYYY-MM-DD.
this is my code:
```
import datetime
import csv
import MySQLdb
...
insertionSQL="INSERT INTO transactions (trans_date, trans_desc, trans_amnt) VALUES(" + datetime.datetime.strptime('{0}', '%d/%m/%Y').strftime('%Y-%m-%d') + ",{2},{3}), row)"
cur.execute(insertionSQL)
```
I get the following error from my python script:
```
(data_string, format))
ValueError: time data '{0}' does not match format '%d/%m/%Y'
```
|
2017/10/06
|
[
"https://Stackoverflow.com/questions/46606947",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5794219/"
] |
You will first have to create a path as a rounded rectangle. Then with each step in your animation you have to modify the eight segments of the path. This will only work with `Path` objects, not if your rectangle is a `Shape`.
The segment points and the handles have to be set like this:
[](https://i.stack.imgur.com/pVl4s.png)
κ (kappa) is defined in paper.js as `Numerical.KAPPA` (more on Kappa [*here*](http://www.whizkidtech.redprince.net/bezier/circle/kappa/)).
The code to change the radius could look like this ([*Click here for the Sketch*](http://sketch.paperjs.org/#S/fVRdb5tAEPwrJ57AwQSSuJHiuFJSqVKVpLXsvoU8nGENp8BhHUsSN/F/790B8fHR8oDY2d2ZuQH73eI0B+vKWj8DRqnlWlERqzrkIX+hggiIkCwIh1eypJh6K1lTnmRga6hgHO3A910ib46r59bsD5jYue/MQ66IvG2VZfs1ZPIZYkmLooJ5K1Ui7CQWzOtyByICjjQBCfp6alvxCFnBScG/C+nbhhc54ZB32URj/mShydQOloDfCsFBrGjMqtJWPlyD3FFDbEtsY/8rmfnk48O0cE38RofIS1udLMg00BqHkB86/sZEVXwuEUXFY8aTZU2tORVjD5dHflBx54zbM5lkXdA32x9SqHQlQ4jt+7pVA6WkaGW9jUbqOaLHtC810iU7la+MTI7aRzrvlcX6AEckBZak6Bi0KeVxBvfAE0wVea0yIT+rHASLaObd3SyXN81Gfc8aow1pAngPW7Qdt27joP272H12xaC7Up4++5tB/7ZALHLb6XjQnx8kncxknctE2tRU+9F/8nbqk/fe5KRGzk0kIyfNmXtLdS6/qu5ijf7gGpya2ZnrFwPNSxMRZDqieTGqednT/Jdk0Ars280vJrIZlQwMcnPv6GP/H8mzgeTMRHA02bM+e7vYMdJPVv9U5T/dRgB91hKldfX4dPgL)):
```
var rect = new Path.Rectangle(new Point(100, 100), new Size(100, 100), 30);
rect.fullySelected = true;
var step = 1;
var percentage = 0;
function onFrame(event) {
percentage += step;
setCornerRadius(rect, percentage)
if (percentage > 50 || percentage < 0) {
step *= -1;
}
}
function setCornerRadius(rectPath, roundingPercent) {
roundingPercent = Math.min(50, Math.max(0, roundingPercent));
var rectBounds = rectPath.bounds;
var radius = roundingPercent/100 * Math.min(rectBounds.width, rectBounds.height);
var handleLength = radius * Numerical.KAPPA;
l = rectBounds.getLeft(),
t = rectBounds.getTop(),
r = rectBounds.getRight(),
b = rectBounds.getBottom();
var segs = rectPath.segments;
segs[0].point.x = segs[3].point.x = l + radius;
segs[0].handleOut.x = segs[3].handleIn.x = -handleLength;
segs[4].point.x = segs[7].point.x = r - radius;
segs[4].handleOut.x = segs[7].handleIn.x = handleLength;
segs[1].point.y = segs[6].point.y = b - radius;
segs[1].handleIn.y = segs[6].handleOut.y = handleLength;
segs[2].point.y = segs[5].point.y = t + radius;
segs[2].handleOut.y = segs[5].handleIn.y = -handleLength;
}
```
***Edit:*** I just found a much easier way using a shape. Not sure which approach performs faster.
Here is the implementation using a `Shape` ([*Click here for the Sketch*](http://sketch.paperjs.org/#S/VVHBTsMwDP0VK6dklNIJ7bJRLkjckBA7Eg6h9daoW1Kl3kBs/XectIcuJ/v5+fnZuQhnjijWYtsiVY3IROXrmJ9NgN7+IZSwLIqNdhEIWBEDDn9g25gO8w8GjNsfUEbsNnv31pHk5iwqqGxsY0kZdbOkrhh+LBTLR+m8p+BbfPEHH3iMFgFrLbg4Tu8Ju2hnMtNhqNCR2UePRWLtTq4i6x149xp4L4lnZii4cJFu+PBmqMmP1skV+xsT8ys5ntHu0kgV7WlKBoOp7ann/nSaxZz8MN1Jk92BnBWeS1gVcL3OyU/seLIF/NJmixLul0lg0G6I6/BvfAc0bRcv2Yv159fwDw==)).
```
var size = 100;
var rect = new Shape.Rectangle(new Rectangle(new Point(100, 100), new Size(size, size)), 30);
rect.strokeColor = "red";
var step = 1;
var percentage = 0;
function onFrame(event) {
percentage = Math.min(50, Math.max(0, percentage + step));
rect.radius = size * percentage / 100;
if (percentage >= 50 || percentage <= 0) {
step *= -1;
}
}
```
|
Change the corner size to the following
```
var cornerSize = circle.radius / 1;
```
| 8,471
|
51,308,114
|
I am again stuck with extracting and comparing list elements.
I have following list of lists:
```
list = [['laravel', 1.0, 54],
['laravel', 1.0, 3615],
['php', 1.0, 1405],
['php', 1.0, 5175],
['php', 1.0, 5176],
['php', 1.0, 54],
['php', 1.0, 5252],
['php', 1.0, 5279],
['python', 1.0, 54],
['laravel', 0.8333333333333334, 54],
['python',0.8333333333333334, 3615]]
```
We can see ID 54 has 3 skills (laravel, python, php) and 3615 has 2 skills.
Now, my desired output is as below:
```
[{
id :54
No_matched_skills: 3
skills: laravel,python,php
},
{
id : 3615
No_matched_skills : 2
skills: laravel,python
}]
```
Can anyone please tell me how I can do this?
|
2018/07/12
|
[
"https://Stackoverflow.com/questions/51308114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9758339/"
] |
Using '`Counter`' and '`defaultdict`' from Python:
```
l = [['laravel', 1.0, 54],
['laravel', 1.0, 3615],
['php', 1.0, 1405],
['php', 1.0, 5175],
['php', 1.0, 5176],
['php', 1.0, 54],
['php', 1.0, 5252],
['php', 1.0, 5279],
['python', 1.0, 54],
['laravel', 0.8333333333333334, 54],
['python',0.8333333333333334, 3615]]
from pprint import pprint
from collections import Counter, defaultdict
c = Counter(i[2] for i in l)
d = defaultdict(lambda: defaultdict(int))
for i in l:
if c[i[2]] > 1:
d[i[2]][i[0]] += 1
rv = []
for k, v in d.items():
rv.append({'id': k, 'No_matched_skills': len(v), 'skills': [*v]})
pprint(rv, width=10)
```
Output:
```
[{'No_matched_skills': 3,
'id': 54,
'skills': ['laravel',
'php',
'python']},
{'No_matched_skills': 2,
'id': 3615,
'skills': ['laravel',
'python']}]
```
|
You could use something like this,
```
my_list = [['laravel', 1.0, 54],
['laravel', 1.0, 3615],
['php', 1.0, 1405],
['php', 1.0, 5175],
['php', 1.0, 5176],
['php', 1.0, 54],
['php', 1.0, 5252],
['php', 1.0, 5279],
['python', 1.0, 54],
['laravel', 0.8333333333333334, 54],
['python',0.8333333333333334, 3615]]
compute_dict = {}
for l in my_list:
compute_dict.setdefault(l[2], [])
compute_dict[l[2]].append(l[0])
final_list = []
for k,v in compute_dict.items():
final_list.append({"id":k,"No_matched_skills":len(set(v)),"skills":", ".join(set(v))})
```
Basically, the first step is to create a dictionary with IDs as keys and programming languages as values. Therefore, `compute_dict` will look like:
```
>>> {54: ['laravel', 'php', 'python', 'laravel'], 3615: ['laravel','python'], 1405: ['php'], 5175: ['php'], 5176: ['php'], 5252: ['php'], 5279: ['php']}
```
So, from there we are able to create a list with the expected output. Note that I'm using `set()` in order to remove duplicates from the original dict.
| 8,472
|
53,853,038
|
I have a python list `l` containing instances of the class `Element`:
```py
class Element:
def __init__(self, id, value):
self.id = id
self.value = value
l = [Element(1, 100), Element(1, 200), Element(2, 1), Element(3, 4), Element(3, 4)]
```
Now I want to sum the `value` members of all `Element` instances whose `id` is equal, to obtain this list:
```py
l = [Element(1, 300), Element(2, 1), Element(3, 8)]
```
What is the most pythonic way to do this?
|
2018/12/19
|
[
"https://Stackoverflow.com/questions/53853038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7362422/"
] |
There is (almost?) nothing that `itertools` cannot do. Take a look at [`groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby):
```
from itertools import groupby
from operator import attrgetter
class Element:
def __init__(self, id, value):
self.id = id
self.value = value
def __repr__(self): # kudos @mesejo
return "Element({}, {})".format(self.id, self.value)
l = [Element(1, 100), Element(1, 200), Element(2, 1), Element(3, 4), Element(3, 4)]
l.sort(key=attrgetter('id')) # if it is already sorted by 'id', comment-out
res = [Element(g, sum(sub.value for sub in k)) for g, k in groupby(l, key=attrgetter('id'))]
```
which results in:
```
print(res) # [Element(1, 300), Element(2, 1), Element(3, 8)]
```
|
One way would be to create a [`defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict) that maps ids to sums of values. Then we can take those results and use them to build a new list of `Elements`. One way to do that is to use [`starmap`](https://docs.python.org/3/library/itertools.html#itertools.starmap) to map the items of that dictionary to the arguments to `Element`
```
from collections import defaultdict
from itertools import starmap
class Element:
def __init__(self, id, value):
self.id = id
self.value = value
def __repr__(self):
return "Element({}, {})".format(self.id, self.value)
l = [Element(1, 100), Element(1, 200), Element(2, 1), Element(3, 4), Element(3, 4)]
d = defaultdict(int)
for e in l:
d[e.id] += e.value
print(list(starmap(Element, d.items())))
# [Element(1, 300), Element(2, 1), Element(3, 8)]
```
| 8,482
|
26,710,578
|
I am using **Python 2.7**. I am creating 3 lists (of float values, if it matters at all), and I am using a JSON object to save them to a file.
**Say, for example:**
```
L1=[1,2,3,4,5]
L2=[11,22,33,44,55]
L3=[22,33,44,55,66]
b={}
b[1]=L1
b[2]=L2
b[3]=L3
json.dump(b,open("file.txt","w"))
```
I need to read these values back from this "file.txt" into the 3 lists.
Can anyone please point me to a resource?
How do I proceed with retrieving these values?
|
2014/11/03
|
[
"https://Stackoverflow.com/questions/26710578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2126725/"
] |
Try
```
content = json.load(open('file.txt'))
```
or using a the [with context manager](https://stackoverflow.com/questions/1369526/what-is-the-python-keyword-with-used-for) to close the file for you:
```
with open('file.txt') as f:
content = json.load(f)
```
Also, read the library's [documentation](https://docs.python.org/2/library/json.html)
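To split `content` back into the three lists, note that `json.dump` converted the integer keys `1, 2, 3` into the strings `'1', '2', '3'`. A small sketch:
```
import json

with open('file.txt') as f:
    content = json.load(f)

# keys come back as strings, not ints
L1 = content['1']
L2 = content['2']
L3 = content['3']
```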
|
I used the following code:
```
import json
path=r"file.txt"
for line in open(path):
obj = json.loads(line)
x=obj['1']
y=obj['2']
z=obj['3']
```
Now, I will have the list *L1 in x*, *L2 in y* and *L3 in z*.
| 8,485
|
8,711,794
|
I am looking for the simplest **generic** way to convert this python list:
```
x = [
{"foo":"A", "bar":"R", "baz":"X"},
{"foo":"A", "bar":"R", "baz":"Y"},
{"foo":"B", "bar":"S", "baz":"X"},
{"foo":"A", "bar":"S", "baz":"Y"},
{"foo":"C", "bar":"R", "baz":"Y"},
]
```
into:
```
foos = [
{"foo":"A", "bars":[
{"bar":"R", "bazs":[ {"baz":"X"},{"baz":"Y"} ] },
{"bar":"S", "bazs":[ {"baz":"Y"} ] },
]
},
{"foo":"B", "bars":[
{"bar":"S", "bazs":[ {"baz":"X"} ] },
]
},
{"foo":"C", "bars":[
{"bar":"R", "bazs":[ {"baz":"Y"} ] },
]
},
]
```
The combination "foo","bar","baz" is unique, and as you can see the list is not necessarily ordered by this key.
|
2012/01/03
|
[
"https://Stackoverflow.com/questions/8711794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/248922/"
] |
```
#!/usr/bin/env python3
from itertools import groupby
from pprint import pprint
x = [
{"foo":"A", "bar":"R", "baz":"X"},
{"foo":"A", "bar":"R", "baz":"Y"},
{"foo":"B", "bar":"S", "baz":"X"},
{"foo":"A", "bar":"S", "baz":"Y"},
{"foo":"C", "bar":"R", "baz":"Y"},
]
def fun(x, l):
ks = ['foo', 'bar', 'baz']
kn = ks[l]
kk = lambda i:i[kn]
for k,g in groupby(sorted(x, key=kk), key=kk):
kg = [dict((k,v) for k,v in i.items() if k!=kn) for i in g]
d = {}
d[kn] = k
if l<len(ks)-1:
d[ks[l+1]+'s'] = list(fun(kg, l+1))
yield d
pprint(list(fun(x, 0)))
```
---
```
[{'bars': [{'bar': 'R', 'bazs': [{'baz': 'X'}, {'baz': 'Y'}]},
{'bar': 'S', 'bazs': [{'baz': 'Y'}]}],
'foo': 'A'},
{'bars': [{'bar': 'S', 'bazs': [{'baz': 'X'}]}], 'foo': 'B'},
{'bars': [{'bar': 'R', 'bazs': [{'baz': 'Y'}]}], 'foo': 'C'}]
```
---
**note:** dict is unordered, but the content is the same as yours.
|
I would define a function that performs a single grouping step like this:
```
from itertools import groupby
def group(items, key, subs_name):
return [{
key: g,
subs_name: [dict((k, v) for k, v in s.iteritems() if k != key)
for s in sub]
} for g, sub in groupby(sorted(items, key=lambda item: item[key]),
lambda item: item[key])]
```
and then do
```
[{'foo': g['foo'], 'bars': group(g['bars'], "bar", "bazs")} for g in group(x,
"foo", "bars")]
```
which gives the desired result for `foos`.
| 8,486
|
8,552,556
|
I have never used python in my life. I need to make a little fix to a given code.
I need to replace this
```
new_q = q[:q.index('?')] + str(random.randint(1,rand_max)) + q[q.index('?')+1:]
```
with something that replaces every occurrence of ? with a random, different number.
How can I do that?
|
2011/12/18
|
[
"https://Stackoverflow.com/questions/8552556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/182416/"
] |
```
import re
import random
a = 'abc?def?ghi?jkl'
rand_max = 9
re.sub(r'\?', lambda x:str(random.randint(1,rand_max)), a)
# returns 'abc3def4ghi6jkl'
```
or without regexp:
```
import random
a = 'abc?def?ghi?jkl'
rand_max = 9
while '?' in a:
a = a[:a.index('?')] + str(random.randint(1,rand_max)) + a[a.index('?')+1:]
```
|
If you need all the numbers to be different, just using a new random number for each occurrence of `?` won't be enough -- a random number might occur twice. You could use the following code in this case:
```
random_numbers = iter(random.sample(range(1, rand_max + 1), q.count("?")))
new_q = "".join(c if c != "?" else str(next(random_numbers)) for c in q)
```
| 8,488
|
37,803,628
|
I'm trying to create a CNN using Tensorflow that classifies images into **16 classes**.
My original image size is 72x72x1, and my network is structured like this:
```
# Network
n_input = dim
n_output = nclass # 16
weights = {
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32], stddev=0.1)),
'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.1)),
'wd1': tf.Variable(tf.random_normal([9*9*128, 1024], stddev=0.1)),
'wd2': tf.Variable(tf.random_normal([1024, n_output], stddev=0.1))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32], stddev=0.1)),
'bc2': tf.Variable(tf.random_normal([64], stddev=0.1)),
'bd1': tf.Variable(tf.random_normal([1024], stddev=0.1)),
'bd2': tf.Variable(tf.random_normal([n_output], stddev=0.1))
}
```
Here is my conv net function:
```
def conv_basic(_input, _w, _b, _keepratio):
# Input
_input_r = tf.reshape(_input, shape=[-1, 72, 72, 1])
# Conv1
_conv1 = tf.nn.relu(tf.nn.bias_add(
tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
, _b['bc1']))
_pool1 = tf.nn.max_pool(_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
mean, var = tf.nn.moments(_pool1, [0, 1, 2])
_pool1 = tf.nn.batch_norm_with_global_normalization(_pool1, mean, var, 1., 0., 1e-7, 0)
_pool_dr1 = tf.nn.dropout(_pool1, _keepratio)
# Conv2
_conv2 = tf.nn.relu(tf.nn.bias_add(
tf.nn.conv2d(_pool_dr1, _w['wc2'], strides=[1, 1, 1, 1], padding='SAME')
, _b['bc2']))
_pool2 = tf.nn.max_pool(_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
mean, var = tf.nn.moments(_pool2, [0, 1, 2])
_pool2 = tf.nn.batch_norm_with_global_normalization(_pool2, mean, var, 1., 0., 1e-7, 0)
_pool_dr2 = tf.nn.dropout(_pool2, _keepratio)
# Vectorize
_dense1 = tf.reshape(_pool_dr2, [-1, _w['wd1'].get_shape().as_list()[0]])
# Fc1
_fc1 = tf.nn.relu(tf.add(tf.matmul(_dense1, _w['wd1']), _b['bd1']))
_fc_dr1 = tf.nn.dropout(_fc1, _keepratio)
# Fc2
_out = tf.add(tf.matmul(_fc_dr1, _w['wd2']), _b['bd2'])
# Return everything
out = {
'input_r': _input_r,
'conv1': _conv1,
'pool1': _pool1,
'pool1_dr1': _pool_dr1,
'conv2': _conv2,
'pool2': _pool2,
'pool_dr2': _pool_dr2,
'dense1': _dense1,
'fc1': _fc1,
'fc_dr1': _fc_dr1,
'out': _out
}
return out
```
When I try to run this, I get an error: `"tensorflow.python.framework.errors.InvalidArgumentError: logits and labels must be same size: logits_size=[6,16] labels_size=[1,16]"`
on the line `cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(_pred, y))`
I've tried changing the wd1 weight values around, and apart from it saying that the requested shape requires a multiple of xxx, it just changes the values in the brackets.
These values (especially 6) seem very arbitrary; I don't know where they come from. It would be nice for someone to explain how the number of neurons in an FC layer is chosen, as that also seems a bit arbitrary.
Thanks
EDIT: My full code <https://gist.github.com/EricZeiberg/f0b138d859b9ed00ce045dc6b341e0a7>
|
2016/06/14
|
[
"https://Stackoverflow.com/questions/37803628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1578098/"
] |
Given your code (and guessing what is missing in it), I think you have these parameters and results (correct me if wrong):
* `batch_size`: 1
* `num_classes`: 16
* labels `y`: type int, shape `[batch_size, 1]`
* outputs `_pred`: type float32, **should be** shape `[batch_size, num_classes]`
---
In your code, you only use 2 max-pooling layers, which reduce the input feature map from `[1, 72, 72, 1]` to `[1, 18, 18, 64]`.
At this step, you should write:
```
# Vectorize
_dense1 = tf.reshape(_pool_dr2, [1, 18*18*64])
```
You also should replace your matrix `wd1` with:
```
'wd1': tf.Variable(tf.random_normal([18*18*64, 1024], stddev=0.1))
```
---
In general, in these situations you need to print each shape, step by step, and see **for yourself** where the shape doesn't correspond to what you expect.
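For example, a minimal sketch of such shape-printing inside `conv_basic` (TF 1.x static shapes; the commented values follow from the layer arithmetic above):
```
# sprinkle these after each op inside conv_basic:
print('input_r:', _input_r.get_shape())   # (?, 72, 72, 1)
print('pool1  :', _pool_dr1.get_shape())  # (?, 36, 36, 32) after the 2x2 max pool
print('pool2  :', _pool_dr2.get_shape())  # (?, 18, 18, 64) after the 2x2 max pool
print('dense1 :', _dense1.get_shape())    # (?, 20736), i.e. 18*18*64
```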
|
It's hard to tell from what you provided, but it seems like you feed inputs with a batch size of 6 while only providing one label for them. Where does your data come from?
| 8,489
|
20,133,316
|
I have the following code which works:
```
import xml.etree.ElementTree as etree
def get_path(self):
parent = ''
path = self.tag
sibs = self.parent.findall(self.tag)
if len(sibs) > 1:
path = path + '[%s]'%(sibs.index(self)+1)
current_node = self
while True:
parent = current_node.parent
if not parent:
break
ptag = parent.tag
path = ptag + '/' + path
current_node = parent
return path
etree._Element.get_path = get_path
etree._Element.parent = None
class XmlDoc(object):
def __init__(self):
self.root = etree.Element('root')
self.doc = etree.ElementTree(self.root)
def SubElement(self, parent, tag):
new_node = etree.SubElement(parent, tag)
new_node.parent = parent
return new_node
doc = XmlDoc()
a1 = doc.SubElement(doc.root, 'a')
a2 = doc.SubElement(doc.root, 'a')
b = doc.SubElement(a2, 'b')
print etree.tostring(doc.root), '\n'
print 'element:'.ljust(15), a1
print 'path:'.ljust(15), a1.get_path()
print 'parent:'.ljust(15), a1.parent, '\n'
print 'element:'.ljust(15), a2
print 'path:'.ljust(15), a2.get_path()
print 'parent:'.ljust(15), a2.parent, '\n'
print 'element:'.ljust(15), b
print 'path:'.ljust(15), b.get_path()
print 'parent:'.ljust(15), b.parent
```
Which results in this output:
```
<root><a /><a><b /></a></root>
element: <Element a at 87e3d6c>
path: root/a[1]
parent: <Element root at 87e3cec>
element: <Element a at 87e3fac>
path: root/a[2]
parent: <Element root at 87e3cec>
element: <Element b at 87e758c>
path: root/a/b
parent: <Element a at 87e3fac>
```
Now this is drastically changed from the original code, but I'm not allowed to share that.
The functions aren't too inefficient, but there is a dramatic performance decrease when switching from cElementTree to ElementTree, which I expected. However, from my experiments it seems like monkey patching cElementTree is impossible, so I had to switch.
What I need to know is whether there is either a way to add a method to cElementTree or if there is a more efficient way of doing this so I can gain some of my performance back.
Just to let you know I am thinking of as a last resort implementing selected static typing and to compile with cython, but for certain reasons I really don't want to do that.
Thanks for taking a look.
EDIT: Sorry for the wrong use of the term late binding. Sometimes my vocabulary leaves something to be desired. What I meant was "monkey patching."
EDIT: @Corley Brigman, Guy: Thank you very much for your answers which do address the question, however (and I should have stated this in the original post) I had completed this project before using lxml which is a wonderful library that made coding a breeze but due to new requirements (This needs to be implemented as an addon to a product called Splunk) which ties me to the python 2.7 interpreter shipped with Splunk and eliminates the possibility of adding third party libraries with the exception of django.
|
2013/11/21
|
[
"https://Stackoverflow.com/questions/20133316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2723675/"
] |
If you *need* parents, use lxml instead - it tracks parents internally, and is still C behind the scenes so it's very fast.
However... be aware that there is a tradeoff in tracking parents, in that a given node can only have a single parent. This isn't usually a problem, however, if you do something like the following, you will get different results in cElementTree vs. lxml:
```
p = Element('x')
q = Element('y')
r = SubElement(p, 'z')
q.append(r)
```
cElementTree:
```
dump(p)
<x><z /></x>
dump(q)
<y><z /></y>
```
lxml:
```
dump(p)
<x/>
dump(q)
<y>
<z/>
</y>
```
Since parents are tracked, a node can only have one parent, obviously. As you can see, the element `r` is *copied* to both trees in cElementTree, and *reparented/moved* in lxml.
There are probably only a small number of use cases where this matters, but something to keep in mind.
|
You can just use xpath, for example:
```
import lxml.html
def get_path(doc):
for e in doc.xpath("//b//*"):
print e
```
should work, didn't test it though...
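Incidentally, lxml's `ElementTree` also provides `getpath()`, which yields exactly this kind of indexed path directly. A small sketch (standard lxml API, untested against the original data):
```
from lxml import etree

root = etree.fromstring('<root><a/><a><b/></a></root>')
tree = etree.ElementTree(root)

for el in root.iter():
    print(tree.getpath(el))
# /root
# /root/a[1]
# /root/a[2]
# /root/a[2]/b
```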
| 8,490
|
66,320,831
|
TLDR;
=====
Is it possible to configure the Beam portable runner with the Spark configuration? More precisely, is it possible to configure `spark.driver.host` in the Portable Runner?
Motivation
==========
Currently, we have Airflow deployed in a Kubernetes cluster, and since we aim to use TensorFlow Extended we need to use Apache Beam. For our use case Spark would be the appropriate runner, and as Airflow and TensorFlow are coded in Python we would need to use Apache Beam's Portable Runner (<https://beam.apache.org/documentation/runners/spark/#portability>).
The problem
===========
The portable runner creates the Spark context inside its container and does not leave room for the driver DNS configuration, making the executors inside the worker pods unable to communicate with the driver (the job server).
Setup
=====
1. Following the Beam documentation, the job server was deployed in the same pod as Airflow, to use the local network between these two containers.
Job server config:
```yaml
- name: beam-spark-job-server
image: apache/beam_spark_job_server:2.27.0
args: ["--spark-master-url=spark://spark-master:7077"]
```
Job server/airflow service:
```yaml
apiVersion: v1
kind: Service
metadata:
name: airflow-scheduler
labels:
app: airflow-k8s
spec:
type: ClusterIP
selector:
app: airflow-scheduler
ports:
- port: 8793
protocol: TCP
targetPort: 8793
name: scheduler
- port: 8099
protocol: TCP
targetPort: 8099
name: job-server
- port: 7077
protocol: TCP
targetPort: 7077
name: spark-master
- port: 8098
protocol: TCP
targetPort: 8098
name: artifact
- port: 8097
protocol: TCP
targetPort: 8097
name: java-expansion
```
The ports 8097, 8098 and 8099 are related to the job server, 8793 to Airflow, and 7077 to the Spark master.
Development/Errors
==================
1. When testing a simple beam example `python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK` from the airflow container I get the following response on the airflow pod:
```
Defaulting container name to airflow-scheduler.
Use 'kubectl describe pod/airflow-scheduler-local-f685b5bc7-9d7r6 -n airflow-main-local' to see all of the containers in this pod.
airflow@airflow-scheduler-local-f685b5bc7-9d7r6:/opt/airflow$ python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:oauth2client.client:Timeout attempting to reach GCE metadata service.
WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Connecting anonymously.
INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:35837
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:root:Default Python SDK image for environment is apache/beam_python3.7_sdk:2.27.0
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
with Pipeline() as p:
p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
```
And the worker log:
```
21/02/19 19:50:00 INFO Worker: Asked to launch executor app-20210219194804-0000/47 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:00 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:00 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "47" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
21/02/19 19:50:02 INFO Worker: Executor app-20210219194804-0000/47 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 47
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=47)
21/02/19 19:50:02 INFO Worker: Asked to launch executor app-20210219194804-0000/48 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:02 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "48" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
21/02/19 19:50:04 INFO Worker: Executor app-20210219194804-0000/48 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 48
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=48)
21/02/19 19:50:04 INFO Worker: Asked to launch executor app-20210219194804-0000/49 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:04 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:04 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "49" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
.
.
.
```
As we can see, the executors are exiting constantly, and as far as I know this issue is caused by the missing communication between the executor and the driver (the job server in this case). Also, the "--driver-url" is translated to the driver pod name, using the random port from "-Dspark.driver.port".
As we can't define the name of the service, the workers try to use the original hostname from the driver and a randomly generated port. As the configuration comes from the driver, changing the default conf files on the worker/master doesn't produce any results.
Using [this answer](https://stackoverflow.com/questions/64488060/apache-beam-cannot-increase-executor-memory-it-is-fixed-at-1024m-despite-usi) as an example, I tried to use the env variable `SPARK_PUBLIC_DNS` in the job server but this didn't result in any changes in the worker logs.
Obs
---
Running a Spark job directly in Kubernetes with
`kubectl run spark-base --rm -it --labels="app=spark-client" --image bde2020/spark-base:2.4.5-hadoop2.7 -- bash ./spark/bin/pyspark --master spark://spark-master:7077 --conf spark.driver.host=spark-client`
having the service:
```yaml
apiVersion: v1
kind: Service
metadata:
name: spark-client
spec:
selector:
app: spark-client
clusterIP: None
```
I get a full working pyspark shell. If I omit the --conf parameter I get the same behavior as the first setup (exiting executors indefinitely)
```
21/02/19 20:21:02 INFO Worker: Executor app-20210219202050-0002/4 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 4
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219202050-0002, execId=4)
21/02/19 20:21:02 INFO Worker: Asked to launch executor app-20210219202050-0002/5 for Spark shell
21/02/19 20:21:02 INFO SecurityManager: Changing view acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 20:21:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 20:21:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=46161" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@spark-base:46161" "--executor-id" "5" "--hostname" "172.18.0.20" "--cores" "1" "--app-id" "app-20210219202050-0002" "--worker-url" "spark://Worker@172.18.0.20:45151"
```
|
2021/02/22
|
[
"https://Stackoverflow.com/questions/66320831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13454548/"
] |
I have three solutions to choose from depending on your deployment requirements. In order of difficulty:
1. Use the Spark "uber jar" job server. This starts an embedded job server inside the Spark master, instead of using a standalone job server in a container. This would simplify your deployment a lot, since you would not need to start the `beam_spark_job_server` container at all.
```sh
python -m apache_beam.examples.wordcount \
--output ./data_test/ \
--runner=SparkRunner \
--spark_submit_uber_jar \
--spark_master_url=spark://spark-master:7077 \
--environment_type=LOOPBACK
```
2. You can pass the properties through a Spark configuration file. Create the Spark configuration file, and add `spark.driver.host` and whatever other properties you need. In the `docker run` command for the job server, mount that configuration file to the container, and set the `SPARK_CONF_DIR` environment variable to point to that directory.
3. If neither of those works for you, you can alternatively build your own customized version of the job server container. Pull the Beam source from Github. Check out the release branch you want to use (e.g. `git checkout origin/release-2.28.0`). Modify the entrypoint [spark-job-server.sh](https://github.com/apache/beam/blob/92ddb5c0bf61174e2153eef2a294a61bc924156f/runners/spark/job-server/container/spark-job-server.sh#L28) and set `-Dspark.driver.host=x` there. Then build the container using `./gradlew :runners:spark:job-server:container:docker -Pdocker-repository-root="your-repo" -Pdocker-tag="your-tag"`.
|
Let me revise the answer. The job server needs to be able to communicate with the workers, and vice versa. The constant exiting is due to this. You need to configure them such that they can communicate; a k8s headless service is able to solve this.
A reference to a workable example is at <https://github.com/cometta/python-apache-beam-spark>. If it is useful for you, you can help me by starring the repository.
| 8,491
|
55,799,546
|
I am trying to make a simple app in kivy (a python package) that gets text from a TextInput field and, when a button is clicked, returns text in Hebrew that is displayed in another TextInput. Everything seems to be working just fine, but I've run into the problem that a TextInput field in Kivy cannot show the Hebrew text I am trying to display.
This is what I get:
[](https://i.stack.imgur.com/tDSzF.png)
As you can see, it shows this weird text instead of the text I need to show...
My code, My main script:
```
import kivy
from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.label import Label
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty
import getData
class MainScreen(Widget):
ttc = ObjectProperty(None)
ct = ObjectProperty(None)
def btn(self):
self.ct.text = getData.HE_EN(text=self.ttc.text.lower())
pass
class MyApp(App):
def build(self):
return MainScreen()
if __name__ == "__main__":
MyApp().run()
```
My "my.kv" file:
```
<MainScreen>:
ttc: ttc
ct: ct
GridLayout:
size: root.width, root.height
cols: 1
TextInput:
text: ""
id: ttc
Button:
text: "CONVERT"
on_press: root.btn()
TextInput:
text: "CONVERTED TEXT"
id: ct
```
There is no need to show the getData.py script that returns the text in Hebrew because it doesn't really matter...
The expected result is to get the text I want in the TextInput, even though I don't manage to.
Please help me fix my issue, I really do need that...
|
2019/04/22
|
[
"https://Stackoverflow.com/questions/55799546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10139792/"
] |
Okay! So it didn't take long, because someone on a discord server helped me. All I had to do was switch the text area font, because the previous one didn't have Hebrew glyphs. To do it, I downloaded the font "Arial" and added it to the folder with the main script. I imported `from kivy.core.text import LabelBase` and then registered the font: `LabelBase.register(name="Arial", fn_regular="Arial.ttf")`. To tell the TextInput that I want to use that font, I just added 'font_name: "Arial"' under the widget in my .kv file, and that solved the problem.
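For reference, a minimal sketch of that fix, assuming `Arial.ttf` sits in the same folder as the main script:
```
from kivy.core.text import LabelBase

# register the bundled TTF under the name used in the .kv file
LabelBase.register(name="Arial", fn_regular="Arial.ttf")
```
Then, in the .kv file, add `font_name: "Arial"` to each `TextInput` that needs to display Hebrew.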
|
You should also reverse the text that the user types; I did this:
```
class HebrowTextInput(TextInput):
def __init__(self, **kwargs):
super(HebrowTextInput, self).__init__(font_name='DejaVuSans.ttf', halign="right", **kwargs)
self.multiline = False
def keyboard_on_key_down(self, window, keycode, text, modifiers):
if keycode[1] == "backspace":
self.text = self.text[1:]
def insert_text(self, theText, from_undo=False):
self.text = theText + self.text
```
| 8,492
|
34,178,172
|
I have created a table:
```
cursor.execute("CREATE TABLE articles (title varchar PRIMARY KEY, pubDate timestamp with time zone);")
```
I inserted a timestamp like this:
```
timestamp = date_datetime.strftime("%Y-%m-%d %H:%M:%S+00")
cursor.execute("INSERT INTO articles VALUES (%s, %s)",
(title, timestamp))
```
When I run a SELECT statement to retrieve the timestamps, it returns tuples instead:
```
cursor.execute("SELECT pubDate FROM articles")
rows = cursor.fetchall()
for row in rows:
print(row)
```
This is the returned row:
```
(datetime.datetime(2015, 12, 9, 6, 47, 4, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=660, name=None)),)
```
How can I retrieve the datetime object directly?
I've looked up a few other related questions (see [here](https://stackoverflow.com/questions/32184465/how-to-cast-date-to-string-in-psycopg2) and [here](https://stackoverflow.com/questions/28724968/python-tuple-returns-datetime-datetime)) but can't seem to find the answer. Probably overlooking something simple here but any help would be much appreciated!
|
2015/12/09
|
[
"https://Stackoverflow.com/questions/34178172",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4772958/"
] |
Python's `datetime` objects are automatically [adapted](http://initd.org/psycopg/docs/usage.html#python-types-adaptation) into SQL by `psycopg2`, you don't need to stringify them:
```
cursor.execute("INSERT INTO articles VALUES (%s, %s)",
(title, datetime_obj))
```
To read the rows returned by a `SELECT` you can use the cursor as an iterator, unpacking the row tuples as needed:
```
cursor.execute("SELECT pubDate FROM articles")
for pub_date, in cursor: # note the comma after `pub_date`
print(pub_date)
```
|
After some more googling I think I figured it out. If I change:
```
print(row)
```
to
```
print(row[0])
```
It actually works. I guess this is because row is a tuple and this is the way to unpack the tuple correctly.
| 8,493
|
31,581,902
|
How do I clone with SSL checking disabled, using the GitPython library? The following code ...
```
import git
x = git.Repo.clone_from('https://xxx', '/home/xxx/lala')
```
... yields this error:
```
Error: fatal: unable to access 'xxx': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
```
I know about "export GIT\_SSL\_NO\_VERIFY=1", but how do I achieve that with the python library?
|
2015/07/23
|
[
"https://Stackoverflow.com/questions/31581902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4319148/"
] |
The two following methods have been tested with GitPython 2.0.8 but should be working at least since 1.0.2 (from the doc).
As suggested by @Byron:
```py
git.Repo.clone_from(
'https://example.net/path/to/repo.git',
'local_destination',
branch='master', depth=1,
env={'GIT_SSL_NO_VERIFY': '1'},
)
```
As suggested by [@Christopher](https://stackoverflow.com/a/11622001/248390):
```py
git.Repo.clone_from(
'https://example.net/path/to/repo.git',
'local_destination',
branch='master', depth=1,
config='http.sslVerify=false',
)
```
|
It seems easiest to pass the `GIT_SSL_NO_VERIFY` environment variable to all git invocations. Unfortunately [`Git.update_environment(...)`](http://gitpython.readthedocs.org/en/stable/reference.html?highlight=update_environment#git.cmd.Git.update_environment) can only be used on an existing instance, which is why you would have to manipulate python's environment like so:
```
import git
import os
os.environ['GIT_SSL_NO_VERIFY'] = "1"
repo = git.Repo.clone_from('https://xxx', '/home/xxx/lala')
```
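If you'd rather not leave the variable set for the rest of the process, a small sketch of scoping it:
```
import os
import git

os.environ['GIT_SSL_NO_VERIFY'] = "1"
try:
    repo = git.Repo.clone_from('https://xxx', '/home/xxx/lala')
finally:
    # remove the flag again so later git calls verify SSL as usual
    del os.environ['GIT_SSL_NO_VERIFY']
```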
| 8,496
|
45,994,973
|
I have a Numpy one-dimensional array of 1s and 0s, e.g.
```
a = np.array([0,1,1,1,0,0,0,0,0,0,0,1,0,1,1,0,0,0,1,1,0,0])
```
I want to count the continuous 0s and 1s in the array and output something like this
```
[1,3,7,1,1,2,3,2,2]
```
What I do at the moment is
```
np.diff(np.where(np.abs(np.diff(a)) == 1)[0])
```
and it outputs
```
array([3, 7, 1, 1, 2, 3, 2])
```
As you can see, it is missing the first count, 1.
I've tried `np.split` and then taking the sizes of each segment, but it does not seem optimal.
Is there a more elegant, "pythonic" solution?
|
2017/09/01
|
[
"https://Stackoverflow.com/questions/45994973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1947744/"
] |
Here's one vectorized approach -
```
np.diff(np.r_[0,np.flatnonzero(np.diff(a))+1,a.size])
```
Sample run -
```
In [208]: a = np.array([0,1,1,1,0,0,0,0,0,0,0,1,0,1,1,0,0,0,1,1,0,0])
In [209]: np.diff(np.r_[0,np.flatnonzero(np.diff(a))+1,a.size])
Out[209]: array([1, 3, 7, 1, 1, 2, 3, 2, 2])
```
Faster one with `boolean` concatenation -
```
np.diff(np.flatnonzero(np.concatenate(([True], a[1:]!= a[:-1], [True] ))))
```
**Runtime test**
For the setup, let's create a bigger dataset with islands of `0s` and `1s` and for a fair benchmarking as with the given sample, let's have the island lengths vary between `1` and `7` -
```
In [257]: n = 100000 # thus would create 100000 pair of islands
In [258]: a = np.repeat(np.arange(n)%2, np.random.randint(1,7,(n)))
# Approach #1 proposed in this post
In [259]: %timeit np.diff(np.r_[0,np.flatnonzero(np.diff(a))+1,a.size])
100 loops, best of 3: 2.13 ms per loop
# Approach #2 proposed in this post
In [260]: %timeit np.diff(np.flatnonzero(np.concatenate(([True], a[1:]!= a[:-1], [True] ))))
1000 loops, best of 3: 1.21 ms per loop
# @Vineet Jain's soln
In [261]: %timeit [ sum(1 for i in g) for k,g in groupby(a)]
10 loops, best of 3: 61.3 ms per loop
```
|
Using `groupby` from `itertools`
```
from itertools import groupby
a = np.array([0,1,1,1,0,0,0,0,0,0,0,1,0,1,1,0,0,0,1,1,0,0])
grouped_a = [ sum(1 for i in g) for k,g in groupby(a)]
```
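With the sample array from the question, this gives:
```
print(grouped_a)  # [1, 3, 7, 1, 1, 2, 3, 2, 2]
```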
| 8,497
|
14,241,239
|
Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking on the `submit button`, feeding data into the `text field` or selecting values from the `drop-down field`, I'd like it to highlight that element, to show the script runner that the script is doing what he/she wants.
***EDIT***
I am using selenium-webdriver with python to automate some web based work on a third party application.
Thanks
|
2013/01/09
|
[
"https://Stackoverflow.com/questions/14241239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767755/"
] |
This is something you need to do with javascript, not python.
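Since the question mentions Selenium WebDriver, a common pattern is to inject that JavaScript from Python via `execute_script`. A minimal sketch (the element id `submit` is hypothetical):
```
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")

element = driver.find_element_by_id("submit")  # hypothetical locator

# inject JavaScript that draws a red border around the element
driver.execute_script("arguments[0].style.border = '3px solid red';", element)
```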
|
***[NOTE: I'm leaving this answer for historical purposes but readers should note that the original question has changed from concerning itself with Python to concerning itself with Selenium]***
Assuming you're talking about a browser based application being served from a Python back-end server (and it's just a guess since there's *no information* in your post):
If you are constructing a response in your Python back-end, wrap the stuff that you want to highlight in a `<span>` tag and set a `class` on the span tag. Then, in your CSS define that class with whatever highlighting properties you want to use.
However, if you want to accomplish this highlighting in an already-loaded browser page without generating new HTML on the back end and returning that to the browser, then Python (on the server) has no knowledge of or ability to affect the web page in browser. You must accomplish this using Javascript or a Javascript library or framework in the browser.
| 8,500
|
26,691,784
|
Example:
```
class Planet(Enum):
MERCURY = (mass: 3.303e+23, radius: 2.4397e6)
def __init__(self, mass, radius):
self.mass = mass # in kilograms
self.radius = radius # in meters
```
Ref: <https://docs.python.org/3/library/enum.html#planet>
Why do I want to do this? If there are a few primitive types (int, bool) in the constructor list, it would be nice to use named arguments.
|
2014/11/01
|
[
"https://Stackoverflow.com/questions/26691784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/257299/"
] |
While you can't use named arguments the way you describe with enums, you can get a similar effect with a [`namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple) mixin:
```
from collections import namedtuple
from enum import Enum
Body = namedtuple("Body", ["mass", "radius"])
class Planet(Body, Enum):
MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
VENUS = Body(mass=4.869e+24, radius=6.0518e6)
EARTH = Body(mass=5.976e+24, radius=3.3972e6)
# ... etc.
```
... which to my mind is cleaner, since you don't have to write an `__init__` method.
Example use:
```
>>> Planet.MERCURY
<Planet.MERCURY: Body(mass=3.303e+23, radius=2439700.0)>
>>> Planet.EARTH.mass
5.976e+24
>>> Planet.VENUS.radius
6051800.0
```
Note that, as per [the docs](https://docs.python.org/3/library/enum.html#others), "mix-in types must appear before `Enum` itself in the sequence of bases".
|
The accepted answer by @zero-piraeus can be slightly extended to allow default arguments as well. This is very handy when you have a large enum with most entries having the same value for an element.
```
class Body(namedtuple('Body', "mass radius moons")):
def __new__(cls, mass, radius, moons=0):
return super().__new__(cls, mass, radius, moons)
def __getnewargs__(self):
return (self.mass, self.radius, self.moons)
class Planet(Body, Enum):
MERCURY = Body(mass=3.303e+23, radius=2.4397e6)
VENUS = Body(mass=4.869e+24, radius=6.0518e6)
EARTH = Body(5.976e+24, 3.3972e6, moons=1)
```
Beware: pickling will not work without `__getnewargs__`.
```
class Foo:
def __init__(self):
self.planet = Planet.EARTH # pickle error in deepcopy
from copy import deepcopy
f1 = Foo()
f2 = deepcopy(f1) # pickle error here
```
| 8,501
|
74,542,597
|
I have a number of xml files whose format is:
```
<objects>
<object>
<record>
<invoice_source>EMAIL</invoice_source>
<invoice_capture_date>2022-11-18</invoice_capture_date>
<document_type>INVOICE</document_type>
<data_capture_provider_code>00001</data_capture_provider_code>
<data_capture_provider_reference>1264</data_capture_provider_reference>
<document_capture_provide_code>00002</document_capture_provide_code>
<document_capture_provider_ref>1264</document_capture_provider_ref>
<rows/>
</record>
</object>
</objects>
```
There are two wrapper elements (`<objects>` and `<object>`) in this xml. I want to remove one of them, so that the xml looks like this:
```
<objects>
<record>
<invoice_source>EMAIL</invoice_source>
<invoice_capture_date>2022-11-18</invoice_capture_date>
<document_type>INVOICE</document_type>
<data_capture_provider_code>00001</data_capture_provider_code>
<data_capture_provider_reference>1264</data_capture_provider_reference>
<document_capture_provide_code>00002</document_capture_provide_code>
<document_capture_provider_ref>1264</document_capture_provider_ref>
<rows/>
</record>
</objects>
```
I have a folder full of these files. I want to do it using python. Is there any way?
|
2022/11/23
|
[
"https://Stackoverflow.com/questions/74542597",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20397498/"
] |
So here is what I would do.
Instead of controlling the stamina in multiple places and having back-and-forth references (= dependencies) between all your scripts, I would rather keep this authority within the `PlayerController`.
Your `StaminaBar` component should be purely **listening** and visualizing the current value without having the authority to modify it.
The next step would be to decide on a general code structure:
* Who is responsible for what?
* Who knows / controls what?
There are many possible answers to those, but for now, in this specific case:
* You can either say the `PlayerController` "knows" the `StaminaBar` just like it also knows the `InputManager` and can't live without both
* Or you could decouple them and let the `PlayerController` work without the visualization via the `StaminaBar`, rather letting the `StaminaBar` listen to the value and just display it ... or not, if you want to remove or change this later on
Personally, I would go with the second, so I will try to give you an example of how I would deal with this:
```
public class PlayerController : MonoBehaviour
{
[Header("Own References")]
[SerializeField] private CharacterController _controller;
[Header("Scene References")]
[SerializeField] private Transform _cameraTransform;
[SerializeField] private InputManager _inputManager;
    // In general, always make your stuff as encapsulated as possible
// -> nobody should be able to change these except you via the Inspector
// (Values you are anyway not gonna change at all you could also convert to "const")
[Header("Settings")]
[SerializeField] private float _maxHealth = 100f;
[SerializeField] private float _maxStamina = 100f;
[SerializeField] private float _staminaDrainPerSecond = 2f;
[SerializeField] private float _secondsDelayBeforeStaminaRegen = 1f;
[SerializeField] private float _staminaRegenPerSecond = 2f;
[SerializeField] private float _playerSpeed = 1f;
[SerializeField] private float _playerRunSpeed = 2f;
[SerializeField] private float _jumpHeight = 1f;
[SerializeField] private float _gravityValue = -9.81f;
    // Your runtime values
private float _staminaRegenDelayTimer;
private float _currentHealt;
private float _currentStamina;
// You only need a single float for this
private float _currentYVelocity;
// EVENTS we expose so other classes can react to those
public UnityEvent OnDeath;
public UnityEvent<float> OnHealthChanged;
public UnityEvent<float> OnStaminaChanged;
// Provide public read-only access to the settings so your visuals can access those for their setup
public float MaxHealth => _maxHealth;
public float MaxStamina => _maxStamina;
// And then use properties for your runtime values
// whenever you set the value you do additional stuff like cleaning the value and invoke according events
public float currentHealth
{
get => _currentHealt;
private set
{
_currentHealt = Mathf.Clamp(value, 0, _maxHealth);
OnHealthChanged.Invoke(_currentHealt);
if (value <= 0f)
{
OnDeath.Invoke();
}
}
}
public float currentStamina
{
get => _currentStamina;
private set
{
_currentStamina = Mathf.Clamp(value, 0, _maxStamina);
OnStaminaChanged.Invoke(_currentStamina);
}
}
private void Awake()
{
        // As a rule of thumb, to avoid ordering issues I usually initialize everything I can in Awake
if (!_controller) _controller = GetComponent<CharacterController>();
currentHealth = MaxHealth;
currentStamina = MaxStamina;
}
private void Start()
{
        // in Start, do the things where you depend on others already being initialized
if (!_inputManager) _inputManager = InputManager.Instance;
if (!_cameraTransform) _cameraTransform = Camera.main.transform;
}
private void Update()
{
UpdateStamina();
UpdateHorizontalMovement();
UpdateVerticalMovement();
}
private void UpdateStamina()
{
if (_inputManager.IsRunning)
{
// drain your stamina -> also informs all listeners
currentStamina -= _staminaDrainPerSecond * Time.deltaTime;
// reset the regen timer
_staminaRegenDelayTimer = _secondsDelayBeforeStaminaRegen;
}
else
{
// only if not pressing run start the regen timer
if (_staminaRegenDelayTimer > 0)
{
_staminaRegenDelayTimer -= Time.deltaTime;
}
else
{
// once timer is finished start regen
currentStamina += _staminaRegenPerSecond * Time.deltaTime;
}
}
}
private void UpdateHorizontalMovement()
{
var movement = _inputManager.PlayerMovement;
var move = _cameraTransform.forward * movement.y + _cameraTransform.right * movement.x;
move.y = 0f;
move *= _inputManager.IsRunning && currentStamina > 0 ? _playerRunSpeed : _playerSpeed;
_controller.Move(move * Time.deltaTime);
}
private void UpdateVerticalMovement()
{
if (_controller.isGrounded)
{
if (_inputManager.JumpedThisFrame)
{
_currentYVelocity += Mathf.Sqrt(_jumpHeight * -3.0f * _gravityValue);
}
else if (_currentYVelocity < 0)
{
_currentYVelocity = 0f;
}
}
else
{
_currentYVelocity += _gravityValue * Time.deltaTime;
}
_controller.Move(Vector3.up * _currentYVelocity * Time.deltaTime);
}
}
```
And then your `StaminaBar` shrinks down to really only being a display. The `PlayerController` doesn't care about/even know it exists and can fully work without it.
```
public class StaminaBar : MonoBehaviour
{
[SerializeField] private Slider _staminaSlider;
[SerializeField] private PlayerController _playerController;
private void Awake()
{
// or wherever you get the reference from
if (!_playerController) _playerController = FindObjectOfType<PlayerController>();
// poll the setting from the player
_staminaSlider.maxValue = _playerController.MaxStamina;
// attach a callback to the event
_playerController.OnStaminaChanged.AddListener(OnStaminaChanged);
// just to be sure invoke the callback once immediately with the current value
// so we don't have to wait for the first actual event invocation
OnStaminaChanged(_playerController.currentStamina);
}
private void OnDestroy()
{
if(_playerController) _playerController.OnStaminaChanged.RemoveListener(OnStaminaChanged);
}
// This will now be called whenever the stamina has changed
private void OnStaminaChanged(float stamina)
{
_staminaSlider.value = stamina;
}
}
```
And just for completeness - I also refactored your `InputManager` a bit on the fly ^^
```
public class InputManager : MonoBehaviour
{
[Header("Own references")]
[SerializeField] private Transform _bulletParent;
[SerializeField] private Transform _barrelTransform;
[Header("Scene references")]
[SerializeField] private Transform _cameraTransform;
// By using the correct component right away you can later skip "GetComponent"
[Header("Assets")]
[SerializeField] private BulletController _bulletPrefab;
[Header("Settings")]
[SerializeField] private float _bulletHitMissDistance = 25f;
[SerializeField] private float _damage = 100;
[SerializeField] private float _impactForce = 30;
[SerializeField] private float _fireRate = 8f;
public static InputManager Instance { get; private set; }
// Again I would use properties here
// You don't want anything else to set the "isRunning" flag
// And the others don't need to be methods either
public bool IsRunning { get; private set; }
public Vector2 PlayerMovement => _playerControls.Player.Movement.ReadValue<Vector2>();
public Vector2 MouseDelta => _playerControls.Player.Look.ReadValue<Vector2>();
public bool JumpedThisFrame => _playerControls.Player.Jump.triggered;
private Coroutine _fireCoroutine;
private PlayerControls _playerControls;
private WaitForSeconds _rapidFireWait;
private void Awake()
{
if (Instance != null && Instance != this)
{
Destroy(gameObject);
}
else
{
Instance = this;
}
_playerControls = new PlayerControls();
//Cursor.visible = false;
_rapidFireWait = new WaitForSeconds(1 / _fireRate);
_cameraTransform = Camera.main.transform;
_playerControls.Player.RunStart.performed += _ => Running();
_playerControls.Player.RunEnd.performed += _ => RunningStop();
_playerControls.Player.Shoot.started += _ => StartFiring();
_playerControls.Player.Shoot.canceled += _ => StopFiring();
}
private void OnEnable()
{
_playerControls.Enable();
}
private void OnDisable()
{
_playerControls.Disable();
}
private void StartFiring()
{
_fireCoroutine = StartCoroutine(RapidFire());
}
private void StopFiring()
{
if (_fireCoroutine != null)
{
StopCoroutine(_fireCoroutine);
_fireCoroutine = null;
}
}
private void Shooting()
{
var bulletController = Instantiate(_bulletPrefab, _barrelTransform.position, Quaternion.identity, _bulletParent);
if (Physics.Raycast(_cameraTransform.position, _cameraTransform.forward, out var hit, Mathf.Infinity))
{
bulletController.target = hit.point;
bulletController.hit = true;
if (hit.transform.TryGetComponent<Enemy>(out var enemy))
{
enemy.TakeDamage(_damage);
}
if (hit.rigidbody != null)
{
hit.rigidbody.AddForce(-hit.normal * _impactForce);
}
}
else
{
bulletController.target = _cameraTransform.position + _cameraTransform.forward * _bulletHitMissDistance;
bulletController.hit = false;
}
}
private IEnumerator RapidFire()
{
while (true)
{
Shooting();
yield return _rapidFireWait;
}
}
private void Running()
{
IsRunning = true;
}
private void RunningStop()
{
IsRunning = false;
}
}
```
|
You're decreasing and increasing the stamina in the same scope. I think you should let the stamina be drained while sprint is pressed and start regenerating only once it is released.
| 8,507
|
59,573,454
|
I am trying to find a simple way to calculate soft cosine similarity between two sentences.
Here is my attempt so far:
```
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
print(softcossim(sent_1, sent_2, similarity_matrix))
```
I'm unable to understand the `similarity_matrix` part. Please help me construct it, and from there the soft cosine similarity, in python.
|
2020/01/03
|
[
"https://Stackoverflow.com/questions/59573454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4763959/"
] |
As of the current version of Gensim, 3.8.3, some of the method calls from both the question and previous answers have been deprecated. Those deprecated functions have been removed in the 4.0.0 beta. Can't seem to provide code in a reply to @EliadL, so adding a new comment.
The current method for solving this problem in Gensim 3.8.3 and 4.0.0 is as follows:
```py
import gensim.downloader as api
from gensim import corpora
from gensim.similarities import SparseTermSimilarityMatrix, WordEmbeddingSimilarityIndex
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)
# Prepare the similarity matrix
similarity_index = WordEmbeddingSimilarityIndex(fasttext_model300)
similarity_matrix = SparseTermSimilarityMatrix(similarity_index, dictionary)
# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)
# Compute soft cosine similarity
print(similarity_matrix.inner_product(sent_1, sent_2, normalized=True))
#> 0.68463486
```
For users of Gensim v. 3.8.3, I've also found this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/release-3.8.3/docs/notebooks/soft_cosine_tutorial.ipynb) to be helpful in understanding Soft Cosine Similarity and how to apply Soft Cosine Similarity using Gensim.
As of now, for users of Gensim 4.0.0 beta this [Notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/soft_cosine_tutorial.ipynb) is the one to look at.
|
Going by [this tutorial](https://www.machinelearningplus.com/nlp/gensim-tutorial/#18howtocomputesimilaritymetricslikecosinesimilarityandsoftcosinesimilarity):
```
import gensim.downloader as api
from gensim import corpora
from gensim.matutils import softcossim
sent_1 = 'Dravid is a cricket player and a opening batsman'.split()
sent_2 = 'Leo is a cricket player too He is a batsman,baller and keeper'.split()
# Download the FastText model
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
# Prepare a dictionary and a corpus.
documents = [sent_1, sent_2]
dictionary = corpora.Dictionary(documents)
# Prepare the similarity matrix
similarity_matrix = fasttext_model300.similarity_matrix(dictionary)
# Convert the sentences into bag-of-words vectors.
sent_1 = dictionary.doc2bow(sent_1)
sent_2 = dictionary.doc2bow(sent_2)
# Compute soft cosine similarity
print(softcossim(sent_1, sent_2, similarity_matrix))
#> 0.7909639717134869
```
| 8,508
|
510,972
|
How do I find out the name of the class used to create an instance of an object in Python?
I'm not sure if I should use the [`inspect`](https://docs.python.org/2/library/inspect.html "inspect — Inspect live objects") module or parse the `__class__` attribute.
|
2009/02/04
|
[
"https://Stackoverflow.com/questions/510972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18909/"
] |
```
class A:
pass
a = A()
str(a.__class__)
```
The sample code above (when input in the interactive interpreter) will produce `'__main__.A'` as opposed to `'A'` which is produced if the `__name__` attribute is invoked. By simply passing the result of `A.__class__` to the `str` constructor the parsing is handled for you. However, you could also use the following code if you want something more explicit.
```
"{0}.{1}".format(a.__class__.__module__,a.__class__.__name__)
```
This behavior can be preferable if you have classes with the same name defined in separate modules.
**The sample code provided above was tested in Python 2.7.5.**
|
In Python 2,
```
type(instance).__name__ != instance.__class__.__name__
# if class A is defined like
class A():
...
type(instance) == instance.__class__
# if class A is defined like
class A(object):
...
```
Example:
```
>>> class aclass(object):
... pass
...
>>> a = aclass()
>>> type(a)
<class '__main__.aclass'>
>>> a.__class__
<class '__main__.aclass'>
>>>
>>> type(a).__name__
'aclass'
>>>
>>> a.__class__.__name__
'aclass'
>>>
>>> class bclass():
... pass
...
>>> b = bclass()
>>>
>>> type(b)
<type 'instance'>
>>> b.__class__
<class __main__.bclass at 0xb765047c>
>>> type(b).__name__
'instance'
>>>
>>> b.__class__.__name__
'bclass'
>>>
```
| 8,511
|
70,014,480
|
I've been hosting my static site via a Google App Engine standard Python setup for years without a problem. Today I started seeing the error below. Note: there used to be a page on GCP explaining how to host a static page using python GAE standard, but I can't find it now. Is it perhaps the case that it's now recommended to use a bucket instead?
```
gunicorn.errors.HaltServer
Traceback (most recent call last):
  File "/layers/google.python.pip/pip/bin/gunicorn", line 8, in <module>
    sys.exit(run())
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/base.py", line 228, in run
    super().run()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 229, in run
    self.halt(reason=inst.reason, exit_status=inst.exit_status)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 342, in halt
    self.stop()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 393, in stop
    time.sleep(0.1)
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/layers/google.python.pip/pip/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer:
```
Here's my app.yaml file:
```
runtime: python38
service: webapp
handlers:
# site root -> app
- url: /
static_files: dist/index.html
upload: dist/index.html
expiration: "0m"
secure: always
# urls with no dot in them -> app
- url: /([^.]+?)\/?$ # urls
static_files: dist/index.html
upload: dist/index.html
expiration: "0m"
secure: always
# everything else
- url: /(.*)
static_files: dist/\1
upload: dist/(.*)
expiration: "0m"
secure: always
```
|
2021/11/18
|
[
"https://Stackoverflow.com/questions/70014480",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10458445/"
] |
This error only happened on Nov 17th and has not happened since, without any changes on my part. Perhaps it was related to something under the hood on Google App Engine's servers.
|
Note that you are using Python 3.8 as per your `app.yaml` file, and the document you have shared is for Python 2.7. As Python 2 is no longer supported, migrating from Python 2 to Python 3 runtime will help you remove the error.
The documentation [here](https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3) will help you to migrate to Python 3 standard runtime.
| 8,521
|
50,996,060
|
I'm trying to use ruamel.yaml to modify an AWS CloudFormation template on the fly using python. I added the following code to make safe_load work with CloudFormation functions such as `!Ref`. However, when I dump them out, values with !Ref (or any other function) get wrapped in quotes, and CloudFormation is not able to recognize that.
See example below:
```
import sys, json, io, boto3
import ruamel.yaml
def funcparse(loader, node):
node.value = {
ruamel.yaml.ScalarNode: loader.construct_scalar,
ruamel.yaml.SequenceNode: loader.construct_sequence,
ruamel.yaml.MappingNode: loader.construct_mapping,
}[type(node)](node)
node.tag = node.tag.replace(u'!Ref', 'Ref').replace(u'!', u'Fn::')
return dict([ (node.tag, node.value) ])
funcnames = [ 'Ref', 'Base64', 'FindInMap', 'GetAtt', 'GetAZs', 'ImportValue',
'Join', 'Select', 'Split', 'Split', 'Sub', 'And', 'Equals', 'If',
'Not', 'Or' ]
for func in funcnames:
ruamel.yaml.SafeLoader.add_constructor(u'!' + func, funcparse)
txt = open("/space/tmp/a.template","r")
base = ruamel.yaml.safe_load(txt)
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : "!Ref aaa",
"VpcPeeringConnectionId" : "!Ref bbb",
"yourname": "dfw"
}
}
ruamel.yaml.safe_dump(
base,
sys.stdout,
default_flow_style=False
)
```
The input file is like this:
```
foo:
bar: !Ref barr
aa: !Ref bb
```
The output is like this:
```
foo:
Resources:
RouteTableId: '!Ref aaa'
VpcPeeringConnectionId: '!Ref bbb'
yourname: dfw
name: abc
```
Notice that the `!Ref` values (e.g. '!Ref aaa') have been wrapped in single quotes. These won't be identified by CloudFormation. Is there a way to configure the dumper so that the output will be like:
```
foo:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
yourname: dfw
name: abc
```
Other things I have tried:
* pyyaml library, works the same
* Use Ref:: instead of !Ref, works the
same
|
2018/06/22
|
[
"https://Stackoverflow.com/questions/50996060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3209177/"
] |
Essentially you tweak the loader, to load tagged (scalar) objects as if they were mappings, with the tag the key and the value the scalar. But you don't do anything to distinguish the `dict` loaded from such a mapping from other dicts loaded from normal mappings, nor do you have any specific code to represent such a mapping to "get the tag back".
When you try to "create" a scalar with a tag, you just make a string starting with an exclamation mark, and that needs to get dumped quoted to distinguish it from **real** tagged nodes.
What obfuscates this all is that your example overwrites the loaded data by assigning to `base["foo"]`, so the only thing you can derive from the `safe_load`, and all your code before it, is that it doesn't throw an exception. I.e. if you leave out the lines starting with `base["foo"] = {` your output will look like:
```
foo:
aa:
Ref: bb
bar:
Ref: barr
```
And in that `Ref: bb` is not distinguishable from a normal dumped dict. If you want to explore this route, then you should make a subclass `TagDict(dict)`, and have `funcparse` return that subclass, *and also add a `representer` for that subclass that re-creates the tag from the key and then dumps the value*. Once that works (round-trip equals input), you can do:
```
"RouteTableId" : TagDict('Ref', 'aaa')
```
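A minimal sketch of that `TagDict` route, assuming ruamel.yaml's PyYAML-compatible legacy API (it covers scalar values only, and the tag reversal simply mirrors what `funcparse` does on loading):

```
import ruamel.yaml

class TagDict(dict):
    # holds exactly one entry: {tag_name: value}, e.g. TagDict('Ref', 'aaa')
    def __init__(self, tag, value):
        dict.__init__(self, [(tag, value)])

def tagdict_representer(representer, data):
    # invert funcparse: {'Ref': 'aaa'} -> !Ref aaa, {'Fn::Split': x} -> !Split x
    (tag, value), = data.items()
    return representer.represent_scalar('!' + tag.replace('Fn::', ''), value)

ruamel.yaml.SafeDumper.add_representer(TagDict, tagdict_representer)
```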
If you do that, you should, apart from removing unused libraries, also change your code to close the file pointer `txt`, as leaving it open can lead to problems. You can do this elegantly by using the `with` statement:
```
with open("/space/tmp/a.template","r") as txt:
base = ruamel.yaml.safe_load(txt)
```
(I also would leave out the `"r"` (or put a space before it); and replace `txt` with a more appropriate variable name that indicates this is an (input) file pointer).
You also have the entry `'Split'` twice in your `funcnames`, which is superfluous.
---
A more generic solution can be achieved by using a `multi-constructor` that matches any tag and having three basic types to cover scalars, mappings and sequences.
```
import sys
import ruamel.yaml
yaml_str = """\
foo:
scalar: !Ref barr
mapping: !Select
a: !Ref 1
b: !Base64 A413
sequence: !Split
- !Ref baz
- !Split Multi word scalar
"""
class Generic:
def __init__(self, tag, value, style=None):
self._value = value
self._tag = tag
self._style = style
class GenericScalar(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_scalar(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_scalar(node)
class GenericMapping(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_mapping(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_mapping(node, deep=True)
class GenericSequence(Generic):
@classmethod
def to_yaml(self, representer, node):
return representer.represent_sequence(node._tag, node._value)
@staticmethod
def construct(constructor, node):
return constructor.construct_sequence(node, deep=True)
def default_constructor(constructor, tag_suffix, node):
generic = {
ruamel.yaml.ScalarNode: GenericScalar,
ruamel.yaml.MappingNode: GenericMapping,
ruamel.yaml.SequenceNode: GenericSequence,
}.get(type(node))
if generic is None:
raise NotImplementedError('Node: ' + str(type(node)))
style = getattr(node, 'style', None)
instance = generic.__new__(generic)
yield instance
state = generic.construct(constructor, node)
instance.__init__(tag_suffix, state, style=style)
ruamel.yaml.add_multi_constructor('', default_constructor, Loader=ruamel.yaml.SafeLoader)
yaml = ruamel.yaml.YAML(typ='safe', pure=True)
yaml.default_flow_style = False
yaml.register_class(GenericScalar)
yaml.register_class(GenericMapping)
yaml.register_class(GenericSequence)
base = yaml.load(yaml_str)
base['bar'] = {
'name': 'abc',
'Resources': {
'RouteTableId' : GenericScalar('!Ref', 'aaa'),
'VpcPeeringConnectionId' : GenericScalar('!Ref', 'bbb'),
'yourname': 'dfw',
's' : GenericSequence('!Split', ['a', GenericScalar('!Not', 'b'), 'c']),
}
}
yaml.dump(base, sys.stdout)
```
which outputs:
```
bar:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
s: !Split
- a
- !Not b
- c
yourname: dfw
name: abc
foo:
mapping: !Select
a: !Ref 1
b: !Base64 A413
scalar: !Ref barr
sequence: !Split
- !Ref baz
- !Split Multi word scalar
```
Please note that sequences and mappings are handled correctly and that they can be created as well. There is however no check that:
* the tag you provide is actually valid
* the value associated with the tag is of the proper type for that tag name (scalar, mapping, sequence)
* if you want `GenericMapping` to behave more like `dict`, then you probably want it a subclass of `dict` (and not of `Generic`) and provide the appropriate `__init__` (idem for `GenericSequence`/`list`)
When the assignment is changed to something more close to yours:
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : GenericScalar('!Ref', 'aaa'),
"VpcPeeringConnectionId" : GenericScalar('!Ref', 'bbb'),
"yourname": "dfw"
}
}
```
the output is:
```
foo:
Resources:
RouteTableId: !Ref aaa
VpcPeeringConnectionId: !Ref bbb
yourname: dfw
name: abc
```
which is exactly the output you want.
|
Apart from Anthon's detailed answer above, for the specific question in terms of CloudFormation templates, I found another very quick and sweet workaround, still using the constructor snippet to load the YAML:
```
def funcparse(loader, node):
node.value = {
ruamel.yaml.ScalarNode: loader.construct_scalar,
ruamel.yaml.SequenceNode: loader.construct_sequence,
ruamel.yaml.MappingNode: loader.construct_mapping,
}[type(node)](node)
node.tag = node.tag.replace(u'!Ref', 'Ref').replace(u'!', u'Fn::')
return dict([ (node.tag, node.value) ])
funcnames = [ 'Ref', 'Base64', 'FindInMap', 'GetAtt', 'GetAZs', 'ImportValue',
'Join', 'Select', 'Split', 'Split', 'Sub', 'And', 'Equals', 'If',
'Not', 'Or' ]
for func in funcnames:
ruamel.yaml.SafeLoader.add_constructor(u'!' + func, funcparse)
```
When we manipulate the data, instead of doing
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : "!Ref aaa",
"VpcPeeringConnectionId" : "!Ref bbb",
"yourname": "dfw"
}
}
```
which will wrap the value `!Ref aaa` with quotes, we can simply do:
```
base["foo"] = {
"name": "abc",
"Resources": {
"RouteTableId" : {
"Ref" : "aaa"
},
"VpcPeeringConnectionId" : {
"Ref" : "bbb
},
"yourname": "dfw"
}
}
```
Similarly, for other functions in CloudFormation, such as !GetAtt, we should use their long form `Fn::GetAtt` and use them as the key of a JSON object. Problem solved easily.
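For example, a hypothetical resource reference written in the long form (the resource and attribute names here are made up, just to illustrate):

```
base["foo"]["Resources"]["PrivateIp"] = {
    "Fn::GetAtt": ["MyInstance", "PrivateIp"]  # same as !GetAtt MyInstance.PrivateIp
}
```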
| 8,522
|
62,585,490
|
TF 2.3.0.dev20200620
I got this error during .fit(...) for a model with a sigmoid binary output. I used tf.data.Dataset as the input pipeline.
The strange thing is it depends on the metric:
Doesn't work:
```
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=1e-4, decay=1e-6),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=['accuracy']
)
```
work:
```
model.compile(
optimizer=tf.keras.optimizers.Adam(lr=1e-4, decay=1e-6),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.BinaryAccuracy()]
)
```
But as I understood, 'accuracy' should be fine. In fact, instead of using my own tf.data.Dataset custom setup (can be provided if needed), using tf.keras.preprocessing.image\_dataset\_from\_directory give no such error. This is the case from tutorial <https://keras.io/examples/vision/image_classification_from_scratch>.
The trace is pasted below. Notice this is different from two other, older questions; it somehow involves the metrics.
ValueError: in user code:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2526 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2886 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:759 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:388 update_state
self.build(y_pred, y_true)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:319 build
self._metrics, y_true, y_pred)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1139 map_structure_up_to
**kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1235 map_structure_with_tuple_paths_up_to
*flat_value_lists)]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1234 <listcomp>
results = [func(*args, **kwargs) for args in zip(flat_path_list,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:1137 <lambda>
lambda _, *values: func(*values), # Discards the path arg.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 _get_metric_objects
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:419 <listcomp>
return [self._get_metric_object(m, y_t, y_p) for m in metrics]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:440 _get_metric_object
y_t_rank = len(y_t.shape.as_list())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1190 as_list
raise ValueError("as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.
```
|
2020/06/25
|
[
"https://Stackoverflow.com/questions/62585490",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1762295/"
] |
Had exactly the same problem when using 'accuracy' metric.
I followed <https://github.com/tensorflow/tensorflow/issues/32912#issuecomment-550363802> example:
```
def _fixup_shape(images, labels, weights):
images.set_shape([None, None, None, 3])
labels.set_shape([None, 19]) # I have 19 classes
weights.set_shape([None])
return images, labels, weights
dataset = dataset.map(_fixup_shape)
```
which helped me solve the problem.
But, in my case, instead of using one map function, as kawingkelvin did above, to load and set\_shape inside, **I needed to use two map functions** because of some errors in the TF code.
The final solution for me was to use the following order:
`dataset.batch.map(get_data).map(fix_shape).prefetch`
NOTE: batch can be done both before and after map(get\_data) depending on how your get\_data function is created. Fix\_shape must be done after.
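A sketch of that ordering, reusing `_fixup_shape` from above (`get_data` stands in for your own loading/decoding function, and the file pattern is a placeholder):

```
import tensorflow as tf

AUTO = tf.data.experimental.AUTOTUNE
batch_size = 32

dataset = (tf.data.Dataset.list_files("images/*.jpg")
           .batch(batch_size)
           .map(get_data, num_parallel_calls=AUTO)  # load/decode each batch
           .map(_fixup_shape)                       # pin the static shapes
           .prefetch(AUTO))
```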
|
I am able to fix this in such a way as to keep the metrics 'accuracy' (rather than using BinaryAccuracy). However, I do not quite understand why this is needed for 'accuracy', but not needed for other closely related one (e.g. BinaryAccuracy).
2 things:
1. construct a ds such that the batch label has shape of (batch\_size, 1) but not (batch\_size,). Following the keras.io tutorial mentioned, it should have been ok with the latter. This change aims to get rid of the "unknown" in the TensorShape.
2. add this to the ds pipeline:
label.set\_shape([1])
```
def process_path(file_path):
label = get_label(file_path)
img = tf.io.read_file(file_path)
img = tf.image.decode_jpeg(img, channels=3)
label.set_shape([1])
return img, label
ds = ds.map(process_path, num_parallel_calls=AUTO).shuffle(1024).repeat().batch(batch_size).prefetch(buffer_size=AUTO)
```
This is the state before .batch(...) so a single sample should have 1 as the shape (and therefore, (batch\_size, 1) after batching.
After doing so, the error didn't happen, and I used the exact same metrics 'accuracy' as in
<https://keras.io/examples/vision/image_classification_from_scratch>
Hope this helps anyone who got hit. I have to admit I don't truly understand why it didn't work in the first place. It still seems like a TF bug to me.
| 8,523
|
49,989,188
|
I have function like this one:
```
def get_list_of_movies(table):
#some code here
print(a_list)
return a_list
```
The reason I want to use both print and return is that I'm using this function in many places. After calling this function from the menu I want to get a printed list of content.
I'm using this same function in another function - just to get the list. The problem is, when I call this function it prints the list as well.
Question: how do I prevent the function from executing the print line when it's used in another function just to get the list?
This is part of an exercise so I can't define more functions or split this up - I'm kind of limited to this one function.
*Edit: Thank you for all the answers! I'm just a beginner, but you showed me ways in Python (and programming in general) that I never thought of! Using a second (boolean) parameter is very clever. I'm learning a lot here!*
|
2018/04/23
|
[
"https://Stackoverflow.com/questions/49989188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8773813/"
] |
Add a separate argument with a default value of `False` to control the printing:
```
def get_list_of_movies(table, printIt=False):
...
if printIt:
print(a_list)
return a_list
...
movies = get_list_of_movies(table, printIt=True)
```
Another approach is to pass `print` itself as the argument, where the default value is a no-op:
```
def get_list_of_movies(table, printer=lambda *args: None):
...
printer(a_list)
return a_list
...
movies = get_list_of_movies(table, printer=print)
```
This opens the door to being able to customize exactly how the result is printed; you are effectively adding an arbitrary callback to be performed on the return value, which admittedly can be handled with a custom pass-through function as well:
```
def print_it_first(x):
print(x)
return x
movies = print_it_first(get_list_of_movies(table))
```
This doesn't require any special treatment of `get_list_of_movies` itself, so is probably preferable from a design standpoint.
|
A completely different approach is to *always* print the list, but control where it gets printed *to*:
```
import os, sys

def get_list_of_movies(table, print_to=open(os.devnull, 'w')):
    ...
    print(a_list, file=print_to)
return a_list
movies = get_list_of_movies(table, print_to=sys.stdout)
```
The `print_to` argument can be any file-like object, with the default ensuring no output is written anywhere.
| 8,524
|
11,882,194
|
I have a django web application running on our **apache2** production server using **mod\_python**, but no static files are found (css,images ... )
All our static stuff is under `/var/my.site/example/static`
```
/var/my.site/example/static/
|-admin/
|-css/
|-img/
|-css/
|-js/
|-img/
```
Now I thought I just could alias all requests to my static stuff like so:
This is the apache2 conf:
```
<VirtualHost 123.123.123:443>
... SSL stuff ...
RewriteEngine On
ReWriteOptions Inherit
<Location "/example">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE example.settings
PythonPath "[ \
'/home/me/Envs/ex/lib/python2.6/site-packages',\
'/var/my.site',\
'/home/me/Envs/ex/lib/python2.6/site-packages/django',\
'/home/me/Envs/ex/lib/python2.6/site-packages/MySQLdb',\
'/var/my.site/example',\
'/var/my.site/example/static'] + sys.path"
PythonDebug Off
</Location>
Alias /example/static /var/my.site/example/static
<Directory /var/my.site/example/static>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
```
This is my settings.py
```
...
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
STATICFILES_DIRS = (
"/var/my.site/example/static",
)
...
```
There is no errors in the apache-error log. But here log from apache-secure\_access.log
```
[09/Aug/2012:12:37:55 +0200] "GET /example/admin/ HTTP/1.1" 200 6694
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css HTTP/1.1" 301 468
[09/Aug/2012:12:37:55 +0200] "GET /example/static/img/logo.png HTTP/1.1" 403 766
[09/Aug/2012:12:37:55 +0200] "GET /example/static/css/base.css/ HTTP/1.1" 500 756
[09/Aug/2012:12:37:55 +0200] "GET /example/static/admin/css/dashboard.css HTTP/1.1" 301 622
```
But this doesn't work and I'm not sure if I'm even on the right track. It does work when I set `DEBUG = True`, but that's just because Django serves all the static files itself.
**What am I doing wrong?**
**Does anyone know about a good tutorial or example?**
|
2012/08/09
|
[
"https://Stackoverflow.com/questions/11882194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/481406/"
] |
After @supervacuo's suggestion that I strip everything Django-related out, I got Apache to serve the static files and realized what was wrong.
The problem was that `<Location "/example">` got priority over `Alias /example/static`. It didn't matter where I put the `Alias` (above or below the `<Location>` tag).
To fix it I changed `STATIC_URL` and `STATIC_ROOT`; then I could change the `Alias` so it no longer interfered with the `<Location>` tag.
**From:**
```
STATIC_ROOT = '/var/my.site'
STATIC_URL = '/example/static/'
Alias /example/static /var/my.site/example/static
```
**To:**
```
STATIC_ROOT = '/var/my.site/example'
STATIC_URL = '/static/'
Alias /static /var/my.site/example/static
```
|
Try to eliminate the problem step-by-step.
Loading static files should work completely independently of Django. Try commenting out all lines relating to Django in your `VirtualHost` config. (Remember to reload Apache after changing the configuration)
If that works, it may be that you need to take more steps to avoid Django trampling over URLs in the same namespace (perhaps using `SetHandler`?).
If not, there's a more basic problem with your static files. If you can't resolve it, perhaps [ServerFault](https://serverfault.com/) can help?
| 8,525
|
61,966,894
|
So I was using flask_login for my login system on my Mac, and I seem to have run into a problem. When I ran the code, it said I had not set my secret key even though I had done so.
My code was:
```py
from flask import Flask, render_template, request, session, redirect, url_for, jsonify
from flask_session import Session
from flask_login import LoginManager, login_user,logout_user, login_required, current_user
from models.model import *
from sqlalchemy import or_, and_
app = Flask(__name__)
app.secret_key = "<Some secret key>"
app.config["SQLALCHEMY_DATABASE_URI"] = '<Some Uri>'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
db.init_app(app)
Session(app)
login_manager = LoginManager()
login_manager.login_view = 'index'
login_manager.init_app(app)
@app.route("/")
def index():
if current_user.is_authenticated:
return render_template("home.html")
return render_template("login.html", tip="You have to log in first.")
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
@app.route("/login", methods=['POST'])
def verif():
""" Gets name and password from form """
username = request.form.get("username")
password = request.form.get("password")
""" Checks user and password. """
userpassCheck = User.query.filter(and_(User.username == username, User.password == password)).first()
if not userpassCheck:
return render_template("index.html", tip="Incorrect Username or Password.")
login_user(userpassCheck)
return redirect(url_for('index'))
if __name__ == '__main__':
app.run(debug=True, use_reloader=True)
```
The whole error traceback was:
```
Traceback (most recent call last):
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/scythia/Desktop/Project1/application.py", line 54, in verif
login_user(userpassCheck)
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask_login/utils.py", line 170, in login_user
session['_user_id'] = user_id
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/werkzeug/local.py", line 350, in __setitem__
self._get_current_object()[key] = value
File "/Users/scythia/Desktop/Project1/venv/lib/python3.7/site-packages/flask/sessions.py", line 103, in _fail
"The session is unavailable because no secret "
RuntimeError: The session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret.
```
My expectation was for the code to run without errors. I haven't included the two HTML files because I think they are not related. I can add them if you think they are.
Using:
Python 3.7.7;
Flask 1.1.2;
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61966894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13600242/"
] |
As @Harmandeep Kalsi said in the comments, I added `app.config['SESSION_TYPE']` and it worked.
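For reference, a minimal sketch of that fix ('filesystem' is just one of the backends Flask-Session supports; pick whichever fits your setup):

```
app.config['SESSION_TYPE'] = 'filesystem'
Session(app)
```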
|
For anyone else that is still getting an error after the other answers: if you're using the format `app.config['<string>']`, make sure that you include the underscore between "SECRET" and "KEY".
That ended up being my issue, but it results in the same error code so searching for it lead me here. So, I thought I'd provide my solution to anyone that comes across this in the future.
So what worked for me was switching:
```
app.config['SECRET KEY']
```
with:
```
app.config['SECRET_KEY']
```
| 8,528
|
11,170,414
|
I just upgraded from SnowLeapord to Lion and now cannot create virtualenvs. I understand that there are new Python installations after the upgrade and no site packages and have tried installing pip and virtualenv again as well as upgrading to Xcode4 but I always get this error:
```
~ > virtualenv --distribute env
New python executable in env/bin/python
Installing distribute........
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute:
Traceback (most recent call last):
File "<string>", line 23, in <module>
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in <module>
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: 'System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing distribute...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in <module>
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1049, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 603, in install_distribute
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra... main(sys.argv[1:])
" --always-copy -U distribute failed with error code 1
```
I am a bit of a unix/python novice and just cannot work out how to get this working. Any ideas? Without using the --distribute tag I get this error:
```
~ > virtualenv env
New python executable in env/bin/python
Installing setuptools.............
Complete output from command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg:
Traceback (most recent call last):
File "", line 279, in
File "", line 207, in main
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/__init__.py", line 2, in
from setuptools.extension import Extension, Library
File "/Library/Python/2.7/site-packages/distribute-0.6.27-py2.7.egg/setuptools/extension.py", line 2, in
import distutils.core
File "/Users/jaderberg/env/lib/python2.7/distutils/__init__.py", line 16, in
exec(open(os.path.join(distutils_path, '__init__.py')).read())
IOError: [Errno 2] No such file or directory: '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/__init__.py'
----------------------------------------
...Installing setuptools...done.
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 9, in
load_entry_point('virtualenv==1.7.2', 'console_scripts', 'virtualenv')()
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 942, in main
never_download=options.never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1052, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 598, in install_setuptools
search_dirs=search_dirs, never_download=never_download)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 570, in _install_req
cwd=cwd)
File "/Library/Python/2.7/site-packages/virtualenv-1.7.2-py2.7.egg/virtualenv.py", line 1020, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /Users/jaderberg/env/bin/python -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" /Library/Python/2.7/...ols-0.6c11-py2.7.egg failed with error code 1
```
|
2012/06/23
|
[
"https://Stackoverflow.com/questions/11170414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/883845/"
] |
Turns out that although I upgraded Xcode to version 4, it does not automatically install the command line tools. I followed this <http://blog.cingusoft.org/mac-osx-lion-virtualenv-and-could-not-call-in>.
Basically, install Xcode, go into Preferences and then Downloads and install "Command Line Tools". It works now.
The Command Line Tools are also available directly from <https://developer.apple.com/downloads/index.action#>
|
I also had to upgrade my setuptools.
`pip install setuptools --upgrade`
| 8,529
|
57,151,931
|
I've created a python script together with selenium to parse a specific content from a webpage. I can get this result `AARONS INC` located under `QUOTE` in many different ways, but the way I wish to scrape it is by using a ***`pseudo selector`***, which unfortunately selenium doesn't support. The commented-out line within the script below shows that selenium doesn't support a `pseudo selector`.
However, when I use a `pseudo selector` within `driver.execute_script()`, I can parse it flawlessly. To make this work I had to use a hardcoded delay for the element to be available. Now, I wish to do the same by wrapping this `driver.execute_script()` within an `Explicit Wait` condition.
```
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 20)
driver.get("https://www.nyse.com/quote/XNYS:AAN")
time.sleep(15)
# item = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span:contains('AARONS')")))
item = driver.execute_script('''return $('span:contains("AARONS")')[0];''')
print(item.text)
```
***How can I wrap `driver.execute_script()` within Explicit Wait condition?***
|
2019/07/22
|
[
"https://Stackoverflow.com/questions/57151931",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7180194/"
] |
This is one of the ways you can achieve that. Give it a shot.
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
with webdriver.Chrome() as driver:
wait = WebDriverWait(driver, 10)
driver.get('https://www.nyse.com/quote/XNYS:AAN')
item = wait.until(
lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];''')
)
print(item.text)
```
|
Here is the simple approach.
```
url = 'https://www.nyse.com/quote/XNYS:AAN'
driver.get(url)
# wait for the elment to be presented
ele = WebDriverWait(driver, 30).until(lambda driver: driver.execute_script('''return $('span:contains("AARONS")')[0];'''))
# print the text of the element
print (ele.text)
```
| 8,531
|
15,669,924
|
I'm trying to get tumblr "liked" posts for a user at the <http://api.tumblr.com/v2/user/likes> url. I have registered my app with tumblr and authorized the app to access the user's tumblr data, so I have `oauth_consumer_key`,
`oauth_consumer_secret`, `oauth_token`, and `oauth_token_secret`. However, I'm not sure what to do with these details when I make the API call. I'm trying to create a command-line script that will just output JSON for further processing, so a solution in bash (cURL), Perl, or Python would be ideal.
|
2013/03/27
|
[
"https://Stackoverflow.com/questions/15669924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1132385/"
] |
Well if you don't mind using Python I can recommend [rauth](https://github.com/litl/rauth). There isn't a Tumblr example, but there are [real world, working examples](https://github.com/litl/rauth/tree/master/examples) for both OAuth 1.0/a and OAuth 2.0. The API is intended to be simple and straight forward. I'm not sure what other requirements you might have, but maybe worth giving it a shot?
Here's a working example to go by if you're interested:
```
from rauth import OAuth1Service
import re
import webbrowser
# Get a real consumer key & secret from http://www.tumblr.com/oauth/apps
tumblr = OAuth1Service(
consumer_key='gKRR414Bc2teq0ukznfGVUmb41EN3o0Nu6jctJ3dYx16jiiCsb',
consumer_secret='DcKJMlhbCHM8iBDmHudA9uzyJWIFaSTbDFd7rOoDXjSIKgMYcE',
name='tumblr',
request_token_url='http://www.tumblr.com/oauth/request_token',
access_token_url='http://www.tumblr.com/oauth/access_token',
authorize_url='http://www.tumblr.com/oauth/authorize',
base_url='https://api.tumblr.com/v2/')
request_token, request_token_secret = tumblr.get_request_token()
authorize_url = tumblr.get_authorize_url(request_token)
print 'Visit this URL in your browser: ' + authorize_url
webbrowser.open(authorize_url)
authed_url = raw_input('Copy URL from your browser\'s address bar: ')
verifier = re.search('oauth_verifier=([^#]*)', authed_url).group(1)
session = tumblr.get_auth_session(request_token,
request_token_secret,
method='POST',
data={'oauth_verifier': verifier})
user = session.get('user/info').json()['response']['user']
print 'Currently logged in as: {name}'.format(name=user['name'])
```
Full disclosure, I maintain rauth.
|
I sort of found an answer. I ended up using OAuth::Consumer in Perl to connect to the Tumblr API. It's the simplest solution I've found so far and it just works.
| 8,533
|
64,063,248
|
My Python version: `Python 3.8.3`
`python -m pip install IPython` gives me `Successfully installed IPython-7.18.1`
I still get the following error:
```
from IPython.display import Image
/usr/bin/python3 "/home/sanyifeju/Desktop/python/ML/decision_trees.py"
Traceback (most recent call last):
File "/home/sanyifeju/Desktop/python/ML/decision_trees.py", line 4, in <module>
from IPython.display import Image
ModuleNotFoundError: No module named 'IPython'
```
What am I missing?
I am on Ubuntu 20.04.1, not sure if that makes any difference.
If I run `python -m pip install ipython`,
I get "Requirement already satisfied":
```
Requirement already satisfied: ipython in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (7.18.1)
Requirement already satisfied: setuptools>=18.5 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (49.6.0.post20200814)
Requirement already satisfied: decorator in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.4.2)
Requirement already satisfied: pexpect>4.3; sys_platform != "win32" in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.8.0)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (3.0.7)
Requirement already satisfied: jedi>=0.10 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.17.1)
Requirement already satisfied: pygments in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (2.7.1)
Requirement already satisfied: backcall in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.2.0)
Requirement already satisfied: pickleshare in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (0.7.5)
Requirement already satisfied: traitlets>=4.2 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from ipython) (4.3.3)
Requirement already satisfied: ptyprocess>=0.5 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from pexpect>4.3; sys_platform != "win32"->ipython) (0.6.0)
Requirement already satisfied: wcwidth in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython) (0.2.5)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from jedi>=0.10->ipython) (0.7.0)
Requirement already satisfied: ipython-genutils in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from traitlets>=4.2->ipython) (0.2.0)
Requirement already satisfied: six in /home/sanyifeju/anaconda3/lib/python3.8/site-packages (from traitlets>=4.2->ipython) (1.15.0)
```
|
2020/09/25
|
[
"https://Stackoverflow.com/questions/64063248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1107591/"
] |
I had the same issue, and the problem was that the `python` command was linked to the python2 version:
```
$ ls -l /usr/bin/python
lrwxrwxrwx 1 root root 7 Apr 15 2020 /usr/bin/python -> python2*
```
The following commands fixed it for me:
```
$ sudo rm /usr/bin/python
$ sudo ln -s python3 /usr/bin/python
```
|
Try installing:
```
python -m pip install ipython
```
| 8,534
|
13,218,362
|
I'm currently learning Python and tried to make a little game using the pygame library. I use Python 3.2.3 and pygame 1.9.2a on Windows XP. Everything works fine, except one thing: if I switch to another window while my game is running, it crashes and I get an error message in the console:
```
Fatal Python error: (pygame parachute) Segmentation Fault
```
This piece of code that I took out of my program seems to be causing the error; however, I can't see anything wrong with it:
```
import pygame
from pygame.locals import *
pygame.init()
fenetre = pygame.display.set_mode((800, 600))
go = 1
while go:
for event in pygame.event.get():
if event.type == QUIT:
go = 0
```
Thanks for your help !
|
2012/11/04
|
[
"https://Stackoverflow.com/questions/13218362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1717248/"
] |
I know this thread is old, but I was getting the same error "Fatal Python error: (pygame parachute) Segmentation Fault" on Linux when I resized a pygame window continuously for several seconds. Just in case this helps anyone else: it turned out to be caused by blitting to the window surface in one thread while resizing it in another thread by calling pygame.display.set_mode(screen_size, 0). I fixed it by acquiring a lock before drawing to or resizing the window.
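A rough sketch of that locking scheme (the function names are hypothetical; the point is only that `set_mode` and blitting never run concurrently):

```
import threading
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600), pygame.RESIZABLE)
surface_lock = threading.Lock()

def draw(image):                       # called from the drawing thread
    with surface_lock:                 # don't blit while another thread resizes
        screen.blit(image, (0, 0))
        pygame.display.flip()

def resize(new_size):                  # called from the resizing thread
    global screen
    with surface_lock:                 # don't resize while another thread draws
        screen = pygame.display.set_mode(new_size, pygame.RESIZABLE)
```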
|
I don't know if you have anything after the last line that you're not putting in, but if you don't, you should replace your last line with
```
pygame.quit()
sys.exit()
```
As an alternative, you could put those two lines outside of the `while` loop and keep what you have. Don't forget to `import sys`.
| 8,535
|
68,532,863
|
I currently have a multiple regression that generates an OLS summary based on life expectancy and the variables that impact it; however, that does not include RMSE or standard deviation. Does statsmodels have an RMSE function, and is there a way to calculate standard deviation from my code?
I have found a previous example of this problem: [regression model statsmodel python](https://stackoverflow.com/questions/52562664/regression-model-statsmodel-python), and I read the statsmodels info page: <https://www.statsmodels.org/stable/generated/statsmodels.tools.eval_measures.rmse.html>, but after testing I am still not able to get this problem resolved.
```
import pandas as pd
import openpyxl
import statsmodels.formula.api as smf

df = pd.read_excel('C:/Users/File1.xlsx', sheet_name='States')
dfME = df[df['State'] == "Maine"]
pd.set_option('display.max_columns', None)
dfME.head()

# Q() quotes the column name containing a space
model = smf.ols('Q("Life Expectancy") ~ Race + Age + Weight + C(Pets)', data=dfME)
modelfit = model.fit()
modelfit.summary()
```
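For what it's worth, a hedged sketch of how both numbers could be pulled from the fitted model above (`eval_measures.rmse` is the helper from the linked statsmodels docs page; `modelfit` is assumed to be the fit result from the snippet):

```
from statsmodels.tools.eval_measures import rmse

# RMSE between the observed values and the model's fitted values
print(rmse(dfME['Life Expectancy'], modelfit.fittedvalues))

# standard deviation of the residuals
print(modelfit.resid.std())
```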
|
2021/07/26
|
[
"https://Stackoverflow.com/questions/68532863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12374203/"
] |
The canonical dplyr-way would be to write a custom predicate function that returns `TRUE` or `FALSE` for each column depending on whether the conditions are matched and use this function inside `across(where(predicate_function), ...)`.
Below I borrow the example data from @Tob and add some variations (one column is `0`, `1` but double, one column contains `NA`s, one column is a numeric column which contains other values).
```r
library(dplyr)
test_data <- tibble(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, NA),
col_3 = as.double(c(0, 1, 1, 0, 1)),
col_4 = c(0L, 1L, 1L, 0L, 1L),
col_5 = 1:5)
# let's have a look at the data and the column types
test_data
#> # A tibble: 5 x 5
#> strings col_2 col_3 col_4 col_5
#> <chr> <dbl> <dbl> <int> <int>
#> 1 a 1 0 0 1
#> 2 b 0 1 1 2
#> 3 c 0 1 1 3
#> 4 d 0 0 0 4
#> 5 e NA 1 1 5
# predicate function
is_01_col <- function(x) {
all(unique(x) %in% c(0, 1, NA))
}
test_data %>%
mutate(across(where(is_01_col), as.factor)) %>%
glimpse
#> Rows: 5
#> Columns: 5
#> $ strings <chr> "a", "b", "c", "d", "e"
#> $ col_2 <fct> 1, 0, 0, 0, NA
#> $ col_3 <fct> 0, 1, 1, 0, 1
#> $ col_4 <fct> 0, 1, 1, 0, 1
#> $ col_5 <int> 1, 2, 3, 4, 5
```
Created on 2021-07-26 by the [reprex package](https://reprex.tidyverse.org) (v0.3.0)
|
This is what I might do, but I don't know how fast it will be if your data is large:
```
# Create some data
test_data <- data.frame(strings = c("a", "b", "c", "d", "e"),
col_2 = c(1, 0, 0, 0, 1),
col_3 = c( 0,1, 1, 0, 1))
# Find columns that are only 0s and 1s
cols_to_convert <- names(test_data)[lapply(test_data, function(x) identical(sort(unique(x)), c(0,1))) == TRUE]
# Convert these columns to factors
new_data <- test_data %>% mutate(across(all_of(cols_to_convert), ~ as.factor(.x)))
# Check that the columns are factors
lapply(new_data, class)
```
| 8,536
|
20,338,360
|
I am looking for a production database to use with python/django for web development. I've installed MySQL successfully. I believe the python connector is not working and I don't know how to make it work. Please point me in the right direction. Thanks.
If I try importing `MySQLdb`:
```
import MySQLdb
```
I get the following exception.
```
Traceback (most recent call last):
File "/Users/vantran/tutorial/scrape_yf/mysql.py", line 3, in <module>
import MySQLdb
ImportError: No module named MySQLdb
```
I've tried using MySQL but I am struggling with getting the connector package to install or work properly. <http://dev.mysql.com/downloads/file.php?id=414340>
I've also tried to look at the other SO questions regarding installing MySQL python connectors, but they all seem to be unnecessarily complicated.
I've also tried
1. <http://www.tutorialspoint.com/python/python_database_access.htm>
2. <http://zetcode.com/db/mysqlpython/>
3. <https://github.com/PyMySQL/PyMySQL>
...but nothing seems to work.
|
2013/12/02
|
[
"https://Stackoverflow.com/questions/20338360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2424253/"
] |
If your problem is with the `MySQLdb` module, not the MySQL server itself, you might want to consider [`PyMySQL`](https://github.com/PyMySQL/PyMySQL) instead. It's much simpler to set up. Of course it's also somewhat different.
The key difference is that it's a pure Python implementation of the MySQL protocol, not a wrapper around `libmysql`. So it has minimal requirements, but in a few use cases it may not be as performant. Also, since they're completely different libraries, there are a few rare things that one supports but not the other, and various things that they support differently. (For example, `MySQLdb` handles all MySQL warnings as Python warnings; `PyMySQL` handles them as information for you to process.)
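For example, a minimal sketch (the connection parameters are placeholders; `install_as_MySQLdb()` lets code that does `import MySQLdb` keep working unchanged):

```
import pymysql
pymysql.install_as_MySQLdb()

import MySQLdb  # now backed by PyMySQL
conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='test')
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
```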
|
I would recommend postgres.app : <http://postgresapp.com>
Tried and never left
My preference for the driver is <http://initd.org/psycopg/>
You'll find a list of drivers at <http://wiki.postgresql.org/wiki/Python>
| 8,539
|
3,947,878
|
I have a appengine webapp where i need to set HTTP Location variable to redirect to another page in the same webapp. For that i need to produce a absolute link. for portability reason i cannot use directly the domain name which i am currently using.
Is it possible to produce the domain name on which the webapp is hosted in the python code.
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3947878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/329292/"
] |
I don't think I fully understand the need to get the domain name, but check whether the redirect API provided by Google App Engine will do the job:
<http://code.google.com/appengine/docs/python/tools/webapp/redirects.html>
|
This question is poorly answered. You need the domain name to generate pages like robots.txt, sitemap.xml and many other things which are not relative links. I've tried using this:
```
from google.appengine.api.app_identity import get_default_version_hostname
host = get_default_version_hostname()
```
but it is not good when you upgrade to your own domain, because I still get the appspot.com name.
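One workaround, sketched here for a webapp2 handler (the handler class and target path are hypothetical), is to read the current request's Host header instead, since it reflects whatever domain the client actually used, including a custom one:

```
import webapp2

class AbsoluteRedirect(webapp2.RequestHandler):
    def get(self):
        # self.request.host carries the domain of the current request
        target = '%s://%s/other-page' % (self.request.scheme, self.request.host)
        self.redirect(target)
```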
| 8,540
|
71,288,828
|
I am trying to extract book names from the O'Reilly Media website using Python and Beautiful Soup.
However, I see that the book names are not in the page-source HTML.
I am using this link to see the books:
[https://www.oreilly.com/search/?query=\*&extended\_publisher\_data=true&highlight=true&include\_assessments=false&include\_case\_studies=true&include\_courses=true&include\_playlists=true&include\_collections=true&include\_notebooks=true&include\_sandboxes=true&include\_scenarios=true&is\_academic\_institution\_account=false&source=user&formats=book&formats=article&formats=journal&sort=date\_added&facet\_json=true&json\_facets=true&page=0&include\_facets=true&include\_practice\_exams=true](https://www.oreilly.com/search/?query=*&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&include_sandboxes=true&include_scenarios=true&is_academic_institution_account=false&source=user&formats=book&formats=article&formats=journal&sort=date_added&facet_json=true&json_facets=true&page=0&include_facets=true&include_practice_exams=true)
Attached is a screenshot that shows the webpage with the first two books alongside with chrome developer tool with arrows pointing to the elements i'd like to extract.
[](https://i.stack.imgur.com/3A2vS.png)
I looked at the page source but could not find the book names - maybe they are hidden inside some other links inside the main HTML.
I tried to open some of the links inside the HTML and searched for the book names but could not find anything.
Is it possible to extract the first or second book name from the website using Beautiful Soup?
If not, is there any other Python package that can do that? Maybe Selenium?
Or, as a last resort, any other tool...
|
2022/02/27
|
[
"https://Stackoverflow.com/questions/71288828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7895331/"
] |
If you investigate the network tab when the page loads, you can see that the page sends a request to an API:
[](https://i.stack.imgur.com/kKfDY.png)
It returns JSON with the books.
After some investigation, you can get your titles via:
```
import json
import requests
response_json = json.loads(requests.get(
"https://www.oreilly.com/api/v2/search/?query=*&extended_publisher_data=true&highlight=true&include_assessments=false&include_case_studies=true&include_courses=true&include_playlists=true&include_collections=true&include_notebooks=true&include_sandboxes=true&include_scenarios=true&is_academic_institution_account=false&source=user&formats=book&formats=article&formats=journal&sort=date_added&facet_json=true&json_facets=true&page=0&include_facets=true&include_practice_exams=true&orm-service=search-frontend").text)
for book in response_json['results']:
print(book['highlights']['title'][0])
```
|
To solve this issue you need to know that Beautiful Soup can only deal with websites that serve plain HTML. For websites that render their pages with JavaScript, Beautiful Soup can't get all the page data you are looking for, because you need something like a browser to load the JavaScript data on the website.
Here you need to use Selenium, because it opens a browser page and loads all the data on the page; you can combine both like this:
```
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import lxml
# This will make Selenium run in the background (no visible browser window)
chrome_options = Options()
chrome_options.add_argument("--headless")

# You need to install the driver and point to its location
driver = webdriver.Chrome('#Dir of the driver', options=chrome_options)
driver.get('#url')
html = driver.page_source
soup = BeautifulSoup(html, 'lxml')
```
With this you can get all the data you need. Don't forget to call this at the end to quit the Selenium browser running in the background:
```
driver.quit()
```
| 8,545
|
63,424,301
|
I am trying to refresh Power BI more frequently than the current gateway scheduled-refresh capability allows.
I found this:
<https://github.com/dubravcik/pbixrefresher-python>
Installed and verified I have all required packages installed to run.
Right now it works fine until the end: after it refreshes, the Save function seems to execute correctly, but the report does not save, and when it tries the Publish function a prompt is created asking if the user would like to save, and there is a timeout.
I have tried increasing the time-out argument and adding more wait time in the routine (along with a couple of other suggested ideas from the github issues thread).
Below is what cmd looks like along with the error. I also added the main routine of the pbixrefresher file in case there is a different way to save (hotkeys) or something else worth trying. I tried this both as my user and as admin in CMD, but wasn't sure if it's possible a permissions setting could block the report from saving. Thank you for reading; any help is greatly appreciated.
```
Starting Power BI
Waiting 15 sec
Identifying Power BI window
Refreshing
Waiting for refresh end (timeout in 100000 sec)
Saving
Publish
Traceback (most recent call last):
File "c:\python36\lib\site-packages\pywinauto\application.py", line 258, in __resolve_control
criteria)
File "c:\python36\lib\site-packages\pywinauto\timings.py", line 458, in wait_until_passes
raise err
pywinauto.timings.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\python36\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "c:\python36\lib\runpy.py", line 85, in run_code
exec(code, run_globals)
File "C:\Python36\Scripts\pbixrefresher.exe_main.py", line 9, in
File "c:\python36\lib\site-packages\pbixrefresher\pbixrefresher.py", line 77, in main
publish_dialog.child_window(title = WORKSPACE, found_index=0).click_input()
File "c:\python36\lib\site-packages\pywinauto\application.py", line 379, in getattribute
ctrls = self.__resolve_control(self.criteria)
File "c:\python36\lib\site-packages\pywinauto\application.py", line 261, in __resolve_control
raise e.original_exception
File "c:\python36\lib\site-packages\pywinauto\timings.py", line 436, in wait_until_passes
func_val = func(*args, **kwargs)
File "c:\python36\lib\site-packages\pywinauto\application.py", line 222, in __get_ctrl
ctrl = self.backend.generic_wrapper_class(findwindows.find_element(**ctrl_criteria))
File "c:\python36\lib\site-packages\pywinauto\findwindows.py", line 87, in find_element
raise ElementNotFoundError(kwargs)
pywinauto.findwindows.ElementNotFoundError: {'auto_id': 'KoPublishToGroupDialog', 'top_level_only': False, 'parent': <uia_element_info.UIAElementInfo - 'Simple - Power BI Desktop', WindowsForms10.Window.8.app.0.1bb715_r6_ad1, 8914246>, 'backend': 'uia'}
```
The main routine from pbixrefresher:
```
def main():
# Parse arguments from cmd
parser = argparse.ArgumentParser()
parser.add_argument("workbook", help = "Path to .pbix file")
parser.add_argument("--workspace", help = "name of online Power BI service work space to publish in", default = "My workspace")
parser.add_argument("--refresh-timeout", help = "refresh timeout", default = 30000, type = int)
parser.add_argument("--no-publish", dest='publish', help="don't publish, just save", default = True, action = 'store_false' )
parser.add_argument("--init-wait", help = "initial wait time on startup", default = 15, type = int)
args = parser.parse_args()
timings.after_clickinput_wait = 1
WORKBOOK = args.workbook
WORKSPACE = args.workspace
INIT_WAIT = args.init_wait
REFRESH_TIMEOUT = args.refresh_timeout
# Kill running PBI
PROCNAME = "PBIDesktop.exe"
for proc in psutil.process_iter():
# check whether the process name matches
if proc.name() == PROCNAME:
proc.kill()
time.sleep(3)
# Start PBI and open the workbook
print("Starting Power BI")
os.system('start "" "' + WORKBOOK + '"')
print("Waiting ",INIT_WAIT,"sec")
time.sleep(INIT_WAIT)
# Connect pywinauto
print("Identifying Power BI window")
app = Application(backend = 'uia').connect(path = PROCNAME)
win = app.window(title_re = '.*Power BI Desktop')
time.sleep(5)
win.wait("enabled", timeout = 300)
win.Save.wait("enabled", timeout = 300)
win.set_focus()
win.Home.click_input()
win.Save.wait("enabled", timeout = 300)
win.wait("enabled", timeout = 300)
# Refresh
print("Refreshing")
win.Refresh.click_input()
#wait_win_ready(win)
time.sleep(5)
print("Waiting for refresh end (timeout in ", REFRESH_TIMEOUT,"sec)")
win.wait("enabled", timeout = REFRESH_TIMEOUT)
# Save
print("Saving")
type_keys("%1", win)
#wait_win_ready(win)
time.sleep(5)
win.wait("enabled", timeout = REFRESH_TIMEOUT)
# Publish
if args.publish:
print("Publish")
win.Publish.click_input()
publish_dialog = win.child_window(auto_id = "KoPublishToGroupDialog")
publish_dialog.child_window(title = WORKSPACE).click_input()
publish_dialog.Select.click()
try:
win.Replace.wait('visible', timeout = 10)
except Exception:
pass
if win.Replace.exists():
win.Replace.click_input()
win["Got it"].wait('visible', timeout = REFRESH_TIMEOUT)
win["Got it"].click_input()
#Close
print("Exiting")
win.close()
# Force close
for proc in psutil.process_iter():
if proc.name() == PROCNAME:
proc.kill()
if __name__ == '__main__':
try:
main()
except Exception as e:
print(e)
sys.exit(1)
```
|
2020/08/15
|
[
"https://Stackoverflow.com/questions/63424301",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11854373/"
] |
If this is your command line:
```
g++ -std=c++17 -pthread -o http_test.out http_test.cpp -lssl -lcrypto && ./http_test.out
```
Aren't you missing "-O2"? It looks like you are building without optimizations, which will be considerably slower.
|
From what I know of the BitMEX engine, having a latency of around 10 ms for order execution is the best you can get, and it will be worse during high-volatility periods. Check <https://bonfida.com/latency-monitor> to get an idea of latencies. In the crypto world, latencies are far higher than in traditional HFT.
| 8,546
|
31,687,690
|
I just got a new MacBook Pro and installed Python 3.4.
I ran the terminal and typed
```
python3.4
```
I got:
```
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
I typed:
```
>>> print("Hello world")
Hello world
```
All good, but when I tried to do something a bit more complex I ran into trouble. I did:
```
>>>counter = 5
>>>
>>> while counter > 0:
... counter -= 1
... print()
... print("Hello World")
```
I get the error:
```
File "<stdin>", line 4
print("Hello World")
^
SyntaxError: invalid syntax
```
My guess is that the error is on the `print("Hello World")`, but I have no clue as to why; I don't need to indent it if I want it to run after the loop is finished. Any help will be appreciated.
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31687690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996405/"
] |
Notice the "..." prompt? That's telling you that the interactive interpreter knows you are in a block. You'll have to enter a blank line to terminate the block, before doing the final print statement.
This is an artifact of running interactively -- the blank line isn't required when you type your code into a file.
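For example, the same code entered with a blank line to end the loop body:

```
>>> counter = 5
>>> while counter > 0:
...     counter -= 1
...
>>> print("Hello World")
Hello World
```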
|
You have to use spaces for consistent indentation (and you can use ";" to separate two instructions on one line):
```
>>> counter = 5
>>> while counter > 0:
counter -= 1
print("Hello")
Hello
Hello
Hello
Hello
Hello
>>>
```
| 8,547
|
28,242,066
|
my code :
```
def isModuleBlink(modulename):
f = '/tmp/'+modulename + '.blink'
if(os.path.isfile(f)):
with open(f) as fii:
res = fii.read()
print 'res',res
print res is '1'
if(res is '1'):
print 'return true'
return True
return False
```
and print out :
```
res 1
False
```
Why python return false for condition?
When I test `print '1' is '1'` in python terminal terunt `true` but in this script return False?
|
2015/01/30
|
[
"https://Stackoverflow.com/questions/28242066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3585139/"
] |
`res` is `'1\n'` and not `'1'`; in the condition I replaced it with `'1' in res` and it works.
Thanks!
|
`is` tests if things are *identical*. You want to test if two strings are equal, not necessarily that they occupy the same memory address. So you want `==`.
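To see what is actually going on in the question (the trailing newline comes from reading the file), a short sketch:

```
res = '1\n'                  # what fii.read() typically returns
print(res == '1')            # False: the trailing newline makes the contents differ
print(res.strip() == '1')    # True once the newline is stripped
```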
| 8,553
|
38,249,606
|
Say I have a vector of values from a tokenizing function, `tokenize()`. I know it will only have two values. I want to store the first value in `a` and the second in `b`. In Python, I would do:
```python
a, b = string.split(' ')
```
I could do it as such in an ugly way:
```cpp
vector<string> tokens = tokenize(string);
string a = tokens[0];
string b = tokens[1];
```
But that requires two extra lines of code, an extra variable, and less readability.
How would I do such a thing in C++ in a clean and efficient way?
**EDIT:** I must emphasize that efficiency is very important. Too many answers don't satisfy this. This includes modifying [my tokenization function](https://gist.github.com/CrazyPython/be933ac13243f4db9c2a6155c63ae9b2).
**EDIT 2**: I am using C++11 for reasons outside of my control and I also cannot use Boost.
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38249606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1459669/"
] |
With structured bindings (definitely will be in C++17), you'd be able to write something like:
```
auto [a,b] = as_tuple<2>(tokenize(str));
```
where `as_tuple<N>` is some to-be-declared function that converts a `vector<string>` to a `tuple<string, string, ... N times ...>`, probably throwing if the sizes don't match. You can't destructure a `std::vector` since its size isn't known at compile time. This will necessarily do extra moves of the `string`, so you're losing some efficiency in order to gain some code clarity. Maybe that's ok.
Or maybe you write a `tokenize<N>` that returns a `tuple<string, string, ... N times ...>` directly, avoiding the extra move. In that case:
```
auto [a, b] = tokenize<2>(str);
```
is great.
---
Before C++17, what you have is what you can do. But just make your variables references:
```
std::vector<std::string> tokens = tokenize(str);
std::string& a = tokens[0];
std::string& b = tokens[1];
```
Yeah, it's a couple extra lines of code. That's not the end of the world. It's easy to understand.
|
Ideally you'd rewrite the `tokenize()` function so that it returns a pair of strings rather than a vector:
```
std::pair<std::string, std::string> tokenize(const std::string& str);
```
Or you would pass two references to empty strings to the function as parameters.
```
void tokenize(const std::string& str, std::string& result_1, std::string& result_2);
```
If you have no control over the tokenize function, the best you can do is move the strings out of the vector in an optimal way.
```
std::vector<std::string> tokens = tokenize(str);
std::string a = std::move(tokens.front());
std::string b = std::move(tokens.back());
```
| 8,554
|
8,055,132
|
I have the script below, which I'm using to send, say, 10 messages myself<->myself. However, I've noticed that Python really takes a while to do that. Last year I needed a system to send about 200 emails with attachments and text, and I implemented it with msmtp + bash. As far as I remember it was much faster.
Moving the loop inside (around the `smtp_serv.sendmail(sender, recepient, msg)` call) yields similar results.
Am I doing something wrong? Surely it can't be slower than bash + msmtp (and I'm only sending a 'hi' message, no attachments).
```
#! /usr/bin/python3.1
def sendmail(recepient, msg):
import smtplib
# Parameters
sender = 'login@gmail.com'
password = 'password'
smtpStr = 'smtp.gmail.com'
smtpPort = 587
# /Parameters
smtp_serv = smtplib.SMTP(smtpStr, smtpPort)
smtp_serv.ehlo_or_helo_if_needed()
smtp_serv.starttls()
smtp_serv.ehlo()
recepientExists = smtp_serv.verify(recepient)
if recepientExists[0] == 250:
smtp_serv.login(sender, password)
try:
smtp_serv.sendmail(sender, recepient, msg)
except smtplib.SMTPException:
print(recepientExists[1])
else:
print('Error', recepientExists[0], ':', recepientExists[1])
smtp_serv.quit()
for i in range(10):
sendmail('receiver@gmail.com', 'hi')
```
|
2011/11/08
|
[
"https://Stackoverflow.com/questions/8055132",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030287/"
] |
You are opening the connection to the SMTP server and then closing it for each email. It would be more efficient to keep the connection open while sending all of the emails.
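A minimal sketch of that idea, reusing the question's setup (`sender`, `password`, `recepient`, and `msg` as defined there):

```
import smtplib

smtp_serv = smtplib.SMTP('smtp.gmail.com', 587)
smtp_serv.ehlo_or_helo_if_needed()
smtp_serv.starttls()
smtp_serv.ehlo()
smtp_serv.login(sender, password)
for i in range(10):
    # reuse the single authenticated connection for every message
    smtp_serv.sendmail(sender, recepient, msg)
smtp_serv.quit()
```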
|
The real answer here is "profile that code!". Time how long different parts of the code take so you know where most of the time is spent. That way you'll have a real answer without guesswork.
Still, my guess would be that the calls to `smtp_serv.verify(recipient)` may be the slow ones. Reasons might be that the server sometimes needs to ask other SMTP servers for info, or that it throttles these operations to avoid having spammers use them massively to gather email addresses.
Also, try pinging the SMTP server. If the ping-pong takes significant time, I would expect sending each email would take at least that long.
| 8,557
|
34,162,320
|
I want to execute the bash command
```
'/bin/echo </verbosegc> >> /tmp/jruby.log'
```
in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.
```
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n
```
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has finished.
|
2015/12/08
|
[
"https://Stackoverflow.com/questions/34162320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/910118/"
] |
The first argument to `subprocess.Popen` is the array `['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']`. When the first argument to `subprocess.Popen` is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting `>> /tmp/jruby.log` to mean "write output to jruby.log".
In order to make the `>>` redirection work in this command, you'll need to pass `command` directly to `subprocess.Popen()` as a single string with `shell=True`, so that a shell interprets it. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
```
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
```
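Alternatively, a sketch that avoids the shell entirely by letting Python open the log file and handle the redirection itself:

```
import subprocess

# append echo's output to the log file without invoking a shell
with open('/tmp/jruby.log', 'a') as log:
    subprocess.call(['/bin/echo', '</verbosegc>'], stdout=log)
```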
|
Have you tried without splitting the command, and using `shell=True`? My usual format is:
```
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
```
| 8,562
|
46,668,481
|
I'm trying to use PyQt_Fit. I installed it with `pip install pyqt_fit`, but when I import it, it does not work and shows me this message:
```
----------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-8-36ec621967a7> in <module>()
----> 1 import pyqt_fit
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/__init__.py in <module>()
12 'functions', 'residuals', 'CurveFitting']
13
---> 14 from . import functions
15 from . import residuals
16 from .curve_fitting import CurveFitting
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/functions/__init__.py in <module>()
4
5 from ..utils import namedtuple
----> 6 from .. import loader
7 import os
8 from path import path
/home/yuri/anaconda2/lib/python2.7/site-packages/pyqt_fit/loader.py in <module>()
1 from __future__ import print_function, absolute_import
2 import inspect
----> 3 from path import path
4 import imp
5 import sys
ImportError: cannot import name path
```
I'm using Ubuntu 16.04.
How can I fix it?
|
2017/10/10
|
[
"https://Stackoverflow.com/questions/46668481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8752769/"
] |
I faced the same problem as you. I installed the pyqt_fit package successfully with
```
sudo pip install git+https://github.com/Multiplicom/pyqt-fit.git
```
This installs the latest version of path.py and pyqt_fit at the same time.
Then, when I imported the package, I got the following error:
```
import pyqt_fit
Traceback (most recent call last):
File "<ipython-input-253-36ec621967a7>", line 1, in <module>
import pyqt_fit
File "/Users/mengxinpan/anaconda3/lib/python3.6/site-packages/pyqt_fit/__init__.py", line 14, in <module>
from . import functions, residuals
File "/Users/mengxinpan/anaconda3/lib/python3.6/site-packages/pyqt_fit/residuals/__init__.py", line 7, in <module>
from path import path
ImportError: cannot import name 'path'
```
The error is caused by the fact that the `path.path` function has been renamed to `path.Path` in the latest version of the path.py package.
So my solution is to open all the files in the pyqt_fit folder, like 'site-packages/pyqt_fit/residuals/__init__.py', and change every
```
from path import path
```
to
```
from path import Path as path
```
Then I could import pyqt_fit successfully.
I tried to install the old version of path.py with
```
sudo pip install -I path.py==7.7.1
```
But it still did not work.
|
This seems to have been happening for quite some time. Check this recent issue report [on the repo](https://github.com/sergeyfarin/pyqt-fit/issues/5).
I've installed the package and tested it myself, and I got the same problem. I checked the solution provided on the possible duplicate and it seems to have fixed the problem.
You might not have pip3 installed, so try with:
```
sudo pip install -I path.py==7.7.1
```
Edit:
You can also try installing the package directly from [this forked repo](https://github.com/Multiplicom/pyqt-fit/pull/1) that seems to have fixed it:
```
sudo pip install git+https://github.com/Multiplicom/pyqt-fit.git
```
| 8,567
|
72,179,492
|
Recently, I updated to Ubuntu 22. I am using Python 3.10.
After installing matplotlib and the other required Python libraries, I am trying to plot some graphs.
Every time I run my code, I get this error.
I followed all the solutions given on Stack Overflow and Google, but no luck.
This is the error I am getting:
```
File ~/.local/lib/python3.10/site-packages/prettyplotlib/_eventplot.py:3, in <module>
1 __author__ = 'jgosmann'
----> 3 from matplotlib.cbook import iterable
5 from prettyplotlib.utils import remove_chartjunk, maybe_get_ax
6 from prettyplotlib.colors import set2
ImportError: cannot import name 'iterable' from 'matplotlib.cbook'
```
When I import matplotlib itself, there is no issue.
How can I get rid of this error?
Any help or suggestion would be appreciated.
Thank you
|
2022/05/10
|
[
"https://Stackoverflow.com/questions/72179492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16780162/"
] |
```
md$Children[which(x >= 8 & y < 60)] = NA
```
|
@AbdurRohman's answer is good.
You could use
```r
md <- within(md,
Children[Children >= 8 & Age <60] <- NA
)
```
for slightly clearer code. (You should definitely tell your instructor you got help on Stack Overflow.)
| 8,569
|
9,794,616
|
I'm using the following code which will generate a wav file which contains a tone at 440 Hz lasting for 2 seconds.
```
from scipy.io.wavfile import write
from numpy import linspace,sin,pi,int16
def note(freq, len, amp=1, rate=44100):
t = linspace(0,len,len*rate)
data = sin(2*pi*freq*t)*amp
return data.astype(int16) # two byte integers
tone = note(440,2,amp=10000)
write('440hzAtone.wav',44100,tone) # writing the sound to a file
```
I was wondering if I could modify the code, basing it off the note method, in order to actually generate a tune with Python.
I tried adding two different tones, and as expected the two tones play simultaneously, creating something which sounds a bit like a dial tone:
```
tone1 = note(440,2,amp=10000)
tone2 = note(480,2,amp=10000)
tone = tone1+tone2
write('440hzAtone.wav',44100,tone)
```
I also tried multiplying the two tones, but this just generates static.
I also tried generating tones of different lengths and adding them; however, this causes an exception to be raised, like so:
```
tone1 = note(440,2,amp=10000)
tone2 = note(480,1,amp=10000)
tone = tone1+tone2
write('440hzAtone.wav',44100,tone)
```
causes:
```
ValueError: operands could not be broadcast together with shapes (88200) (44100)
```
So, I was wondering - how can I concatenate different tones like this to make a tune?
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9794616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/840973/"
] |
You can do this using numpy.concatenate (as already posted). For 1-D arrays like these, the default axis (0) is the one you want. Using a very low rate to illustrate:
```
from scipy.io.wavfile import write
from numpy import linspace,sin,pi,int16,concatenate
def note(freq, len, amp=1, rate=5):
t = linspace(0,len,len*rate)
data = sin(2*pi*freq*t)*amp
return data.astype(int16) # two byte integers
tone1 = note(440,2,amp=10)
tone2 = note(140,2,amp=10)
print tone1
print tone2
print concatenate((tone2,tone1),axis=0)
#output:
[ 0 -9 -3 8 6 -6 -8 3 9 0]
[ 0 6 9 8 3 -3 -8 -9 -6 0]
[ 0 6 9 8 3 -3 -8 -9 -6 0 0 -9 -3 8 6 -6 -8 3 9 0]
```
|
`numpy.linspace` creates a numpy array. To concatenate the tones, you'd want to concatenate the corresponding arrays. For this, a bit of Googling indicates that Numpy provides the helpfully named [`numpy.concatenate` function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html).
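Applied to the question's `note()` and `write()`, a minimal sketch would be:

```
from numpy import concatenate

tone1 = note(440, 2, amp=10000)
tone2 = note(480, 1, amp=10000)
tune = concatenate((tone1, tone2))  # the tones now play one after the other
write('tune.wav', 44100, tune)
```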
| 8,570
|
5,151,898
|
I realized there is a memory leak in one Python script. It occupied around 25 MB at first, and after 15 days it is more than 500 MB.
I have followed many different approaches, but I am not able to get to the root of the problem, as I am a Python newbie...
Finally, I got this following
```
objgraph.show_most_common_types(limit=20)
tuple 37674
function 9156
dict 3935
list 1646
wrapper_descriptor 1468
weakref 888
builtin_function_or_method 874
classobj 684
method_descriptor 551
type 533
instance 483
Kind 470
getset_descriptor 404
ImmNodeSet 362
module 342
IdentitySetMulti 333
PartRow 331
member_descriptor 264
cell 185
FontEntry 170
```
I set a break point, and after every iteration this is what is happening...
```
objgraph.show_growth()
tuple 37674 +10
```
What is the best way to proceed?
```
(Pdb) c
(Pdb) objgraph.show_growth()
tuple 37684 +10
```
I guess printing out all the tuples and cross-checking which 10 tuples get added every time will give me some clue? Kindly let me know how to do that.
Or is there any other way to find this memory leak? I am using Python 2.4.3, and because of many other product dependencies I unfortunately cannot (and should not) upgrade.
|
2011/03/01
|
[
"https://Stackoverflow.com/questions/5151898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/379997/"
] |
Am I reading correctly that the same script is running for 15 days non-stop?
For such long-running processes a periodic restart is good practice, and it's much easier to do than eliminating all memory leaks.
*Update*: Look at [this answer](https://stackoverflow.com/questions/1641231/python-working-around-memory-leaks/1641280#1641280), it seems to do exactly what you need -- print all newly added objects that were not garbage collected.
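A rough sketch of that approach using only the `gc` module; here `run_one_iteration()` is a hypothetical stand-in for one pass of your loop, and since ids can be recycled, treat the output as a heuristic:

```
import gc

before = set(id(o) for o in gc.get_objects())  # snapshot of tracked objects
run_one_iteration()  # hypothetical: one pass of the suspect code
gc.collect()
new_objects = [o for o in gc.get_objects() if id(o) not in before]
for obj in new_objects:
    print type(obj), repr(obj)[:80]  # Python 2.4-style print
```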
|
My first thought is: probably you are creating new objects in your script and accumulating them in some sort of global list. It is usually easier to go over your script and make sure that you are not generating any persistent data than to debug the garbage. I think the utility you are using, objgraph, also allows you to print garbage objects with the number of references to them. You could try that.
| 8,571
|
10,928,313
|
Is anyone familiar with DRAKON?
I quite like the idea of the DRAKON visual editor and have been playing with it using Python -- more info: <http://drakon-editor.sourceforge.net/python/python.html>
The only thing I've had a problem with so far is Python's try/except exceptions. The way I've attempted it is to use branches and then define try: and except: as separate actions below the branch. The problem with this is that DRAKON doesn't pick up the try: and automatically indent the exception code afterwards.
Is there any way to handle try/except in a visual way in DRAKON, or perhaps you've heard of another similar visual editor project for Python?
Thanks.
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10928313",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/507286/"
] |
You could put the whole "try: except:" construct inside one "Action" icon like this:

Both spaces and tabs can be used for indentation inside an icon.
|
There are limitations in DRAKON since it is a code generator, but what you can do is refactor the code as much as possible and put it inside an action block:
```
try:
function_1()
function_2()
except:
function_3()
```
DRAKON works best if you follow the suggested rules (skewer, happy route, branching, etc.).
Once you construct an algorithm based on these, it can help you solve complex problems fast.
Hope that helps.
| 8,572
|
15,063,936
|
I have a script reading in a csv file with very large fields:
```
# example from http://docs.python.org/3.3/library/csv.html?highlight=csv%20dictreader#examples
import csv
with open('some.csv', newline='') as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
However, this throws the following error on some csv files:
```
_csv.Error: field larger than field limit (131072)
```
How can I analyze csv files with huge fields? Skipping the lines with huge fields is not an option as the data needs to be analyzed in subsequent steps.
|
2013/02/25
|
[
"https://Stackoverflow.com/questions/15063936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1251007/"
] |
This could be because your CSV file has embedded single or double quotes. If your CSV file is tab-delimited, try opening it as:
```
c = csv.reader(f, delimiter='\t', quoting=csv.QUOTE_NONE)
```
|
You can use the `error_bad_lines` option of `pd.read_csv` to skip these lines.
```py
import pandas as pd
data_df = pd.read_csv('data.csv', error_bad_lines=False)
```
This works since the "bad lines", as defined in pandas, include lines in which one of the fields exceeds the csv limit.
Be careful that this solution is valid only when the fields in your csv file *shouldn't* be this long.
If you expect to have big field sizes, this will throw away your data.
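If, on the other hand, you do need the long fields, raising the `csv` module's per-field limit is another option (a minimal sketch; the loop backs off because `sys.maxsize` can overflow the underlying C long on some platforms):

```py
import csv
import sys

limit = sys.maxsize
while True:
    try:
        csv.field_size_limit(limit)  # raise the parser's field size limit
        break
    except OverflowError:
        limit = int(limit / 10)      # back off until the value is accepted
```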
| 8,573
|
48,206,553
|
I am trying to make a view that I can use in multiple apps with different redirect URLs:
Parent function:
```
def create_order(request, redirect_url):
data = dict()
if request.method == 'POST':
form = OrderForm(request.POST)
if form.is_valid():
form.save()
return redirect(redirect_url)
else:
form = OrderForm()
data['form'] = form
return render(request, 'core/order_document.html', data)
```
Child function:
```
@login_required()
def admin_order_document(request):
redirect_url = 'administrator:order_waiting_list'
return create_order(request, redirect_url)
```
When I try to call the admin_order_document function I get:
```
Traceback (most recent call last):
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/project/venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
TypeError: create_order() missing 1 required positional argument: 'redirect_url'
```
If I remove redirect_url from both functions and manually add 'administrator:order_waiting_list' to redirect() it works, but I need to redirect to multiple URLs. So, why am I getting this error?
|
2018/01/11
|
[
"https://Stackoverflow.com/questions/48206553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7161215/"
] |
```
url(r'^orders/create/', views.create_order, name='create_order')
```
This clearly is not going to work, since `create_order` requires `redirect_url` but there is no `redirect_url` kwarg in the regex `r'^orders/create/'`.
Perhaps you want to use the `admin_order_document` view here instead:
```
url(r'^orders/create/', views.admin_order_document, name='create_order')
```
Note you should add a trailing dollar, i.e. `r'^orders/create/$'` unless you want to match `orders/create/something-else` as well as `orders/create/`.
|
If you didn't change the default URL
```
urlpatterns = [
url(r'^admin/', admin_site.urls),
...
]
```
of your admin site, you need to call your function like this:
```
@login_required()
def admin_order_document(request):
redirect_url = 'admin:order_waiting_list'
return create_order(request, redirect_url)
```
That should fix your problem.
| 8,583
|
35,438,785
|
I have a list of numbers and I want to make rows and columns out of the list.
I can brute force it and do the following below in Python 2.7.
```
l = [1,2,3,4,5,6,7,8,9]
r1 = [l[0], l[1], l[2]]
r2 = [l[3], l[4], l[5]]
r3 = [l[6], l[7], l[8]]
c1 = [l[0], l[3], l[6]]
```
But I can't seem to create a function in python to make it work. Is my syntax wrong?
```
def make_row(r, li, arg1, arg2, arg3):
r = [li[arg1], li[arg2], li[arg3]]
make_row(r1, l, 0, 1, 2)
make_row(r2, l, 3, 4, 5)
make_row(r3, l, 6, 7, 8)
```
Can anybody tell me what I'm doing wrong? The `make_row` function does not seem to work correctly.
|
2016/02/16
|
[
"https://Stackoverflow.com/questions/35438785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5936229/"
] |
Your error is a misunderstanding of [how Python passes arguments](http://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/). `r` in the function `make_row` is just a name. When you assign into it, it simply points that name to something new, in the context of your function, leaving the old object and the old name outside the function unchanged.
If you return the result of `make_row`, you can see it generates the correct output; it just does not save it into the variables as you are thinking it would.
---
### However there are easier (and more Pythonic) ways to do what you are trying to do:
This will return a list of your **rows**:
```
[l[i:i+3] for i in xrange(0, len(l), 3)]
```
And this is the equivalent for **columns**:
```
[l[i::3] for i in xrange(0, 3)]
```
If you want rows/columns of a different length, just substitute that number in place of the 3's in these statements.
|
Your function `make_row` works as far as I can tell (I have not tested it), but you need to `return r`.
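A minimal sketch of that fix:

```
def make_row(li, arg1, arg2, arg3):
    # build the row and hand it back instead of rebinding a local name
    return [li[arg1], li[arg2], li[arg3]]

r1 = make_row(l, 0, 1, 2)  # capture the returned row
```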
| 8,584
|
42,230,691
|
For a beginner in Tkinter, and just average in Python, it's hard to find proper material on tkinter. Here is the problem I met (and began to solve). I think the problem comes from the Python version.
I'm trying to build a GUI in OOP style, and I have difficulty combining different classes.
Let's say I have a "small box" (for example, a menu bar), and I want to put it in a "big box". Working from this tutorial (<http://sebsauvage.net/python/gui/index.html>), I'm trying the following code
```
#!usr/bin/env python3.5
# coding: utf-8
import tkinter as tki
class SmallBox(tki.Tk):
def __init__(self,parent):
tki.Tk.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text="small box")
self.box.grid()
self.graphicalStuff = tki.Entry(self.box) # something graphical
self.graphicalStuff.grid()
class BigBox(tki.Tk):
def __init__(self,parent):
tki.Tk.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text='big box containing the small one')
self.graphStuff = tki.Entry(self.box) # something graphical
self.sbox = SmallBox(self)
self.graphStuff.grid()
self.box.grid()
self.sbox.grid()
```
But I got the following error.
```
File "/usr/lib/python3.5/tkinter/__init__.py", line 1871, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
TypeError: create() argument 1 must be str or None, not BigBox
```
|
2017/02/14
|
[
"https://Stackoverflow.com/questions/42230691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6721930/"
] |
The tutorial you are using has an incorrect example. The `Tk` class doesn't have a parent.
Also, you must only create a single instance of `Tk` (or subclass of `Tk`). Tkinter widgets exist in a tree-like hierarchy with a single root. This root widget is `Tk()`. You cannot have more than one root.
|
The code looks quite similar to this one: [Best way to structure a tkinter application](https://stackoverflow.com/questions/17466561/best-way-to-structure-a-tkinter-application)
But there is one slight difference: we're not working with Frame here. And the error points to a problem with screenName, etc., which, intuitively, looks more like a Frame issue.
In fact, I would say that in Python 3 you can no longer use the version from the first tutorial; you have to use Frame and write something like this:
```
#!usr/bin/env python3.5
# coding: utf-8
import tkinter as tki
class SmallBox(tki.Frame):
def __init__(self,parent):
tki.Frame.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text="small box")
self.box.grid()
self.graphicalStuff = tki.Entry(self.box) # something graphical
self.graphicalStuff.grid()
class BigBox(tki.Frame):
def __init__(self,parent):
tki.Frame.__init__(self,parent)
self.parent = parent
self.grid()
self.box = tki.LabelFrame(self,text='big box containing the small one')
self.graphStuff = tki.Entry(self.box) # something graphical
self.sbox = SmallBox(self)
self.graphStuff.grid()
self.box.grid()
self.sbox.grid()
if __name__ == '__main__':
tg = BigBox(None)
tg.mainloop()
```
There aren't many examples and docs to be found (especially for French people, or maybe people not fluent in English), and the tutorial I used is quite common, so maybe this will be useful to someone.
| 8,587
|
45,733,399
|
I have a Javascript file `Commodity.js` like this:
```
commodityInfo = [
["GLASS ITEM", 1.0, 1.0, ],
["HOUSEHOLD GOODS", 3.0, 2.0, ],
["FROZEN PRODUCTS", 1.0, 3.0, ],
["BEDDING", 1.0, 4.0, ],
["PERFUME", 1.0, 5.0, ],
["HARDWARE", 5.0, 6.0, ],
["CURTAIN", 1.0, 7.0, ],
["CLOTHING", 24.0, 8.0, ],
["ELECTRICAL ITEMS", 1.0, 9.0, ],
["PLUMBING MATERIAL", 1.0, 10.0, ],
["FLOWER", 7.0, 11.0, ],
["PROCESSED FOODS.", 1.0, 12.0, ],
["TILES", 1.0, 13.0, ],
["ELECTRICAL", 9.0, 14.0, ],
["PLUMBING", 1.0, 15.0, ]
];
```
I want to iterate through each of the items, like GLASS ITEM, HOUSEHOLD GOODS, FROZEN PRODUCTS, and use the numbers beside it for some calculations using Python.
Can someone tell me how to open the file and iterate through the items like that in Python?
Thank you.
|
2017/08/17
|
[
"https://Stackoverflow.com/questions/45733399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6624726/"
] |
The following code may not be the most efficient, but it works for your case.
What I'm doing here: turn the string (the content of the file) into valid JSON and then load the JSON string into a Python variable.
Note: It would be easier if the content of your JS file was already valid JSON!
```
import re
import json
# for the sake of this code, we will assume you can successfully load the content of your JS file
# into a variable called "file_content"
# E.G. with the following code:
#
# with open('Commodity.js', 'r') as f: #open the file
# file_content = f.read()
# since I do not have such a file, I will fill the variable "manually", based on your sample data
file_content = """
commodityInfo = [
["GLASS ITEM", 1.0, 1.0, ],
["HOUSEHOLD GOODS", 3.0, 2.0, ],
["FROZEN PRODUCTS", 1.0, 3.0, ],
["BEDDING", 1.0, 4.0, ],
["PERFUME", 1.0, 5.0, ],
["HARDWARE", 5.0, 6.0, ],
["CURTAIN", 1.0, 7.0, ],
["CLOTHING", 24.0, 8.0, ],
["ELECTRICAL ITEMS", 1.0, 9.0, ],
["PLUMBING MATERIAL", 1.0, 10.0, ],
["FLOWER", 7.0, 11.0, ],
["PROCESSED FOODS.", 1.0, 12.0, ],
["TILES", 1.0, 13.0, ],
["ELECTRICAL", 9.0, 14.0, ],
["PLUMBING", 1.0, 15.0, ]
];
"""
# get rid of leading/trailing line breaks
file_content = file_content.strip()
# get rid of "commodityInfo = " and the ";" and make the array valid JSON
r = re.match(".*=", file_content)
json_str = file_content.replace(r.group(), "").replace(";", "").replace(", ]", "]")
# now we can load the JSON into a Python variable
# in this case, it will be a list of lists, just as the source is an array of array
l = json.loads(json_str)
# now we can do whatever we want with the list, e.g. iterate it
for item in l:
print(item)
```
|
You can use `for` loops to achieve that.
Something like this would work:
```
for commodity in commodityInfo:
commodity[0] # the first element (e.g: GLASS ITEM)
commodity[1] # the second element (e.g: 1.0)
print(commodity[1] + commodity[2]) #calculate two values
```
You can learn more about `for` loops [here](https://www.tutorialspoint.com/python/python_for_loop.htm)
| 8,588
|
20,554,040
|
I'm new to Django and I am following a tutorial. The problem is that the tutorial uses SQLite, but I want to use a MySQL server instead. I changed the settings following the documentation, but I get the following error when I try to run the server. I already found some proposed fixes, but they didn't work...
For your information, I installed MySQL-Python and reinstalled Django with pip, without any success. I hope you will be able to help me.
Traceback :
```
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\adescamp>cd C:\Users\adescamp\agregmail
C:\Users\adescamp\agregmail>python manage.py runserver 8000
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 280, in execute
translation.activate('en-us')
File "C:\Python27\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate
return _trans.activate(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate
_active.value = translation(language)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch
app = import_module(appname)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "C:\Python27\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 48, in <module>
class Permission(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Python27\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python27\lib\site-packages\django\db\utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python27\lib\site-packages\django\db\utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
```
Thank you!
|
2013/12/12
|
[
"https://Stackoverflow.com/questions/20554040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2904080/"
] |
Your problem is most likely related to buffering in your system, not anything intrinsically wrong with your line of code. I was able to create a test scenario where I could reproduce it - then make it go away. I hope it will work for you too.
Here is my test scenario. First I write a short script that writes the time to a file every 100 ms (approx) - this is my "log file" that generates enough data that `uniq -c` should give me an interesting output every second:
```
#!/bin/ksh
while :
do
echo The time is `date` >> a.txt
sleep 0.1
done
```
(Note - I had to use `ksh` which has the ability to do a sub-second `sleep`)
In another window, I type
```
tail -f a.txt | uniq -c
```
Sure enough, you get the following output appearing every second:
```
9 The time is Thu Dec 12 21:01:05 EST 2013
10 The time is Thu Dec 12 21:01:06 EST 2013
10 The time is Thu Dec 12 21:01:07 EST 2013
9 The time is Thu Dec 12 21:01:08 EST 2013
10 The time is Thu Dec 12 21:01:09 EST 2013
9 The time is Thu Dec 12 21:01:10 EST 2013
10 The time is Thu Dec 12 21:01:11 EST 2013
10 The time is Thu Dec 12 21:01:12 EST 2013
```
etc. No delays. Important to note - **I did not attempt to cut out the time**. Next, I did
```
tail -f a.txt | cut -f7 -d' ' | uniq -c
```
And your problem reproduced - it would "hang" for quite a while (until there was 4k of characters in the buffer, and then it would vomit it all out at once).
A bit of searching online ( <https://stackoverflow.com/a/16823549/1967396> ) told me of a utility called [stdbuf](http://linux.die.net/man/1/stdbuf) . At that reference, it specifically mentions almost exactly your scenario, and they provide the following workaround (paraphrasing to match my scenario above):
```
tail -f a.txt | stdbuf -oL cut -f7 -d' ' | uniq -c
```
And that would be great… except that this utility doesn't exist on my machine (Mac OS) - it is specific to GNU coreutils. This left me unable to test - although it may be a good solution for you.
Never fear - I found the following workaround, based on the `socat` command (which I honestly barely understand, but I adapted from the answer given at <https://unix.stackexchange.com/a/25377> ).
Make a small file called `tailcut.sh` (this is the "long\_running\_command" from the link above):
```
#!/bin/ksh
tail -f a.txt | cut -f7 -d' '
```
Give it execute permissions with `chmod 755 tailcut.sh` . Then issue the following command:
```
socat EXEC:./tailcut.sh,pty,ctty STDIO | uniq -c
```
And hey presto - your lumpy output is lumpy no more. The `socat` sends the output from the script straight to the next pipe, and `uniq` can do its thing.
|
Consider how `uniq -c` works.
In order to print the count, it needs to read all the identical lines, and only once it reads a line that is different from the previous one can it print the line and the number of occurrences.
That's just how the algorithm fundamentally works, and there is no way around it.
You can test this by running
```
touch a
tail -F a | uniq -c
```
And then one after another
```
echo 1 >> a
echo 1 >> a
echo 1 >> a
```
nothing happens. Only after you run
```
echo 2 >> a
```
`uniq` can print that there were 3 "1\n" occurrences.
| 8,589
|
6,929,981
|
I'm trying to build a regex that joins numbers in a string when they have spaces between them, ex:
```
$string = "I want to go home 8890 7463 and then go to 58639 6312 the cinema"
```
The regex should output:
```
"I want to go home 88907463 and then go to 586396312 the cinema"
```
The regex can be either in python or php language.
Thanks!
|
2011/08/03
|
[
"https://Stackoverflow.com/questions/6929981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797495/"
] |
Use a look-ahead to see if the next block is a set of numbers and remove the trailing space. That way, it works for any number of sets (which I suspected you might want):
```
$string = "I want to go home 8890 7463 41234 and then go to 58639 6312 the cinema";
$newstring = preg_replace("/\b(\d+)\s+(?=\d+\b)/", "$1", $string);
// Note: Remove the \b on both sides if you want any words with a number combined.
// The \b tokens ensure that only blocks with only numbers are merged.
echo $newstring;
// I want to go home 8890746341234 and then go to 586396312 the cinema
```
|
Python:
```
import re
text = 'abc 123 456 789 xyz'
text = re.sub(r'(\d+)\s+(?=\d)', r'\1', text) # abc 123456789 xyz
```
This works for any number of consecutive number groups, with any amount of spacing in-between.
| 8,592
|
71,583,214
|
In GitBook, the title by default only shows up while mousing over it.
[](https://i.stack.imgur.com/nXE8q.png)
I want the title to always show up. I inspected the elements,
```html
<div class="book-header" role="navigation">
<!-- Title -->
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i>
<a href=".." >Preface</a>
</h1>
</div>
```
and related CSS is,
```
.book-header h1 a, .book-header h1 a:hover {
color: inherit;
text-decoration: none;
}
```
I add the following CSS,
```css
.book-header h1 a {
display: block !important;
}
```
but it doesn't work.
---
Update:
I followed the answer from @ED Wu and added the following code to the CSS:
```css
.book-header h1 {
opacity: 1;
}
```
The title does show up. However, after adding `opacity:1`, the left sidebar doesn't appear when I click on `三` (nothing happens). An example is [here](https://dianyao.co/python).
[](https://i.stack.imgur.com/ksrfr.png)
|
2022/03/23
|
[
"https://Stackoverflow.com/questions/71583214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067748/"
] |
Because the `.book-header h1` opacity is 0.
Try adding this to your CSS.
```
.book-header h1 {
opacity:1!important;
}
```
|
Try this! More about `color: inherit` [here](https://www.w3schools.com/cssref/css_inherit.asp). You can also try other properties like `z-index`, `opacity`, and `position` if it doesn't work. Thanks :)
```css
.book-header h1 a, .book-header h1 a:hover {
display: block !important;
color: #000 !important;
text-decoration: none;
}
```
```html
<div class="book-header" role="navigation">
<!-- Title -->
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i>
<a href=".." >Preface</a>
</h1>
</div>
```
| 8,593
|
46,908,231
|
I'm a noobie learning to code, and I stumbled upon an incorrect output while practicing some code in Python; please help me with this. I tried my best to find the problem in the code, but I could not find it.
Code:
```
def compare(x,y):
if x>y:
return 1
elif x==y:
return 0
else:
return -1
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
Output:
```
-> python python.py
enter x
10
enter y
5
-1
```
The output I should receive is 1, but the output I actually receive is -1. Please help me find the unseen error in my code.
Thank you.
|
2017/10/24
|
[
"https://Stackoverflow.com/questions/46908231",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8818971/"
] |
`raw_input` always returns a string, so you have to convert the input values into numbers.
```
i=raw_input("enter x\n")
j=raw_input("enter y\n")
print compare(i,j)
```
should be
```
i=int(raw_input("enter x\n"))
j=int(raw_input("enter y\n"))
print compare(i,j)
```
|
Your issue is that `raw_input()` returns a string, not an integer.
Therefore, what your function is actually doing is checking "10" > "5", which is `False` because strings compare character by character and "1" < "5"; so it falls through your `if` block and reaches the `else` clause.
To fix this, you'll need to cast your input strings to integers by wrapping the values in `int()`.
i.e.
`i = int(raw_input("enter x\n"))`.
| 8,594
|
66,385,439
|
I'm working on a project where I need to convert a set of data rows from a database into a `list of OrderedDict` for another purpose, and then use this `list of OrderedDict` to produce a `nested JSON` format in `python`. I'm starting to learn Python. I was able to convert the query response from the database, which is a `list of lists`, into a `list of OrderedDict`.
I have the `list of OrderedDict` as below:
```
{
'OUTBOUND': [
OrderedDict([('Leg', 1), ('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '2'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 145.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 1),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '4'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 111.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 1),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'BDM'),('SeatGroup', 'null'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'A'),('Price', 111.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '1'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 180.0),('Num_Pax', 1),('Channel', 'Web'))]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'ATO'),('SeatGroup', '4'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 97.0),('Num_Pax', 1),('Channel', 'Web')]),
OrderedDict([('Leg', 2),('SessionID', 'W12231fwfegwcaa2'),('FeeCode', 'BDM'),('SeatGroup', 'null'),
('Currency', 'MXN'),('Modality', 'VB'),('BookingClass', 'U'),('Price', 97.0),('Num_Pax', 1),('Channel', 'Web')])
]
}
```
And I needed the nested format like below:
```
{
"OUTBOUND": [
{
"Leg": 1,
"SessionID": "W12231fwfegwcaa2",
"Modality": "VB",
"BookingClass": "A",
"FeeCodes":[
{
"FeeCode": "ATO",
"Prices":
[
{
"SeatGroup": "2",
"Price": 145.0,
"Currency": "MXN"
},
{
"SeatGroup": "4",
"Price": 111.0,
"Currency": "MXN"
}
]
},
{
"FeeCode": "VBABDM",
"Prices":
[
{
"SeatGroup": "null",
"Price": 111.0,
"Currency": "MXN"
}
]
}
],
"Num_Pax": 1,
"Channel": "Web"
},
{
"Leg": 2,
"SessionID": "W12231fwfegwcaa2",
"Modality": "VB",
"BookingClass": "U",
"FeeCodes":[
{
"FeeCode": "ATO",
"Prices":
[
{
"SeatGroup": "1",
"Price": 180.0,
"Currency": "MXN"
},
{
"SeatGroup": "4",
"price": 97.0,
"Currency": "MXN"
}
]
},
{
"FeeCode": "VBABDM",
"Prices":
[
{
"SeatGroup": "null",
"price": 97.0,
"Currency": "MXN"
}
]
}
],
"Num_Pax": 1,
"Channel": "Web"
}
]
}
```
If I'm not wrong, I need to group by `Leg`, `SessionID`, `Modality`, `BookingClass`, `NumPax` and `Channel`, and group the `FeeCode`, `SeatGroup`, `Price` and `Currency` into the nested format above, but I am unable to move ahead with how to loop and group for the nesting.
It would be great if I could get some help. Thanks
|
2021/02/26
|
[
"https://Stackoverflow.com/questions/66385439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2699684/"
] |
I was able to write Python code to get the format I needed using simple looping, with a couple of changes in the output: the fields SessionID, Num_Pax and Channel are taken outside, and then the OUTBOUND field and the fields within it are generated.
Instead of OrderedDict, I used a list of lists as input, which I convert into a pandas DataFrame and work with to get the nested format.
Below is the code I used:
```
import collections

import pandas as pd

response_data = collections.OrderedDict()  # container for the final response (assumed initialized here)

outbound_df = pd.DataFrame(response_outbound, columns=All_columns)
Common_columns = ['Leg', 'Modality', 'BookingClass']
### Taking SessionID, AirlineCode,Num_Pax and Channel outside OUTBOUND part as they are common for all the leg level data
response_data['SessionID'] = outbound_df['SessionID'].unique()[0]
response_data['Num_Pax'] = int(outbound_df['Num_Pax'].unique()[0])
response_data['Channel'] = outbound_df['Channel'].unique()[0]
temp_data = []
Legs = outbound_df['Leg'].unique()
for i in Legs:
subdata = outbound_df[outbound_df['Leg']==i]
### Initializing leg_data dict
leg_data = collections.OrderedDict()
### Populating common fields of the leg (Leg, Modality,BookingClass)
for j in Common_columns:
if(j=='Leg'):
leg_data[j] = int(subdata[j].unique()[0])
else:
leg_data[j] = subdata[j].unique()[0]
leg_data['FeeCodes'] = []
FeeCodes = subdata['FeeCode'].unique()
for fc in FeeCodes:
subdata_fees = subdata[subdata['FeeCode']==fc]
Prices = {'FeeCode':fc, "Prices":[]}
for _,rows in subdata_fees.iterrows():
data = {}
data['SeatGroup'] = rows['SeatGroup']
data['Price'] = float(rows['Price'])
data['Currency'] = rows['Currency']
Prices["Prices"].append(data)
leg_data["FeeCodes"].append(Prices)
temp_data.append(leg_data)
response_data["OUTBOUND"] = temp_data
```
I can just do `json.dumps` on `response_data` to get the JSON format, which will be sent on to the next steps.
Below is the output format I get:
```
{
"SessionID":"W12231fwfegwcaa2",
"Num_Pax":1,
"Channel":"Web",
"OUTBOUND":[
{
"Leg":1,
"Modality":"VB",
"BookingClass":"A",
"FeeCodes":[
{
"FeeCode":"ATO",
"Prices":[
{
"SeatGroup":"2",
"Price":145.0,
"Currency":"MXN"
},
{
"SeatGroup":"4",
"Price":111.0,
"Currency":"MXN"
}
]
},
{
"FeeCode":"VBABDM",
"Prices":[
{
"SeatGroup":"null",
"Price":111.0,
"Currency":"MXN"
}
]
}
]
},
{
"Leg":2,
"Modality":"VB",
"BookingClass":"U",
"FeeCodes":[
{
"FeeCode":"ATO",
"Prices":[
{
"SeatGroup":"1",
"Price":180.0,
"Currency":"MXN"
},
{
"SeatGroup":"4",
"price":97.0,
"Currency":"MXN"
}
]
},
{
"FeeCode":"VBABDM",
"Prices":[
{
"SeatGroup":"null",
"price":97.0,
"Currency":"MXN"
}
]
}
]
}
]
}
```
Please let me know if we can shorten the code in terms of lengthy iterations or any other changes. Thanks.
PS: Sorry for my editing mistakes
|
Assuming that you stored the dictionary in some variable `foo`, you can do:
```py
import json
json.dumps(foo)
```
And be careful: you added an extra bracket in the 4th element of the `OUTBOUND` list.
| 8,596
|
67,055,004
|
My Azure DevOps page looks like:
[](https://i.stack.imgur.com/YBdJx.png)
I have 4 pandas dataframes.
I need to create 4 sub pages in Azure devops wiki from each dataframe.
Say, Sub1 from first dataframe, Sub2 from second dataframe and so on.
My result should be in tab. The result should look like :
[](https://i.stack.imgur.com/B66yA.png)
Is it possible to create subpages through the API?
I have referenced the following docs, but I am unable to make sense of them. Any input will be helpful. Thanks.
<https://github.com/microsoft/azure-devops-python-samples/blob/main/API%20Samples.ipynb>
<https://learn.microsoft.com/en-us/rest/api/azure/devops/wiki/pages/create%20or%20update?view=azure-devops-rest-6.0>
|
2021/04/12
|
[
"https://Stackoverflow.com/questions/67055004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11049287/"
] |
Use the value() method:
```
$user->image()->value('image');
```
From [Eloquent documentation](https://laravel.com/docs/8.x/queries#retrieving-a-single-row-column-from-a-table)
>
> If you don't need an entire row, you may extract a single value from a record using the value method. This method will return the value of the column directly:
>
>
> $email = DB::table('users')->where('name', 'John')->value('email');
>
>
>
You can set it as a user attribute.
```
public function getProfileImageAttribute()
{
return optional($this->image)->image;
//or
return $this->image->image ?? null;
//or
return $this->image->image ?? 'path/of/default/image';
}
```
Now you can call it like this:
```
$user->profile_image;
```
|
You can create another function inside your model and access the previous method, like this:
```
public function image()
{
return $this->hasOne(UserImages::class, 'user_id', 'id')->latest();
}
public function avatar()
{
return $this->image->image ?: null;
//OR
return $this->image->image ?? null;
//OR
return !is_null($this->image) ? $this->image->image : null;
//OR
return optional($this->image)->image;
}
```
And it will be accessible with `$user->avatar();`
**As per the discussion, you are sending the response to an API**
```
$this->user = $this->user->where('id', "!=", $current_user->id)->with(['chats','Image:user_id,image'])->paginate(50);
```
This will help you, but it would be better to use Resources for API responses to transform specific fields.
| 8,597
|
30,114,579
|
I am running Ubuntu 12.04 and running programs through the terminal. I have a file that compiles and runs without any issues when I am in its directory. Example below:
```
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$ pwd
/home/david/Documents/BudgetAutomation/BillList
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$ python3.4 bill.py
./otherlisted.txt
./monthlisted.txt
david@block-ubuntu:~/Documents/BudgetAutomation/BillList$
```
Now when I go back one directory and try running the same piece of code, I get an error message, `ValueError: need more than 1 value to unpack`. Below is what happens when I run the sample code one folder back and then the sample code below that.
```
david@block-ubuntu:~/Documents/BudgetAutomation$ python3.4 /home/david/Documents/BudgetAutomation/BillList/bill.py
Traceback (most recent call last):
File "/home/david/Documents/BudgetAutomation/BillList/bill.py", line 22, in <module>
bill_no, bill_name, trash = line.split('|', 2)
ValueError: need more than 1 value to unpack
```
The code, `bill.py`, is below. This program reads two text files from the folder it is located in and parses the lines into variables.
```
#!/usr/bin/env python
import glob
# gather all txt files in directory
arr = glob.glob('./*.txt')
arrlen = int(len(arr))
# create array to store list of bill numbers and names
list_num = []
list_name = []
# for loop that parses lines into appropriate variables
for i in range(arrlen):
with open(arr[i]) as input:
w = 0 ## iterative variable for arrays
for line in input:
list_num.append(1) ## initialize arrays
list_name.append(1)
# split line into variables.. trash is rest of line that has no use
bill_no, bill_name, trash = line.split('|', 2)
# stores values in array
list_num[w] = bill_no
list_name[w] = bill_name
w += 1
```
What is going on here? Am I not running the command in the terminal correctly? Another thing to note is that I eventually call this code from another file and it will not run the for loop; I am assuming that is because it doesn't run unless it's called from its own folder/directory?
|
2015/05/08
|
[
"https://Stackoverflow.com/questions/30114579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4362951/"
] |
Your problem starts in line 5:
```
arr = glob.glob('./*.txt')
```
You are telling glob to look in the current working directory for all .txt files. Since you are one directory up, you do not have these files.
You are getting a ValueError because the line variable is empty.
As it is written, you will need to run it from that directory.
Edit:
The way I see it, you have three separate options.

1. You could simply run the script with the full path (assuming it is executable): `~/Documents/BudgetAutomation/BillList/bill.py`
2. You could put the full path into the file (although not very Pythonic): `arr = glob.glob('/home/[username]/Documents/BudgetAutomation/BillList/*.txt')`
3. You could use [sys.argv](https://docs.python.org/3/library/sys.html?highlight=sys.argv#sys.argv) to pass the path to the script. This would be my personal preferred way. Use `os.path.join` to put the correct slashes: `arr = glob.glob(os.path.join(sys.argv[1], '*.txt'))`
|
You don't need to create that range object to iterate over the glob result. You can just do it like this:
```
for file_path in arr:
with open(file_path) as text_file:
#...code below...
```
The reason that exception is raised, I guess, is that some text files contain content that does not conform to your expected format. You read a line from such a file, which may be something like "foo|bar", and then the result of splitting it is ["foo", "bar"].
If you want to avoid this exception, you could just catch it:
```
try:
bill_no, bill_name, trash = line.split('|', 2)
except ValueError:
    # You can do something more meaningful than just "pass"
pass
```
| 8,599
|
14,965,542
|
I have a huge file from which I need data for specific entries. File structure is:
```
>Entry1.1
#size=1688
704 1 1 1 4
979 2 2 2 0
1220 1 1 1 4
1309 1 1 1 4
1316 1 1 1 4
1372 1 1 1 4
1374 1 1 1 4
1576 1 1 1 4
>Entry2.1
#size=6251
6110 3 1.5 0 2
6129 2 2 2 2
6136 1 1 1 4
6142 3 3 3 2
6143 4 4 4 1
6150 1 1 1 4
6152 1 1 1 4
>Entry3.2
#size=1777
AND SO ON-----------
```
What I have to achieve is to extract all the lines (the complete record) for certain entries. For example, if I need the record for Entry1.1, I can use the entry name '>Entry1.1' up to the next '>' as markers in a regex to extract the lines in between. But I do not know how to build such complex regex expressions. Once I have such an expression, I will put it in a for loop:
```
For entry in entrylist:
GET record from big_file
DO some processing
WRITE in result file
```
What could the regex be to perform such extraction of records for specific entries? Is there a more Pythonic way to achieve this? I would appreciate your help on this.
AK
|
2013/02/19
|
[
"https://Stackoverflow.com/questions/14965542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1031842/"
] |
With a regex:
```
import re
ss = '''
>Entry1.1
#size=1688
704 1 1 1 4
979 2 2 2 0
1220 1 1 1 4
1309 1 1 1 4
1316 1 1 1 4
1372 1 1 1 4
1374 1 1 1 4
1576 1 1 1 4
>Entry2.1
#size=6251
6110 3 1.5 0 2
6129 2 2 2 2
6136 1 1 1 4
6142 3 3 3 2
6143 4 4 4 1
6150 1 1 1 4
6152 1 1 1 4
>Entry3.2
#size=1777
AND SO ON-----------
'''
patbase = '(>Entry *%s(?![^\n]+?\d).+?)(?=>|(?:\s*\Z))'
while True:
x = raw_input('What entry do you want ? : ')
found = re.findall(patbase % x, ss, re.DOTALL)
if found:
print 'found ==',found
for each_entry in found:
print '\n%s\n' % each_entry
else:
print '\n ** There is no such an entry **\n'
```
Explanation of `'(>Entry *%s(?![^\n]+?\d).+?)(?=>|(?:\s*\Z))'` :
1)
==
`%s` receives the reference of entry: 1.1 , 2 , 2.1 etc
2)
==
The portion `(?![^\n]+?\d)` does a verification.
`(?![^\n]+?\d)` is a negative look-ahead assertion that says that what comes after `%s` must not match `[^\n]+?\d`, that is to say any characters `[^\n]+?` followed by a digit `\d`.
I write `[^\n]` to mean "any character except a newline `\n`".
I am obliged to write this instead of simply `.+?` because I put the flag `re.DOTALL`, and the pattern portion `.+?` would otherwise match until the end of the entry.
However, I only want to verify that after the entered reference (represented by %s in the pattern), there won't be additional digits before the end OF THE LINE, entered by error.
All that is because if there is an Entry2.1 but no Entry2, and the user enters only 2 because he wants Entry2 and no other, the regex would detect the presence of Entry2.1 and would yield it, though the user really wanted Entry2.
3)
==
At the end of `'(>Entry *%s(?![^\n]+?\d).+?)'`, the part `.+?` will catch the complete block of the Entry, because the dot represents any character, including a newline `\n`.
It's for this purpose that I put the flag `re.DOTALL`, in order to make the pattern portion `.+?` able to match across newlines until the end of the entry.
4)
==
I want the matching to stop at the end of the desired Entry, not inside the next one, so that the group defined by the parentheses in `(>Entry *%s(?![^\n]+?\d).+?)` will catch exactly what we want.
Hence, I put at the end a positive look-ahead assertion `(?=>|(?:\s*\Z))` that says that the character at which the non-greedy `.+?` must stop matching is either `>` (beginning of the next Entry) or the end of the string `\Z`.
As it is possible that the end of the last Entry isn't exactly the end of the entire string, I put `\s*`, which means "possible whitespace before the very end".
So `\s*\Z` means "there can be whitespace characters before we hit the end of the string".
Whitespace characters are a `blank`, `\f`, `\n`, `\r`, `\t`, and `\v`.
|
Not entirely sure what you're asking. Does this get you any closer? It will put each entry name in as a dictionary key, with a list of that entry's lines as the value, assuming the file is formatted like I believe it is. Does it have duplicate entries? Here's what I've got:
```
entries = {}
key = ''
for entry in open('entries.txt'):
if entry.startswith('>Entry'):
key = entry[1:].strip() # removes > and newline
entries[key] = []
else:
entries[key].append(entry)
```
| 8,604
|
5,086,419
|
I wrote the following script in python to convert datetime from any given timezone to EST.
```
from datetime import datetime, timedelta
from pytz import timezone
import pytz
utc = pytz.utc
# Converts char representation of int to numeric representation '121'->121, '-1729'->-1729
def toInt(ch):
ret = 0
minus = False
if ch[0] == '-':
ch = ch[1:]
minus = True
for c in ch:
ret = ret*10 + ord(c) - 48
if minus:
ret *= -1
return ret
# Converts given datetime in tzone to EST. dt = 'yyyymmdd' and tm = 'hh:mm:ss'
def convert2EST(dt, tm, tzone):
y = toInt(dt[0:4])
m = toInt(dt[4:6])
d = toInt(dt[6:8])
hh = toInt(tm[0:2])
mm = toInt(tm[3:5])
ss = toInt(tm[6:8])
# EST timezone and given timezone
est_tz = timezone('US/Eastern')
given_tz = timezone(tzone)
fmt = '%Y-%m-%d %H:%M:%S %Z%z'
# Initialize given datetime and convert it to local/given timezone
local = datetime(y, m, d, hh, mm, ss)
local_dt = given_tz.localize(local)
est_dt = est_tz.normalize(local_dt.astimezone(est_tz))
dt = est_dt.strftime(fmt)
print dt
return dt
```
When I call this method with
convert2EST('20110220', '11:00:00', 'America/Sao\_Paulo')
output is '2011-02-20 08:00:00 EST-0500' but DST in Brazil ended on 20th Feb and correct answer should be '2011-02-20 09:00:00 EST-0500'.
From some experimentation I figured out that according to pytz Brazil's DST ends on 27th Feb which is incorrect.
Does pytz contain wrong data, or am I missing something? Any help or comments will be much appreciated.
|
2011/02/23
|
[
"https://Stackoverflow.com/questions/5086419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/629424/"
] |
Firstly, a slightly less insane implementation:
```
import datetime
import pytz
EST = pytz.timezone('US/Eastern')
def convert2EST(date, time, tzone):
dt = datetime.datetime.strptime(date+time, '%Y%m%d%H:%M:%S')
tz = pytz.timezone(tzone)
dt = tz.localize(dt)
return dt.astimezone(EST)
```
Now, we try to call it:
```
>>> print convert2EST('20110220', '11:00:00', 'America/Sao_Paulo')
2011-02-20 09:00:00-05:00
```
As we see, we get the correct answer.
Update: I got it!
Brazil changed its daylight savings rules in 2008. It's unclear what they were before that, but likely your data is old.
This is probably not pytz's fault, as pytz is able to use your operating system's database. You probably need to update your operating system. This is (I guess) the reason I got the correct answer even with a pytz from 2005: it used the (updated) data from my OS.
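As a quick sanity check, you can print which tz database your pytz carries (a small sketch; these attributes exist in current pytz releases):

```
import pytz
print pytz.VERSION        # pytz release
print pytz.OLSON_VERSION  # version of the bundled Olson/tz database, e.g. '2011b'
```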
|
Seems like you have answered your own question. If pytz says DST ends on 27 Feb in Brazil, it's wrong. DST in Brazil ends on the [third Sunday of February](http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=pt&tl=en&u=http%3A%2F%2Fpcdsh01.on.br%2FDecHV.html), unless that Sunday falls during Carnival; it does not this year, so DST is not delayed.
That said, you seem to be rolling your own converter unnecessarily. You should look at the [`time`](http://docs.python.org/library/time.html) module, which eases conversions between GMT and local time, among other things.
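For instance, a quick look at what `time` gives you out of the box:

```
import time
t = time.time()          # seconds since the epoch
print time.gmtime(t)     # broken-down time in GMT/UTC
print time.localtime(t)  # broken-down time in the local timezone
```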
| 8,607
|
30,368,275
|
I have a file test.robot with test cases.
How can I get the list of these test cases, from the command line or Python, without actually running them?
|
2015/05/21
|
[
"https://Stackoverflow.com/questions/30368275",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4923721/"
] |
You can check out [testdoc tool](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#test-data-documentation-tool-testdoc). Like explained in the doc, "The created documentation is in HTML format and it includes name, documentation and other metadata of each test suite and test case".
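A minimal invocation, assuming your suite lives in a `tests/` directory (the output filename is arbitrary):

```
python -m robot.testdoc tests/ testdoc.html
```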
|
**For v3.2 and up:**
In RobotFramework 3.2 [the parsing APIs have been rewritten](https://github.com/robotframework/robotframework/blob/master/doc/releasenotes/rf-3.2.rst#parsing-apis-have-been-rewritten), so the answer from Bryan Oakley won't work on these versions anymore.
The proper code that is compatible with both pre-3.2 and post-3.2 versions is the following:
```
from robot.running import TestSuiteBuilder
from robot.model import SuiteVisitor
class TestCasesFinder(SuiteVisitor):
def __init__(self):
self.tests = []
def visit_test(self, test):
self.tests.append(test)
builder = TestSuiteBuilder()
testsuite = builder.build('testsuite/')
finder = TestCasesFinder()
testsuite.visit(finder)
print(*finder.tests)
```
Further reading:
* [Visitor model](https://robot-framework.readthedocs.io/en/latest/autodoc/robot.model.html#module-robot.model.visitor)
* [`TestSuiteBuilder` class reference](https://robot-framework.readthedocs.io/en/latest/autodoc/robot.running.builder.html#robot.running.builder.builders.TestSuiteBuilder)
| 8,608
|
50,254,723
|
I updated my Python version from 3.6.4 to 3.6.5 today, because Heroku recommends version 3.6.5 when deploying. When I pushed, PowerShell showed the following output:
```
Writing objects: 100% (35/35), 11.68 KiB | 0 bytes/s, done.
Total 35 (delta 3), reused 0 (delta 0)
remote: Compressing source files... done.
remote: -----> Python app detected
remote: ! The latest version of Python 3 is python-3.6.5 (you are using ÿþpython-3.6.5, which is unsupported).
remote: ! We recommend upgrading by specifying the latest version (python-3.6.5).
remote: Learn More: https://devcenter.heroku.com/articles/python-runtimes
remote: -----> Installing ÿþpython-3.6.5
remote: ! Requested runtime (ÿþpython-3.6.5) is not available for this stack (heroku-16).
remote: ! Aborting. More info: https://devcenter.heroku.com/articles/python-support
remote: ! Push rejected, failed to compile Python app.
remote:
remote: ! Push failed
remote:
remote: ! Push rejected to XXXXXXXX.
remote:
To https://git.heroku.com/XXXXXXXX.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/XXXXXXXX.git
```
After changing my `runtime.txt` file to UTF-8, I now get the following error:
```
Writing objects: 100% (35/35), 11.68 KiB | 0 bytes/s, done.
Total 35 (delta 3), reused 0 (delta 0)
remote: Compressing source files... done.
remote: -----> Python app detected
remote: ! The latest version of Python 3 is python-3.6.5 (you are using python-3.6.5, which is unsupported).
remote: ! We recommend upgrading by specifying the latest version (python-3.6.5).
remote: Learn More: https://devcenter.heroku.com/articles/python-runtimes
remote: -----> Installing python-3.6.5
remote: ! Requested runtime (python-3.6.5) is not available for this stack (heroku-16).
remote: ! Aborting. More info: https://devcenter.heroku.com/articles/python-support
remote: ! Push rejected, failed to compile Python app.
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to XXXXXXXX.
remote:
To https://git.heroku.com/XXXXXXXX.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/XXXXXXXX.git
```
Why is `python-3.6.5` being rejected? Isn't that exactly what Heroku says is the default version?
|
2018/05/09
|
[
"https://Stackoverflow.com/questions/50254723",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9556991/"
] |
Heroku believes that your `runtime.txt` contains some extra characters:
```
ÿþpython-3.6.5
```
This is probably [byte-order mark for a file encoded as UTF-16 in little-endian order](https://en.wikipedia.org/wiki/Byte_order_mark#UTF-16). Make sure you're using a sane encoding for that file (and others). UTF-8 is a good choice in virtually all situations.
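If you prefer to fix the file programmatically rather than in an editor, here is a minimal sketch (assuming `runtime.txt` sits in your project root):

```
# Rewrite runtime.txt without a byte-order mark
with open('runtime.txt', 'rb') as f:
    raw = f.read()

if raw.startswith(b'\xff\xfe'):     # UTF-16 LE BOM, rendered as 'ÿþ'
    text = raw.decode('utf-16')
else:
    text = raw.decode('utf-8-sig')  # also strips a UTF-8 BOM if present

with open('runtime.txt', 'w') as f:
    f.write(text.strip() + '\n')
```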
|
You're trying to install `ÿþpython-3.6.5` not `python-3.6.5` as the console output suggests. Remove `ÿþ` and it should work as expected.
| 8,610
|
65,030,618
|
TLDR
====
One of my models contains data that could either be a charfield, textfield, or boolfield based on a choice made in a separate model that it is connected to through a foreignkey. What's the most efficient way to model this in Django?
My problem
==========
I'm putting together a Django app that outputs a python {'key': 'value'} dictionary in a somewhat lengthy two-step process. In the first step, users design a custom 'Template' that contains a collection of 'TemplateEntries'. In pseudo-code:
```
Template MODEL
foreign key: User
description = textfield
name = charfield
TemplateEntry MODEL
foreign key: Template
key = charfield
value_type = charfield(choices='CharField', 'TextField', 'BoolField')
description = textfield
order = positivesmallintegerfield (So users can re-arrange the order of TemplateEntries when creating the Template)
```
EXAMPLE TEMPLATE FORM #1
1. [Description] | [Key] | Field Type: [Choice between **Char Field**, Text Field, Bool Field]
2. [Description] | [Key] | Field Type: [Choice between **Char Field**, Text Field, Bool Field]
3. [Description] | [Key] | Field Type: [Choice between Char Field, Text Field, **Bool Field**]
4. [Description] | [Key] | Field Type: [Choice between **Char Field**, Text Field, Bool Field]
5. [Description] | [Key] | Field Type: [Choice between Char Field, **Text Field**, Bool Field]
In the second step, the same or different user is presented with a form based off of the Template and with the appropriate field for each of the values. In pseudocode:
```
EntrySet MODEL
foreignkey: User
foreignkey: Template
name = charfield
Entry MODEL
foreignkey: EntrySet
foreignkey: TemplateEntry
value = ??
```
EXAMPLE ENTRYSET FORM FOR TEMPLATE #1
(the description for what entry represents is carried over from TemplateEntry)
1. [Char Field]
2. [Char Field]
3. True/False (Bool Field)
4. [Char Field]
5. [----------------Text Field-----------------]
Finally, the dictionary is created by combining the key field from each TemplateEntry in Template with the value field from each Entry in EntrySet.
The problem I'm having is that I don't know how to model the 'value' field in the entry model, since it could take the form of a charfield, textfield, or boolfield. My current approach is to break it up into three different fields: value\_short = charfield, value\_long = textfield, value\_bool = boolfield and to iterate through each of them when creating the dictionary, only taking the value of whichever field has content. However, this seems inefficient and would result in errors if more than one of them contained a value. Any suggestions on how to fix this issue or improve my model would be appreciated!
|
2020/11/27
|
[
"https://Stackoverflow.com/questions/65030618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14621609/"
] |
If your database supports [jsonfield](https://docs.djangoproject.com/en/3.1/ref/contrib/postgres/fields/#jsonfield) and you want to keep it as a single field, you can use it.
If it doesn't, then first of all (if I'm not missing something) you can store both the textfield and charfield values in a single textfield instead of separating them. Other than that, the best option is to declare these fields with `null=True, blank=True`. Then, add data to the relevant field according to its type and return whichever is not null to the user. And if you need to nullify the previous value when the data type changes, I recommend doing that in the view, the form, or a `pre_save` signal.
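A minimal sketch of the JSONField variant, following the pseudo-code in the question (field names are illustrative; `models.JSONField` is available from Django 3.1 on):

```
from django.db import models

class Entry(models.Model):
    entry_set = models.ForeignKey('EntrySet', on_delete=models.CASCADE)
    template_entry = models.ForeignKey('TemplateEntry', on_delete=models.CASCADE)
    # A JSON value can hold a short string, a long text, or a bool alike
    value = models.JSONField(null=True, blank=True)
```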
|
I feel that `contenttypes` will be useful for you, and help you to prevent reinventing the wheel:
<https://docs.djangoproject.com/en/3.1/ref/contrib/contenttypes/>
| 8,611
|
2,030,970
|
I've got a series of (x,y) values that I want to plot a 2d histogram of using python's matplotlib. Using hexbin, I get something like this:
[](https://i.stack.imgur.com/FUL1M.png)
But I'm looking for something like this:
[](https://i.stack.imgur.com/ASwJl.png)
Example Code:
```
from matplotlib import pyplot as plt
import random
foo = lambda : random.gauss(0.0,1.0)
x = [foo() for i in xrange(5000)]
y = [foo() for i in xrange(5000)]
pairs = zip(x,y)
#using hexbin I supply the x,y series and it does the binning for me
hexfig = plt.figure()
hexplt = hexfig.add_subplot(1,1,1)
hexplt.hexbin(x, y, gridsize = 20)
#to use imshow I have to bin the data myself
def histBin(pairsData,xbins,ybins=None):
if (ybins == None): ybins = xbins
xdata, ydata = zip(*pairsData)
xmin,xmax = min(xdata),max(xdata)
xwidth = xmax-xmin
ymin,ymax = min(ydata),max(ydata)
ywidth = ymax-ymin
def xbin(xval):
xbin = int(xbins*(xval-xmin)/xwidth)
return max(min(xbin,xbins-1),0)
def ybin(yval):
ybin = int(ybins*(yval-ymin)/ywidth)
return max(min(ybin,ybins-1),0)
hist = [[0 for x in xrange(xbins)] for y in xrange(ybins)]
for x,y in pairsData:
hist[ybin(y)][xbin(x)] += 1
extent = (xmin,xmax,ymin,ymax)
return hist,extent
#plot using imshow
imdata,extent = histBin(pairs,20)
imfig = plt.figure()
implt = imfig.add_subplot(1,1,1)
implt.imshow(imdata,extent = extent, interpolation = 'nearest')
plt.draw()
plt.show()
```
It seems like there should already be a way to do this without writing my own "binning" method and using imshow.
|
2010/01/08
|
[
"https://Stackoverflow.com/questions/2030970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43665/"
] |
Numpy has a function called [histogram2d](http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html), whose docstring also shows you how to visualize it using Matplotlib. Add `interpolation='nearest'` to the imshow call to disable the interpolation.
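A minimal sketch of that approach, substituting random data for the question's series:

```
import numpy as np
from matplotlib import pyplot as plt

x = np.random.randn(5000)
y = np.random.randn(5000)

H, xedges, yedges = np.histogram2d(x, y, bins=20)
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]

# histogram2d indexes H as [x, y]; transpose so imshow's rows run along y
plt.imshow(H.T, extent=extent, origin='lower', interpolation='nearest')
plt.show()
```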
|
Is `matplotlib.pyplot.hist` what you're looking for?
```
>>> help(matplotlib.pyplot.hist)
Help on function hist in module matplotlib.pyplot:
hist(x, bins=10, range=None, normed=False, weights=None, cumulative=False,
     bottom=None, histtype='bar', align='mid', orientation='vertical',
     rwidth=None, log=False, hold=None, **kwargs)
call signature::
hist(x, bins=10, range=None, normed=False, cumulative=False,
bottom=None, histtype='bar', align='mid',
orientation='vertical', rwidth=None, log=False, **kwargs)
Compute and draw the histogram of *x*. The return value is a
tuple (*n*, *bins*, *patches*) or ([*n0*, *n1*, ...], *bins*,
[*patches0*, *patches1*,...]) if the input contains multiple
data.
```
| 8,612
|
5,518,927
|
I need to crawl a list of several thousand hosts and find at least two files rooted there that are larger than some value, given as an argument. Can any popular (python based?) tool possibly help?
|
2011/04/01
|
[
"https://Stackoverflow.com/questions/5518927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/649805/"
] |
Here is an example of how you can get the file size of a file on an HTTP server.
```
import urllib2
def sizeofURLResource(url):
"""
    Return the size of a resource at 'url' in bytes
"""
info = urllib2.urlopen(url).info()
return info.getheaders("Content-Length")[0]
```
There is also a library for building web scrapers here: <http://dev.scrapy.org/>, but I don't know much about it (just googled it, honestly).
|
Here is how I did it. See the code below.
```
import urllib2
url = 'http://www.ueseo.org'
r = urllib2.urlopen(url)
print len(r.read())
```
| 8,621
|
57,915,312
|
I am not sure how exactly to ask this question so please forgive my ignorance.
I am running a function over many files. After importing a file into `df`, I write the outcome to a CSV file.
```
df=pd.read_csv("C:\Users\filename.csv ")
years = 5
days = 365
out_put, productivity= timeresult.input_data.outbuild(df, year, days)
productivity.to_csv("Jan.csv")
```
However, this is painful to do for the many CSVs I am working with. So I put all the CSV files into one big folder and imported the file names into a list.
```
filelist=["C:\Users\jan.csv", "C:\Users\feb.csv", "C:\Users\mar.csv"]
```
Is there a way to have Python loop over the list, use each file in place of `df`, and then send each result to a CSV?
I tried this but failed
```
filelist = []
for x in filelist:
out_put, productivity= timeresult.input_data.outbuild(x, year, days)
filelist.append(productivity)
```
My goal was to have it run on every CSV file name in the list and then create a CSV for each file.
|
2019/09/12
|
[
"https://Stackoverflow.com/questions/57915312",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12016027/"
] |
If I understand your problem correctly, this code is for you:
```
import pandas as pd

years = 5
days = 365
filelist = ["C:\\Users\\jan.csv", "C:\\Users\\feb.csv", "C:\\Users\\mar.csv"]
for filepath in filelist:
    df = pd.read_csv(filepath)
    out_put, productivity = timeresult.input_data.outbuild(df, years, days)
    # name the index after the file ("jan", "feb", ...) so it appears in the output
    productivity.index.name = filepath.split('\\')[-1].split('.')[0]
    productivity.to_csv(filepath)
```
An example of the dataframe obtained could be the following:
From `jan.csv`:
```
costPrice currencyCode endDateValid
jan
0 83.56 GBP 2011-05-01
1 99.56 EUR 2017-05-01
```
From `feb.csv`:
```
costPrice currencyCode endDateValid
feb
0 93.89 EUR 2014-02-01
1 59.56 EUR 2012-07-01
```
**Tip**: If you want to get the list of names of all *.csv* files in the *"C:\Users\"* folder:
```
import glob
filelist = glob.glob("C:\\Users\\*.csv")
```
|
You were so close; you passed the file location instead of the DataFrame, and appended to the list you were iterating over. Use this code:
```
filelist = ["C:\\Users\\jan.csv", "C:\\Users\\feb.csv", "C:\\Users\\mar.csv"]
results = []
for location in filelist:
    df = pd.read_csv(location)
    out_put, productivity = timeresult.input_data.outbuild(df, years, days)
    results.append(productivity)
```
| 8,622
|
56,438,069
|
I have images in a sub-folder. Let's call the folder `images`.
I have a Python program which takes an image argument from the folder, one at a time; the images are named in sequential order (1.jpg, 2.jpg, 3.jpg and so on).
The call to the program is : `python prog.py 1.jpg`
What will be a shell script to automate this ?
Please ask for any additional information.
|
2019/06/04
|
[
"https://Stackoverflow.com/questions/56438069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6916919/"
] |
Try this from the folder that contains images/:
```
for i in images/*.jpg; do
    python prog.py "$i"
done
```
|
```
cd IMG_DIR
for item in [0-9]*.jpg
do
    python prog.py "$item"
    echo "Item processed : $item"
done
```
You can also pass the image directory as a shell script argument.
| 8,624
|
37,023,460
|
I'm transitioning from discretization of a continuous state space to function approximation. My action and state spaces (3D) are both continuous. My problem suffers badly from aliasing errors and shows nearly no convergence after training for a long time. Also, I just cannot figure out how to choose the right step size for discretization.
Reading Sutton & Barto helped me understand the power of tile coding, i.e. having the state space described by multiple offset tilings overlapping each other. Given a continuous query/state, it is described by N basis functions, each corresponding to a single block/square of the criss-cross tilings it belongs to.
1) How is the performance different from going for a highly discretized state space?
2) Can anyone please point me to a working example of tile coding in python? I am learning too many things at the same time and getting super confused! (Q learning, discretization dilemma, tile coding, function approximation and handling the problem itself)
There doesn't seem to be any exhaustive Python coding tutorials for continuous problems in RL.
|
2016/05/04
|
[
"https://Stackoverflow.com/questions/37023460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4284161/"
] |
As Simon's comment describes, a key difference between a highly discretized state space and a function approximator using tile coding is the ability of tile coding to generalize the values learned from one state to other similar states (i.e., tiles can overlap). With a highly discretized state space, you need to visit all the states (and there can be a lot of them) to obtain a good representation of the value function (or Q function).
Regarding the second question, in this [link](http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/RLtoolkit/tiles.html#Python_Versions_) you can find an implementation of tile coding (in C, C++, Lisp and Python) written by Rich Sutton and other members of his laboratory.
|
Adding to Pablo's answer -
Tile coding (as a special case of coarse coding) can be compared to simple state aggregation. A simple state aggregation is, for example, a grid. Tile coding would be a stack of grids on top of each other, each shifted a bit from the previous.
The benefits are twofold: it allows you to have **better discrimination** (more fine-grained control, less bias) **without loss of generalization** (less variance).
This is because with tile coding **you cover more states, with less features**.
A grid is similar to one-hot encoding. A 3x3 grid is equivalent to a 9-dimensional one-hot-encoding vector, and covers 10 states in total: either an object is in one of the 9 grid blocks, or it is in none of them.
[](https://i.stack.imgur.com/Lo1y7.png)
So the middle point could be represented by (0,0,0,0,1,0,0,0,0).
Now suppose you instead take 4 1x1 boxes and shift each one a little, by 0.5 box, so that each covers a 2x2 area of the grid.
[](https://i.stack.imgur.com/KKZnv.png)
Now you cover 10 states with only 4 dimensions, or 4 inputs: red box, green box, blue box, and purple box.
Now the same middle point could be represented by (1,1,1,1).
This means you can generalize better. Before, gradient descent would only affect that middle point's parameters. Now, since a point is influenced by a combination of a few features, all of these features' parameters will be affected, which also allows for faster learning (as Pablo mentions).
Coursera offers a (paid) [specialization](https://www.coursera.org/specializations/reinforcement-learning?) which has exercises you need to implement in Python. Specifically, Course 3 week 3 lets you work with tiles. They use an **updated** (compared to Pablo's answer) version of [Sutton's implementation of the code](http://incompleteideas.net/tiles/tiles3.html), which is simpler and uses Python 3. Since the code can be quite cryptic at first, here are [my comments on it](https://github.com/MaverickMeerkat/Reinforcement-Learning/blob/master/Tile%20Coding.ipynb).
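To make the idea concrete, here is a toy sketch of tile coding in Python (my own simplification for illustration, not Sutton's implementation; real tilers add an extra tile per row to cover the shifted edge, which I simply clamp here):

```
def active_tiles(x, y, n_tilings=4, n_tiles=8, low=0.0, high=1.0):
    """Return the index of the single active tile in each tiling for point (x, y)."""
    tile_width = (high - low) / n_tiles
    indices = []
    for t in range(n_tilings):
        # each successive tiling is offset by a fraction of a tile width
        offset = t * tile_width / n_tilings
        ix = min(int((x - low + offset) / tile_width), n_tiles - 1)
        iy = min(int((y - low + offset) / tile_width), n_tiles - 1)
        indices.append(t * n_tiles * n_tiles + ix * n_tiles + iy)
    return indices

# Two nearby points share most of their active tiles
print(active_tiles(0.50, 0.50))
print(active_tiles(0.52, 0.51))
```

A linear value estimate is then just the sum of one weight per active index; because two nearby points share most of their tiles, learning about one generalizes to the other.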
| 8,626
|
1,480,431
|
I need to:
1. Open a video file
2. Iterate over the frames of the file as images
3. Do some analysis in this image frame of the video
4. Draw in this image of the video
5. Create a new video with these changes
OpenCV isn't working for my webcam, but python-gst is working. Is this possible using python-gst?
Thank you!
|
2009/09/26
|
[
"https://Stackoverflow.com/questions/1480431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179372/"
] |
Do you mean OpenCV can't connect to your webcam, or that it can't read video files recorded by it?
Have you tried saving the video in another format?
OpenCV is probably the best-supported Python image processing tool.
|
Just build a C/C++ wrapper for your webcam and then use SWIG or SIP to access those functions from Python. Then use OpenCV in Python; it's the best open-source computer vision library in the wild.
If you worry about performance and you work under Linux, you could download the free versions of the Intel Performance Primitives (IPP), which OpenCV can load at runtime with zero effort. For certain algorithms you could get a 200% performance boost, plus automatic multicore support for most time-consuming functions.
| 8,627
|
8,099,925
|
I want to check what is the password I stored in the DB for the user named as 'user'.
Here is what I have done.
```
user@ubuntu:~/Documents/Django/django_bookmarks$ python manage.py shell
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from django.contrib.auth.models import User
>>> user = User.objects.get(id=1)
>>> user.username, user.password
(u'user', u'sha1$6934a$f92c73726c0bd5d4821013ad4161578a2114090f')
>>>
>>> import hashlib
>>> hexhash = hashlib.sha1("password")
>>> hexhash
<sha1 HASH object @ 0x99c18c0>
>>> hexhash.digest
<built-in method digest of _hashlib.HASH object at 0x99c18c0>
```
I remember that I used 'password' as the password for this user, but I cannot verify it.
Question> How can I find out what the password for the user is?
Thank you
|
2011/11/11
|
[
"https://Stackoverflow.com/questions/8099925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/391104/"
] |
You can check the user's password with `check_password`: <https://docs.djangoproject.com/en/1.3/topics/auth/#django.contrib.auth.models.User.check_password>
```
from django.contrib.auth.models import User
user = User.objects.get(id=1)
user.check_password('password') # Returns True or False
```
|
Django has hashed your password; hashing is a one-way function.
You can try searching for the SHA-1 in a [hash database](http://www.hash-database.net/), but it is not guaranteed to be found.
You should search for 'f92c73726c0bd5d4821013ad4161578a2114090f'. The hash function is SHA-1 and the salt used is '6934a'.
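If you just want to confirm a guess yourself, here is a minimal sketch assuming Django's old `sha1$salt$hash` scheme, where the stored hash is `sha1(salt + password)` (Python 2, matching the shell session in the question):

```
import hashlib

salt = '6934a'
stored = 'f92c73726c0bd5d4821013ad4161578a2114090f'
print hashlib.sha1(salt + 'password').hexdigest() == stored
```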
| 8,632
|
35,104,897
|
First of all a disclaimer: I am using python and anaconda and jupyter all for the first time, so it might be something basic.
I pasted the following code into a new Jupyter note from this url:
<https://github.com/t0pep0/btc-e.api.python/blob/master/btceapi.py>
After filling in my own API and secret API key, I tried to get this running:
```
getInfo()
```
But I ran into this error:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-14-c63c8cc1259c> in <module>()
96
97
---> 98 getInfo()
NameError: name 'getInfo' is not defined
```
I checked the following solutions:
* Defining the function first and then running it, this example works
fine in Jupyter.
[function is not defined error in Python](https://stackoverflow.com/questions/5986860/function-is-not-defined-error-in-python)
* Defining the class first and then running the function, this example
also works fine in Jupyter.
[Python NameError: name is not defined](https://stackoverflow.com/questions/14804084/python-nameerror-name-is-not-defined)
But since the class and function are both defined in the correct order in the script I copied, there must be something else going on.
|
2016/01/30
|
[
"https://Stackoverflow.com/questions/35104897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3022427/"
] |
`getInfo` is a method of the `api` class, so you need to instantiate an `api` object before calling it. You could try something like this.
```
myApi = api()
myApi.getInfo()
```
|
Some general comments, since Haken's answer solves your immediate problem.
Don't copy this script into a cell in the notebook like this (I believe this is what you are doing). You can either manually install it to site-packages (there doesn't appear to be a setup script for this module), or put the file in the same directory as the notebook. Then you can run
```
from btceapi import api
```
and proceed with Haken's answer (with appropriate arguments to the `__init__` method).
| 8,635
|
17,417,918
|
What's the most efficient way to get the integer part and fractional part of a python (python 3) `Decimal`?
This is what I have right now:
```
from decimal import *
>>> divmod(Decimal('1.0000000000000003')*7,Decimal(1))
(Decimal('7'), Decimal('2.1E-15'))
```
Any suggestions are welcome.
|
2013/07/02
|
[
"https://Stackoverflow.com/questions/17417918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1115577/"
] |
You can use the `Window > Reset Windows` menu item. This will reset the IDE's GUI back to the default state.
|
Use the `Help>About` menu to find the `User Directory`. Then navigate to it and delete the directory `config>Windows2Local`. Restart the IDE and you will have the default window settings. (The deleted dir will be recreated by NetBeans.)
| 8,636
|
28,029,672
|
I want to be able to create a JSON object so that I can access it like this.
```
education.schools.UNCC.graduation
```
Currently, my JSON is like this:
```
var education = {
"schools": [
"UNCC": {
"graduation": 2015,
"city": "Charlotte, NC",
"major": ["CS", "Spanish"]
},
"UNC-CH": {
"graduation": 2012,
"city": "Chapel Hill, NC"
"major": ["Sociology", "Film"]
}
],
"online": {
"website": "Udacity",
"courses": ["python", "java", "data science"]
}
};
```
When I go to Lint my JSON, I get an error message.
I know I can reformat my object to access it like this (below), but I don't want to do it this way. I want to be able to call the school name, and not use an index number.
```
education.schools[1].graduation
```
|
2015/01/19
|
[
"https://Stackoverflow.com/questions/28029672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3558010/"
] |
Objects have named keys. Arrays are a list of members.
Replace the value of `"schools"` with an object. Change `[]` to `{}`.
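For instance, the data from the question restructured that way looks like this:

```
var education = {
    "schools": {
        "UNCC": {
            "graduation": 2015,
            "city": "Charlotte, NC",
            "major": ["CS", "Spanish"]
        },
        "UNC-CH": {
            "graduation": 2012,
            "city": "Chapel Hill, NC",
            "major": ["Sociology", "Film"]
        }
    },
    "online": {
        "website": "Udacity",
        "courses": ["python", "java", "data science"]
    }
};
```

Now `education.schools.UNCC.graduation` evaluates to `2015`.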
|
Your JSON is invalid. Here it is corrected:
```
{
    "schools": [
        {
            "UNCC": {
                "graduation": "2015",
                "city": [
                    "Charlotte",
                    "NC"
                ],
                "major": [
                    "CS",
                    "Spanish"
                ]
            }
        },
        {
            "UNC-CH": {
                "graduation": "2012",
                "city": [
                    "Chapel Hill",
                    "NC"
                ],
                "major": [
                    "Sociology",
                    "Film"
                ]
            }
        }
    ],
    "online": {
        "website": "Udacity",
        "courses": [
            "python",
            "java",
            "data science"
        ]
    }
}
```
**Explanation**:
1. "city": "Chapel Hill, NC" -> this is a array with 2 values "*Chapel* *Hill*" and "*HC*", like you do with major and courses.
2. The Schools array, you need to use this sintaxe to construct a array [{
<http://adobe.github.io/Spry/samples/data_region/JSONDataSetSample.html>
| 8,637
|
1,809,874
|
I'm iterating through the fields of a form and for certain fields I want a slightly different layout, requiring altered HTML.
To do this accurately, I just need to know the widget type. Its class name or something similar. In standard python, this is easy! `field.field.widget.__class__.__name__`
Unfortunately, you're not allowed access to underscore variables in templates. **Great!**
You *can* test `field.field.widget.input_type` but this only works for text/password `<input ../>` types. I need more resolution than that.
To me, however difficult it might look, it makes most sense to do this at template level. I've outsourced the bit of code that handles HTML for fields to a separate template that gets included in the field-loop. This means it is consistent across `ModelForm`s and standard `Form`s (something that wouldn't be true if I wrote an intermediary Form class).
If you can see a universal approach that doesn't require me to edit 20-odd forms, let me know too!
|
2009/11/27
|
[
"https://Stackoverflow.com/questions/1809874",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12870/"
] |
Following the answers from Oli and rinti, I used this one, which I think is a bit simpler:
template code: `{{ field|fieldtype }}`
filter code:
```
from django import template
register = template.Library()
@register.filter('fieldtype')
def fieldtype(field):
return field.field.widget.__class__.__name__
```
|
You can make every view that manages forms inherit from a custom generic view where you load into the context the metadata that you need in the templates. The generic form view should include something like this:
```
class CustomUpdateView(UpdateView):
...
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
...
for f, value in context["form"].fields.items():
context["form"].fields[f].type = self.model._meta.get_field(f).get_internal_type()
...
return context
```
In the template you can access these custom properties through field.field:
```
{% if field.field.type == 'BooleanField' %}
<div class="custom-control custom-checkbox">
...
</div>
{% endif %}
```
By using the debugger of PyCharm or Visual Studio Code you can see all the available metadata, if you need something else besides the field type.
| 8,638
|
51,804,600
|
I am a little confused by a piece of Python code using a dict:
```
>>> S = "ababcbacadefegdehijhklij"
>>> lindex = {c: i for i, c in enumerate(S)}
>>> lindex
{'a': 8, 'c': 7, 'b': 5, 'e': 15, 'd': 14, 'g': 13, 'f': 11, 'i': 22, 'h': 19, 'k': 20, 'j': 23, 'l': 21}
```
How should I understand "{c: i for i, c in enumerate(S)}"? Could anyone give me some explanation?
|
2018/08/11
|
[
"https://Stackoverflow.com/questions/51804600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6654375/"
] |
First question
--------------
Polymorphism can be achieved in Java in two ways:
* Through *class inheritance*: `class A extends B`
* Through *interface implementation*: `class A implements C`.
In the latter case, to properly implement A's behaviour, you can use *composition*, making A delegate to some other class(es) which do the tasks specified in interface C.
Example: Let's suppose we have already some class imeplementing interface C:
```
class X implements C
{
public String getName() {...}
public int getAge() {...}
}
```
How can we create a new class implementing C with the same behaviour of X? Like this:
```
class A implements C
{
private C x=new X();
public String getName() {return x.getName();}
public int getAge() {return x.getAge();}
}
```
Second question
---------------
No, polymorphism is not method overloading and/or method overriding (in fact, overloading has nothing to do with Object Oriented Design):
* Method *overloading* consists of creating a new method with the same name as some other (maybe inherited) method in the same class but with a different signature (=parameter numbers or types). Adding new methods is OK, but that is not the aim of polymorphism.
* Method *overriding* consists of giving a new body to an inherited method, so that this new body will be executed in the current class instead of the inherited method's body. This is an advantage of polymorphism, but still not the basis of it either.
Polymorphism, in brief, is the ability of a class to be *used* as different classes/interfaces.
|
There is book answer, if one remember about all the firemans are fireman but some are drivers, chiefs etc. There you need polymorphism. There is things you can do with classes and it's a general idea in OOP as language constraints. Overriding is just what you can do with classes. Also permissions and local and/or global scopes. There is default constructor for any class. There is namespace scope, program, class etc.
*All Classes and methods are functions but not all functions are methods*
You can override class but not method. Those are static or volatile. Cos method can only return the value. So overriding the method has no sense. I hope this will turn you, if nothing, toward how it was meant to be. Inheritance is mechanism how polymorphism works.
My apologies for unintentional mistakes during too much data.
| 8,648
|
63,592,741
|
When trying to update a dictionary with a tuple, I encountered the error:
`>>> dict1.update(("stat",10))`
`ValueError: dictionary update sequence element #0 has length 4; 2 is required`
When in reality this shouldn't be happening. From the python docs,
>
> update() accepts either another dictionary object or an iterable of key/value pairs (as tuples or other iterables of length two).
>
>
>
This makes no sense, since the tuple I supplied clearly has length 2.
`>>> len(("stat",10))`
`2`
What is going on? Is this a bug that isn't resolved yet? Running Python 3.8.0.
Or was this due to the fact that my dictionary is empty? Tried this with other strings and values, same problem.
|
2020/08/26
|
[
"https://Stackoverflow.com/questions/63592741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14168419/"
] |
The documentation says you need an ***iterable*** of key/value pairs. A single tuple is not an iterable of key/value pairs; either a list or a tuple of tuples will do.
```py
dict1.update([("stat", 10)])
```
|
The documentation says "or an iterable of key/value pairs (as tuples or other iterables of length two)".
Therefore you need to pass it a tuple of tuples that each have length 2.
Try this:
```
dict1.update((("stat",10),))
```
Or you can pass multiple key/value pairs as follows:
```
dict1.update((("stat",10), ('foo', 'bar'), ('buzz', 'fizz')))
```
| 8,651
|
4,701,383
|
What I'm trying to do is write a quadratic equation solver, but when the solution should be `-1`, as in `quadratic(2, 4, 2)`, it returns `1`.
What am I doing wrong?
```
#!/usr/bin/python
import math
def quadratic(a, b, c):
#a = raw_input("What\'s your `a` value?\t")
#b = raw_input("What\'s your `b` value?\t")
#c = raw_input("What\'s your `c` value?\t")
a, b, c = float(a), float(b), float(c)
disc = (b*b)-(4*a*c)
print "Discriminant is:\n" + str(disc)
if disc >= 0:
root = math.sqrt(disc)
top1 = b + root
top2 = b - root
sol1 = top1/(2*a)
sol2 = top2/(2*a)
if sol1 != sol2:
print "Solution 1:\n" + str(sol1) + "\nSolution 2:\n" + str(sol2)
if sol1 == sol2:
print "One solution:\n" + str(sol1)
else:
print "No solution!"
```
EDIT: it returns the following...
```
>>> import mathmodules
>>> mathmodules.quadratic(2, 4, 2)
Discriminant is:
0.0
One solution:
1.0
```
|
2011/01/15
|
[
"https://Stackoverflow.com/questions/4701383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/569183/"
] |
Unless the formula has changed since I went to school (one can never be too sure), it's `(-b +- sqrt(b^2-4ac)) / 2a`; you have `b` instead of `-b` in your code.
[edit] May I suggest a refactor?
```
def quadratic(a, b, c):
discriminant = b**2 - 4*a*c
if discriminant < 0:
return []
elif discriminant == 0:
return [-b / (2*a)]
else:
root = math.sqrt(discriminant)
return [(-b + root) / (2*a), (-b - root) / (2*a)]
print quadratic(2, 3, 2) # []
print quadratic(2, 4, 2) # [-1]
print quadratic(2, 5, 2) # [-0.5, -2.0]
```
|
The solution to the quadratic is
```
x = (-b +/- sqrt(b^2 - 4ac))/2a
```
but what you have coded up is
```
x = (b +/- sqrt(b^2 - 4ac))/2a
```
So that's why you get the sign error.
| 8,653
|
70,088,798
|
I am making a Python PyQt5 CSV comparison tool, where the user can add conditions for querying the pandas DataFrame one by one before they are executed.
At the moment I have a nested list of conditions with each element containing the field, operation (==,!=,>,<), and value for comparison as strings. With just one condition I can use .query as it takes a string condition:
```
data.query('{} {} {}'.format(field,operation,value))
```
But as far as I can tell the formatting for multiple queries would use loc similar to this:
```
data.loc[(data.query('{} {} {}'.format(field[0],operation[0],value[0]))) & (data.query('{} {} {}'.format(field[1],operation[1],value[1]))) & ...]
```
Firstly I wanted to make sure my understanding of the loc function was correct (do I need a primary key at the end maybe?).
And secondly, how would I represent this multiple condition query with an unknown number of conditions set?
Thanks
|
2021/11/23
|
[
"https://Stackoverflow.com/questions/70088798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13696214/"
] |
Would this work?
```
conds = [
f'{f} {o} {v}' for f, o, v in zip(field, operation, value)
]
data.query(' and '.join(conds))
```
|
**Warning**: Not tested, more like a comment but put here for proper format:
`data.query` returns a DataFrame; you can't just do `dataframe1 & dataframe2`. You would do something like
```
data.query(' and '.join(['{} {} {}'.format(f, o, v)
                         for f, o, v in zip(fields, operations, values)]))
```
| 8,659
|
26,785,812
|
I need to do some intense numerical computations, and fortunately Python offers very simple ways to implement parallelisation.
The following code simply calculates the mean of a random sample of numbers but illustrates my problem:
```
import multiprocessing
import numpy as np
from numpy.random import random
# Define function to generate random number
def get_random(seed):
dummy = random(1000) * seed
return np.mean(dummy)
# Input data
input_data = [100,100,100,100]
pool = multiprocessing.Pool(processes=4)
result = pool.map(get_random, input_data)
print result
for i in input_data:
print get_random(i)
```
Now the output looks like this:
```
[51.003368466729405, 51.003368466729405, 51.003368466729405, 51.003368466729405]
```
for the parallelisation, which is always the same
and like this for the normal not parallelised loop:
```
50.8581749381
49.2887091049
50.83585841
49.3067281055
```
As you can see, the parallelisation just returns the same results, even though it should have calculated different means, just like the loop did. Now, sometimes I get only 3 equal numbers, with one being different from the other 3.
I suspect that some memory is allocated to all sub processes...
I would love some hints on what is going on here and what a fix would look like. :)
thanks
|
2014/11/06
|
[
"https://Stackoverflow.com/questions/26785812",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4223923/"
] |
When you use `multiprocessing`, you're talking about distinct processes. Distinct processes means distinct Python interpreters. Distinct interpreters means distinct random states. If you aren't seeding the random number generator uniquely on each process, then you're going to get the same starting random state from each process.
|
The answer was to put a new random seed into each process. Changing the function to
```
def get_random(seed):
np.random.seed()
dummy = random(1000) * seed
return np.mean(dummy)
```
gives the wanted results.
| 8,660
|
72,468,946
|
I'm migrating from `setup.py` to `pyproject.toml`. The commands to install my package appear to be the same, but I can't find what the `pyproject.toml` command for cleaning up build artifacts is. What is the equivalent to `python setup.py clean --all`?
|
2022/06/01
|
[
"https://Stackoverflow.com/questions/72468946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7059681/"
] |
The distutils command [clean](https://docs.python.org/3/distutils/apiref.html#module-distutils.command.clean) is not needed for a `pyproject.toml` based build. Modern tools invoking [PEP517](https://peps.python.org/pep-0517/)/[PEP518](https://peps.python.org/pep-0518/) hooks, such as [build](https://pypi.org/project/build/), create a temporary directory or a cache directory to store intermediate files while building, rather than littering the project directory with a `build` subdirectory.
Anyway, it was [not really an exciting command](https://github.com/python/cpython/blob/3.11/Lib/distutils/command/clean.py) in the first place and `rm -rf build` does the same job.
|
I ran into this same issue when I was migrating. What wim answered seems to be mostly true. If you do as the setuptools documentation says and use `python -m build`, then the `build` directory will not be created, but a `dist` directory will. However, if you do `pip install .`, a `build` directory will be left behind even if you are using a `pyproject.toml` file. This can cause issues if you change your package structure or rename files, as sometimes the old version in the `build` directory will be installed instead of your current changes. Personally I run `pip install . && rm -rf build`, or `pip install . && rmdir /s /q build` on Windows. This could be expanded to remove any other unwanted artifacts.
| 8,661
|
11,023,990
|
I would like to use Python to run a macro contained in MacroBook.xlsm on a worksheet in Data.csv.
Normally in excel, I have both files open and shift focus to the Data.csv file and run the macro from MacroBook. The python script downloads the Data.csv file daily, so I can't put the macro in that file.
Here's my code:
```
import win32com.client
import os
import xl
excel = win32com.client.Dispatch("Excel.Application")
macrowb = xl.Workbook('C:\MacroBook.xlsm')
wb1 = xl.Workbook('C:\Database.csv')
excel.Run("FilterLoans")
```
I get an error,
>
> pywintypes.com\_error: (-2147352567, 'Exception occurred.', (0,
> u'Microsoft Excel', u"Cannot run the macro 'FilterLoans'. The macro
> may not be available in this workbook or all macros may be disabled.",
> u'xlmain11.chm', 0, -2146827284), None)
>
>
>
The error states that FilterLoans is not available in the Database.csv file...how can I import it?
|
2012/06/13
|
[
"https://Stackoverflow.com/questions/11023990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137778/"
] |
Just return an empty enumerable in the base method.
```
public virtual IEnumerable<Uri> GetBaseAddresses()
{
return Enumerable.Empty<Uri>();
}
```
Or if you're targeting a version of the .NET framework < 3.5 return an empty List.
|
Built in arrays support IEnumerable so you can use:
```
public virtual IEnumerable<Uri> GetBaseAddresses()
{
return new Uri[0];
}
```
| 8,662
|
52,377,332
|
[This](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-using-sdk-python.html) page shows how to send an email using SES. The example works by reading the credentials from `~/.aws/credentials`, which are the root (yet "shared"??) credentials.
The documentation advises in various places against using the root credentials.
Acquiring temporary credentials
using [roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html) is mentioned as an option, yet `assume_role()` is not defined for SES client objects.
How do I send an email through SES with temporary SES-specific credentials?
**Update**
The context for my question is an application running on an EC2 instance.
|
2018/09/18
|
[
"https://Stackoverflow.com/questions/52377332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/704972/"
] |
Here you go. There were a few problems.
1) When you return a value from a function, you need to assign it to a variable so you can pass it into the next function.
2) String literals like the letter grade "F" need to be inside single or double quote marks.
```
def main():
student_name = input('Please enter your first and last name: ')
scores = askForScore()
avg_score = calc_average(scores)
letter_grade = determine_grade(avg_score)
print(letter_grade)
def askForScore():
score1 = float(input('Please enter the first test score:'))
score2 = float(input('Please enter the second test score:'))
score3 = float(input('Please enter the third test score:'))
score4 = float(input('Please enter the fourth test score:'))
score5 = float(input('Please enter the fifth test score:'))
return (score1, score2, score3, score4, score5)
def calc_average(scores):
avg_score = (scores[0] + scores[1] + scores[2] + scores[3] + scores[4]) / 5
return avg_score
def determine_grade(avg_score):
if avg_score >= 90 and avg_score <= 100:
return 'A'
elif avg_score >= 80 and avg_score <= 89:
return 'B'
elif avg_score >= 70 and avg_score <= 79:
return 'C'
elif avg_score >= 60 and avg_score <= 69:
return 'D'
else:
return 'F'
main()
```
|
Make sure you understand some Python concepts such as the scope of variables, return statements and function arguments. In your case, for instance, `score1` ... `score5` inside `askForScore` are not "readable" by `calc_average`. In fact, `askForScore` returns the values you need, and those values need to be passed to the next function as arguments like this:
```
...
score1, score2, score3, score4, score5 = askForScore()
calc_average(score1, score2, score3, score4, score5)
...
```
| 8,667
|
21,072,841
|
I am testing some Python functionality as a web server. I typed:
```
$ python -m SimpleHTTPServer 8080
```
...and set up port forwarding on my router to port 8080. I can access it via the web at <http://my.ip.adr.ess:8080/>, where my.ip.adr.ess stands for my IP address.
When I start my XAMPP server, it is accessible at <http://my.ip.adr.ess/> and no 8080 port is required.
What do I have to do so the Python server responds like that?
|
2014/01/12
|
[
"https://Stackoverflow.com/questions/21072841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/891304/"
] |
It means that XAMPP is running on port 80, which is the default for http://. You need to run SimpleHTTPServer on that port too. [More info about running SimpleHTTPServer on port 80](https://unix.stackexchange.com/questions/24598/how-can-i-start-the-python-simplehttpserver-on-port-80).
|
Specify the port as `80` (default port for HTTP protocol).
```
python -m SimpleHTTPServer 80
```
You may need superuser permission in Unix to bind port 80 (under 1024).
```
sudo python -m SimpleHTTPServer 80
```
| 8,668
|