| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those packages on my Mac. My macOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How can I solve this problem?
(Stack Overflow prompted me to add more details, but I think I have described things clearly; it is sad that Stack Overflow does not support uploading attachments.)
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
There can be another reason for the '**ResolvePackageNotFound**' error -- the package versions you require might live in an old section of the repository that is not searched by default.
The different paths to locations in the Anaconda repositories can be found at:
<https://repo.continuum.io/pkgs/>
My yml file (`NW_BI.yml`) is as follows:
```
name: NW_BI
channels:
- 'https://repo.continuum.io/pkgs/free' # Remove this line and it fails!!!
- conda-forge
- defaults
dependencies:
- python=2.7.10
- pandas=0.16.2
- pyodbc=3.0.10
```
**Create using:**
```
conda env create -f 'path to file'\NW_BI.yml
```
I wanted to recreate an old environment!
**Setup used:**
Anaconda3 2019.10
Windows 10
|
Use the `--no-builds` option of `conda env export`:
<https://github.com/conda/conda/issues/7311#issuecomment-442320274>
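If re-exporting is not possible (for example, the machine that produced the file is gone), the build strings can also be stripped from an existing file. Here is a rough stdlib sketch; the `strip_builds` helper and the `name=version=build` line layout are assumptions for illustration, not part of conda:

```python
import re

def strip_builds(yml_text: str) -> str:
    """Drop the trailing "=build" from conda entries like "- qt=5.6.2=5".

    Pip-style "pkg==ver" lines and non-dependency lines are left untouched.
    """
    out = []
    for line in yml_text.splitlines():
        m = re.match(r'^(\s*-\s*[A-Za-z0-9_.-]+=[^=\s]+)=\S+$', line)
        out.append(m.group(1) if m else line)
    return '\n'.join(out)

print(strip_builds("  - qt=5.6.2=5"))       # "  - qt=5.6.2"
print(strip_builds("    - bleach==1.5.0"))  # unchanged
```

Run it over the whole file contents and write the result back; conda can then resolve the version without pinning the platform-specific build.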
|
If you are looking at this and find it too much of a chore to change conda-style version pins (`package=ver=py.*`) to pip-style (`package==ver`), I wrote this small script that deletes the `=py.*` part from the conda-style entries.
Note: the code below assumes you have already changed `package=ver` to `package==ver`.
```
#!/bin/bash
# Usage: ./script.sh environment.yml  (edits the file in place)
COUNT=0
find_pip=0
while IFS= read -r line; do
    COUNT=$(( COUNT + 1 ))
    if echo "${line}" | grep -q -- "- pip:" ; then
        find_pip=1
        indent=$(awk -F- '{print length($1)}' <<< "$line")
        pip_indent=$(( indent + 2 ))
    fi
    line_indent=$(awk -F- '{print length($1)}' <<< "$line")
    # only rewrite lines once the "- pip:" section has been reached
    if [[ ${find_pip} -eq 1 ]] && [[ ${pip_indent} -eq ${line_indent} ]]; then
        # keep only "package==ver", dropping the trailing "=py..." part
        new_line=$(echo "${line}" | cut -d'=' -f-3)
        new_line="      $new_line"
        # BSD/macOS sed in-place syntax; on GNU sed drop the '' after -i
        sed -e "${COUNT}s/.*/${new_line}/" -i '' "$1"
    fi
done < "$1"
```
|
I had the same problem and found your question while googling for it.
The `ResolvePackageNotFound` error lists all the required packages that conda cannot resolve.
To solve the problem, move them under the `pip` section:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- ***
- another dependencies, except not found ones
- pip:
- gst-plugins-base==1.8.0
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
*** added ***
- gst-plugins-base==1.8.0
- dbus==1.10.20
- opencv3==3.2.0
- qt==5.6.2
- libxcb==1.12
- libgcc==5.2.0
- gstreamer==1.8.0
```
|
Let's assume the following command is used to create the environment.yml file: `conda env export --from-history -n envName -f environment1.yml`, where envName is the name of the environment we are interested in. Assume the content of the file is:
```
name: envName
channels:
- defaults
dependencies:
- python=3.9
- numpy
- spyder
- scipy
- opencv==4.5.5
- scikit-learn-intelex
- shapely
- imgaug
- r-uuid
- more-itertools
- sympy
- pylatex
- progressbar2
prefix: /home/User/anaconda3/envs/envName
```
Then add `- conda-forge` under `- defaults` in channels, and try to create the environment again.
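The adjusted `channels` section of the same file would then read (a sketch of just the changed part, not the full file):

```
name: envName
channels:
  - defaults
  - conda-forge
dependencies:
  ...
```

Adding `conda-forge` gives the solver a second, much larger channel to search when a pinned version is missing from `defaults`.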
|
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz posted an ideal solution by using the `--no-builds` flag with conda env export.
```
conda env export --no-builds > environment.yml
```
However, even with build numbers removed, some packages may still have different version numbers on different operating systems. I think the best approach is to create a separate env yml file for each OS.
Hope this helps.
|
For Apple ARM (Apple Silicon) you should use a different requirements file:
<https://github.com/magnusviri/stable-diffusion-old/blob/apple-silicon-mps-support/environment-mac.yaml>
|
65,822,290
|
I made two components using Python functions and I am trying to pass data between them using files, but I am unable to do so. I want to calculate the sum and then send the answer to the other component using a file. Below is the partial code (the code works without the file passing). Please assist.
```
# Define your components code as standalone python functions: ======================
def add(a: float, b: float, f: comp.OutputTextFile(float)) -> NamedTuple(
        'AddOutput',
        [
            ('out', comp.OutputTextFile(float))
        ]):
    '''Calculates sum of two arguments'''
    sum = a + b
    f.write(sum)

    from collections import namedtuple
    addOutput = namedtuple(
        'AddOutput',
        ['out'])
    return addOutput(f)  # the metrics will be uploaded to the cloud

def multiply(c: float, d: float, f: comp.InputTextFile(float)):
    '''Calculates the product'''
    product = c * d
    print(f.read())

add_op = comp.func_to_container_op(add, output_component_file='add_component.yaml')
product_op = comp.create_component_from_func(multiply,
                                             output_component_file='multiple_component.yaml')

@dsl.pipeline(
    name='Addition-pipeline',
    description='An example pipeline that performs addition calculations.'
)
def my_pipeline(a, b='7', c='4', d='1'):
    add_op = pl_comp_list[0]
    product_op = pl_comp_list[1]
    first_add_task = add_op(a, 4)
    second_add_task = product_op(c, d, first_add_task.outputs['out'])
```
|
2021/01/21
|
[
"https://Stackoverflow.com/questions/65822290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13435688/"
] |
Here is a slightly simplified version of your pipeline that I tested and which works.
It doesn't matter what class type you pass to `OutputTextFile` and `InputTextFile`; the value will be read and written as `str`. So this is what you should change:
* While writing to `OutputTextFile`: cast `sum_` from `float` to `str`
* While reading from `InputTextFile`: cast the `f.read()` value from `str` to `float`
```
import kfp
from kfp import dsl
from kfp import components as comp

def add(a: float, b: float, f: comp.OutputTextFile()):
    '''Calculates sum of two arguments'''
    sum_ = a + b
    f.write(str(sum_))  # cast to str
    return sum_

def multiply(c: float, d: float, f: comp.InputTextFile()):
    '''Calculates the product'''
    in_ = float(f.read())  # cast to float
    product = c * d * in_
    print(product)
    return product

add_op = comp.func_to_container_op(add,
                                   output_component_file='add_component.yaml')
product_op = comp.create_component_from_func(
    multiply, output_component_file='multiple_component.yaml')

@dsl.pipeline(
    name='Addition-pipeline',
    description='An example pipeline that performs addition calculations.')
def my_pipeline(a, b='7', c='4', d='1'):
    first_add_task = add_op(a, b)
    second_add_task = product_op(c, d, first_add_task.output)

if __name__ == "__main__":
    compiled_name = __file__ + ".yaml"
    kfp.compiler.Compiler().compile(my_pipeline, compiled_name)
```
|
`('out', comp.OutputTextFile(float))`
This is not really valid. The `OutputTextFile` annotation (and other similar annotations) can only be used on function parameters. The function return value is only for outputs that you want to return as values (not as files).
Since you already have `f: comp.OutputTextFile(float)`, you can just remove the function return value altogether. Then pass the `f` output to the downstream component: `product_op(c, d, first_add_task.outputs['f'])`.
|
41,006,153
|
Using Python 3.4.3 or Python 3.5.1, I'm surprised to see that:
```
from decimal import Decimal
Decimal('0') * Decimal('123.456789123456')
```
returns:
```
Decimal('0E-12')
```
The worst part is that this specific use case works with float.
Is there anything I can do to make sure the math works and 0 multiplied by anything returns 0?
|
2016/12/06
|
[
"https://Stackoverflow.com/questions/41006153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6451083/"
] |
`0E-12` actually is `0` (it's short for `0 * 10 ** -12`; since the coefficient is 0, that's still 0), `Decimal` just provides the `E-12` bit to indicate the "confidence level" of the `0`. What you've got will still behave like zero (it's falsy, additive identity, etc.), the only quirk is in how it prints.
If you need formatting to match, you can use the formatting mini-language, or you can call `.normalize()` on the result, which will turn `Decimal('0E-12')` into `Decimal('0')` (the method's purpose is to strip trailing zeroes from the result to produce a minimal canonical form that represents all equal numbers the same way).
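Both points are easy to verify in an interpreter (stdlib only):

```python
from decimal import Decimal

result = Decimal('0') * Decimal('123.456789123456')
print(result)              # 0E-12
print(result == 0)         # True -- it still compares equal to zero
print(result.normalize())  # 0
```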
|
Do the multiplication with floats and convert to decimal afterwards.
```
x = 0
y = 123.456789123456
Decimal(x*y)
```
returns (on Python 3.5.2):
```
Decimal('0')
```
|
Even though `Decimal('0E-12')` does not look the same as `Decimal('0')`, there is no difference to Python.
```
>>> Decimal('0E-12') == 0
True
```
The notation 0E-12 actually represents 0 \* 10 \*\* -12, which evaluates to 0.
|
You may quantize your result to the required precision.
```
from decimal import Decimal, ROUND_HALF_UP
result = Decimal('0') * Decimal('123.456789123456')
result.quantize(Decimal('.01'), rounding=ROUND_HALF_UP)
>>> Decimal('0.00')
```
For more info (e.g. about rounding modes, or setting up a context for all decimals) refer to <https://docs.python.org/3/library/decimal.html>
Edit: I guess your best bet would be using the `result.normalize()` method, as mentioned by ShadowRanger.
|
32,407,824
|
I am new to Python programming. I need to read the contents of a CSV file and print values based on a matching criterion. The file contains rows like this:
abc, A, xyz, W
gfk, B, abc, Y, xyz, F
I want to print the contents of the adjacent column based on the matching input string. For example, for the first row, if the string is abc it should print A, W for xyz, and "no match" for gfk. This should be executed for each row until the end of the file.
I have the following code; however, I don't know how to select the adjacent column.
```
import csv

c = ['abc', 'xyz', 'gfk']
with open('filer.csv', 'rt') as csvfile:
    my_file = csv.reader(csvfile, delimiter=',')
    for row in my_file:
        for i in c:
            if i in row:
                # print the contents of the adjacent cell
```
I would appreciate any help in completing this script.
Thank you
|
2015/09/04
|
[
"https://Stackoverflow.com/questions/32407824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298602/"
] |
Your approach made it more difficult to print adjacent values, because even if you used `enumerate` to get the indices, you would have to search the row again after finding each pattern (after `if i in row:` you wouldn't immediately know where in the row it was). Structuring the data as a dictionary makes it simpler:
```
import csv

patterns = ['abc', 'xyz', 'gfk']
with open('filer.csv') as fh:
    reader = csv.reader(fh, skipinitialspace=True)
    for line in reader:
        print '---'
        content = dict(zip(line[::2], line[1::2]))
        for pattern in patterns:
            print "{}: {}".format(pattern, content.get(pattern, 'no match'))
```
`zip(line[::2], line[1::2])` creates a list of tuples from the adjacent elements of the list, which can be turned into a dictionary, where the patterns you are looking for are keys and the corresponding letters are values.
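For example, with the first row of the sample file:

```python
line = ['abc', 'A', 'xyz', 'W']
content = dict(zip(line[::2], line[1::2]))
print(content)  # {'abc': 'A', 'xyz': 'W'}
```

A lookup that misses, such as `content.get('gfk', 'no match')`, then falls back to the default string.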
|
You can adapt this for the CSV, but it basically does what you ask for:
```
csv = [['abc', 'A', 'xyz', 'W'],
       ['gfk', 'B', 'abc', 'Y', 'xyz', 'F']]

match_string = ['abc', 'xyz', 'gfk']

for row in csv:
    for i, column_content in enumerate(row):
        if column_content in match_string:
            print row[i + 1]  # +1 for the column following column_content: 'A', 'W', etc.
```
You should read about slicing : [Explain Python's slice notation](https://stackoverflow.com/questions/509211/explain-pythons-slice-notation)
|
45,622,443
|
I started using Retrofit in my Android app to consume RESTful web services.
I managed to get it working with a simple object, but when trying with a more complex object it remains desperately empty. I enabled OkHttp debug logging, and the JSON I'm expecting is present in the response, so I think there is a problem during the creation (or the filling) of the object. So what is the best way to debug such a situation? I tried setting breakpoints (not sure where to put them), but did not find anything.
The interesting part of the code is here
```
ClientRequest clientRequest = new ClientRequest(sessionId, 1);
apiInterface = APIClient.getClient().create(APIInterface.class);
Call userCall = apiInterface.getUser(clientRequest);
userCall.enqueue(new Callback() {
    @Override
    public void onResponse(Call call, Response response) {
        if (response.isSuccessful()) {
            Client client = (Client) response.body();
            Log.d("TAG Request", call.request().toString() + "");
            Log.d("TAG Response Code ", response.code() + "");
            Log.d("TAG Response Body ", response.body() + "");
            Log.d("TAG Response Message ", response.message() + "");
            Log.d("TAG Id ", client.getClientId() + "");
            Log.d("TAG Nom de l'Entreprise", client.getCompanyName() + "");
            Log.d("TAG Nom du Contact", client.getContactName() + "");
            Log.d("TAG Numéro client", client.getCustomerNo() + "");
            Log.d("TAG Nom d'utilisateur", client.getUsername() + "");
            Log.d("TAG Ville", client.getCity() + "");
            Log.d("TAG Pays", client.getCountry() + "");
        } else {
            Log.d("error message", response.errorBody().toString());
        }
    }

    @Override
    public void onFailure(Call call, Throwable t) {
        call.cancel();
    }
});
```
The API interface (the login works)
```
public interface APIInterface {
    //@FormUrlEncoded
    @POST("/remote/json.php?login")
    Call<Token> doLogin(@Body Credential credential);

    @POST("/remote/json.php?client_get")
    Call<Client> getUser(@Body ClientRequest clientRequest);
}
```
The client class generated by <http://www.jsonschema2pojo.org/>
```
public class Client {
@SerializedName("client_id")
@Expose
public String clientId;
@SerializedName("sys_userid")
@Expose
public String sysUserid;
@SerializedName("sys_groupid")
@Expose
public String sysGroupid;
@SerializedName("sys_perm_user")
@Expose
public String sysPermUser;
@SerializedName("sys_perm_group")
@Expose
public String sysPermGroup;
@SerializedName("sys_perm_other")
@Expose
public String sysPermOther;
@SerializedName("company_name")
@Expose
public String companyName;
@SerializedName("company_id")
@Expose
public String companyId;
@SerializedName("gender")
@Expose
public String gender;
@SerializedName("contact_firstname")
@Expose
public String contactFirstname;
@SerializedName("contact_name")
@Expose
public String contactName;
@SerializedName("customer_no")
@Expose
public String customerNo;
@SerializedName("vat_id")
@Expose
public String vatId;
@SerializedName("street")
@Expose
public String street;
@SerializedName("zip")
@Expose
public String zip;
@SerializedName("city")
@Expose
public String city;
@SerializedName("state")
@Expose
public String state;
@SerializedName("country")
@Expose
public String country;
@SerializedName("telephone")
@Expose
public String telephone;
@SerializedName("mobile")
@Expose
public String mobile;
@SerializedName("fax")
@Expose
public String fax;
@SerializedName("email")
@Expose
public String email;
@SerializedName("internet")
@Expose
public String internet;
@SerializedName("icq")
@Expose
public String icq;
@SerializedName("notes")
@Expose
public String notes;
@SerializedName("bank_account_owner")
@Expose
public String bankAccountOwner;
@SerializedName("bank_account_number")
@Expose
public String bankAccountNumber;
@SerializedName("bank_code")
@Expose
public String bankCode;
@SerializedName("bank_name")
@Expose
public String bankName;
@SerializedName("bank_account_iban")
@Expose
public String bankAccountIban;
@SerializedName("bank_account_swift")
@Expose
public String bankAccountSwift;
@SerializedName("paypal_email")
@Expose
public String paypalEmail;
@SerializedName("default_mailserver")
@Expose
public String defaultMailserver;
@SerializedName("mail_servers")
@Expose
public String mailServers;
@SerializedName("limit_maildomain")
@Expose
public String limitMaildomain;
@SerializedName("limit_mailbox")
@Expose
public String limitMailbox;
@SerializedName("limit_mailalias")
@Expose
public String limitMailalias;
@SerializedName("limit_mailaliasdomain")
@Expose
public String limitMailaliasdomain;
@SerializedName("limit_mailforward")
@Expose
public String limitMailforward;
@SerializedName("limit_mailcatchall")
@Expose
public String limitMailcatchall;
@SerializedName("limit_mailrouting")
@Expose
public String limitMailrouting;
@SerializedName("limit_mailfilter")
@Expose
public String limitMailfilter;
@SerializedName("limit_fetchmail")
@Expose
public String limitFetchmail;
@SerializedName("limit_mailquota")
@Expose
public String limitMailquota;
@SerializedName("limit_spamfilter_wblist")
@Expose
public String limitSpamfilterWblist;
@SerializedName("limit_spamfilter_user")
@Expose
public String limitSpamfilterUser;
@SerializedName("limit_spamfilter_policy")
@Expose
public String limitSpamfilterPolicy;
@SerializedName("default_webserver")
@Expose
public String defaultWebserver;
@SerializedName("web_servers")
@Expose
public String webServers;
@SerializedName("limit_web_ip")
@Expose
public String limitWebIp;
@SerializedName("limit_web_domain")
@Expose
public String limitWebDomain;
@SerializedName("limit_web_quota")
@Expose
public String limitWebQuota;
@SerializedName("web_php_options")
@Expose
public String webPhpOptions;
@SerializedName("limit_cgi")
@Expose
public String limitCgi;
@SerializedName("limit_ssi")
@Expose
public String limitSsi;
@SerializedName("limit_perl")
@Expose
public String limitPerl;
@SerializedName("limit_ruby")
@Expose
public String limitRuby;
@SerializedName("limit_python")
@Expose
public String limitPython;
@SerializedName("force_suexec")
@Expose
public String forceSuexec;
@SerializedName("limit_hterror")
@Expose
public String limitHterror;
@SerializedName("limit_wildcard")
@Expose
public String limitWildcard;
@SerializedName("limit_ssl")
@Expose
public String limitSsl;
@SerializedName("limit_ssl_letsencrypt")
@Expose
public String limitSslLetsencrypt;
@SerializedName("limit_web_subdomain")
@Expose
public String limitWebSubdomain;
@SerializedName("limit_web_aliasdomain")
@Expose
public String limitWebAliasdomain;
@SerializedName("limit_ftp_user")
@Expose
public String limitFtpUser;
@SerializedName("limit_shell_user")
@Expose
public String limitShellUser;
@SerializedName("ssh_chroot")
@Expose
public String sshChroot;
@SerializedName("limit_webdav_user")
@Expose
public String limitWebdavUser;
@SerializedName("limit_backup")
@Expose
public String limitBackup;
@SerializedName("limit_directive_snippets")
@Expose
public String limitDirectiveSnippets;
@SerializedName("limit_aps")
@Expose
public String limitAps;
@SerializedName("default_dnsserver")
@Expose
public String defaultDnsserver;
@SerializedName("db_servers")
@Expose
public String dbServers;
@SerializedName("limit_dns_zone")
@Expose
public String limitDnsZone;
@SerializedName("default_slave_dnsserver")
@Expose
public String defaultSlaveDnsserver;
@SerializedName("limit_dns_slave_zone")
@Expose
public String limitDnsSlaveZone;
@SerializedName("limit_dns_record")
@Expose
public String limitDnsRecord;
@SerializedName("default_dbserver")
@Expose
public String defaultDbserver;
@SerializedName("dns_servers")
@Expose
public String dnsServers;
@SerializedName("limit_database")
@Expose
public String limitDatabase;
@SerializedName("limit_database_user")
@Expose
public String limitDatabaseUser;
@SerializedName("limit_database_quota")
@Expose
public String limitDatabaseQuota;
@SerializedName("limit_cron")
@Expose
public String limitCron;
@SerializedName("limit_cron_type")
@Expose
public String limitCronType;
@SerializedName("limit_cron_frequency")
@Expose
public String limitCronFrequency;
@SerializedName("limit_traffic_quota")
@Expose
public String limitTrafficQuota;
@SerializedName("limit_client")
@Expose
public String limitClient;
@SerializedName("limit_domainmodule")
@Expose
public String limitDomainmodule;
@SerializedName("limit_mailmailinglist")
@Expose
public String limitMailmailinglist;
@SerializedName("limit_openvz_vm")
@Expose
public String limitOpenvzVm;
@SerializedName("limit_openvz_vm_template_id")
@Expose
public String limitOpenvzVmTemplateId;
@SerializedName("parent_client_id")
@Expose
public String parentClientId;
@SerializedName("username")
@Expose
public String username;
@SerializedName("password")
@Expose
public String password;
@SerializedName("language")
@Expose
public String language;
@SerializedName("usertheme")
@Expose
public String usertheme;
@SerializedName("template_master")
@Expose
public String templateMaster;
@SerializedName("template_additional")
@Expose
public String templateAdditional;
@SerializedName("created_at")
@Expose
public String createdAt;
@SerializedName("locked")
@Expose
public String locked;
@SerializedName("canceled")
@Expose
public String canceled;
@SerializedName("can_use_api")
@Expose
public String canUseApi;
@SerializedName("tmp_data")
@Expose
public String tmpData;
@SerializedName("id_rsa")
@Expose
public String idRsa;
@SerializedName("ssh_rsa")
@Expose
public String sshRsa;
@SerializedName("customer_no_template")
@Expose
public String customerNoTemplate;
@SerializedName("customer_no_start")
@Expose
public String customerNoStart;
@SerializedName("customer_no_counter")
@Expose
public String customerNoCounter;
@SerializedName("added_date")
@Expose
public String addedDate;
@SerializedName("added_by")
@Expose
public String addedBy;
@SerializedName("default_xmppserver")
@Expose
public String defaultXmppserver;
@SerializedName("xmpp_servers")
@Expose
public String xmppServers;
@SerializedName("limit_xmpp_domain")
@Expose
public String limitXmppDomain;
@SerializedName("limit_xmpp_user")
@Expose
public String limitXmppUser;
@SerializedName("limit_xmpp_muc")
@Expose
public String limitXmppMuc;
@SerializedName("limit_xmpp_anon")
@Expose
public String limitXmppAnon;
@SerializedName("limit_xmpp_auth_options")
@Expose
public String limitXmppAuthOptions;
@SerializedName("limit_xmpp_vjud")
@Expose
public String limitXmppVjud;
@SerializedName("limit_xmpp_proxy")
@Expose
public String limitXmppProxy;
@SerializedName("limit_xmpp_status")
@Expose
public String limitXmppStatus;
@SerializedName("limit_xmpp_pastebin")
@Expose
public String limitXmppPastebin;
@SerializedName("limit_xmpp_httparchive")
@Expose
public String limitXmppHttparchive;
public Client() {
Log.d("TAG","Default Constructor");
}
public String getClientId() {
return clientId;
}
public String getSysUserid() {
return sysUserid;
}
public String getSysGroupid() {
return sysGroupid;
}
public String getSysPermUser() {
return sysPermUser;
}
public String getSysPermGroup() {
return sysPermGroup;
}
public String getSysPermOther() {
return sysPermOther;
}
public String getCompanyName() {
return companyName;
}
public String getCompanyId() {
return companyId;
}
public String getGender() {
return gender;
}
public String getContactFirstname() {
return contactFirstname;
}
public String getContactName() {
return contactName;
}
public String getCustomerNo() {
return customerNo;
}
public String getVatId() {
return vatId;
}
public String getStreet() {
return street;
}
public String getZip() {
return zip;
}
public String getCity() {
return city;
}
public String getState() {
return state;
}
public String getCountry() {
return country;
}
public String getTelephone() {
return telephone;
}
public String getMobile() {
return mobile;
}
public String getFax() {
return fax;
}
public String getEmail() {
return email;
}
public String getInternet() {
return internet;
}
public String getIcq() {
return icq;
}
public String getNotes() {
return notes;
}
public String getBankAccountOwner() {
return bankAccountOwner;
}
public String getBankAccountNumber() {
return bankAccountNumber;
}
public String getBankCode() {
return bankCode;
}
public String getBankName() {
return bankName;
}
public String getBankAccountIban() {
return bankAccountIban;
}
public String getBankAccountSwift() {
return bankAccountSwift;
}
public String getPaypalEmail() {
return paypalEmail;
}
public String getDefaultMailserver() {
return defaultMailserver;
}
public String getMailServers() {
return mailServers;
}
public String getLimitMaildomain() {
return limitMaildomain;
}
public String getLimitMailbox() {
return limitMailbox;
}
public String getLimitMailalias() {
return limitMailalias;
}
public String getLimitMailaliasdomain() {
return limitMailaliasdomain;
}
public String getLimitMailforward() {
return limitMailforward;
}
public String getLimitMailcatchall() {
return limitMailcatchall;
}
public String getLimitMailrouting() {
return limitMailrouting;
}
public String getLimitMailfilter() {
return limitMailfilter;
}
public String getLimitFetchmail() {
return limitFetchmail;
}
public String getLimitMailquota() {
return limitMailquota;
}
public String getLimitSpamfilterWblist() {
return limitSpamfilterWblist;
}
public String getLimitSpamfilterUser() {
return limitSpamfilterUser;
}
public String getLimitSpamfilterPolicy() {
return limitSpamfilterPolicy;
}
public String getDefaultWebserver() {
return defaultWebserver;
}
public String getWebServers() {
return webServers;
}
public String getLimitWebIp() {
return limitWebIp;
}
public String getLimitWebDomain() {
return limitWebDomain;
}
public String getLimitWebQuota() {
return limitWebQuota;
}
public String getWebPhpOptions() {
return webPhpOptions;
}
public String getLimitCgi() {
return limitCgi;
}
public String getLimitSsi() {
return limitSsi;
}
public String getLimitPerl() {
return limitPerl;
}
public String getLimitRuby() {
return limitRuby;
}
public String getLimitPython() {
return limitPython;
}
public String getForceSuexec() {
return forceSuexec;
}
public String getLimitHterror() {
return limitHterror;
}
public String getLimitWildcard() {
return limitWildcard;
}
public String getLimitSsl() {
return limitSsl;
}
public String getLimitSslLetsencrypt() {
return limitSslLetsencrypt;
}
public String getLimitWebSubdomain() {
return limitWebSubdomain;
}
public String getLimitWebAliasdomain() {
return limitWebAliasdomain;
}
public String getLimitFtpUser() {
return limitFtpUser;
}
public String getLimitShellUser() {
return limitShellUser;
}
public String getSshChroot() {
return sshChroot;
}
public String getLimitWebdavUser() {
return limitWebdavUser;
}
public String getLimitBackup() {
return limitBackup;
}
public String getLimitDirectiveSnippets() {
return limitDirectiveSnippets;
}
public String getLimitAps() {
return limitAps;
}
public String getDefaultDnsserver() {
return defaultDnsserver;
}
public String getDbServers() {
return dbServers;
}
public String getLimitDnsZone() {
return limitDnsZone;
}
public String getDefaultSlaveDnsserver() {
return defaultSlaveDnsserver;
}
public String getLimitDnsSlaveZone() {
return limitDnsSlaveZone;
}
public String getLimitDnsRecord() {
return limitDnsRecord;
}
public String getDefaultDbserver() {
return defaultDbserver;
}
public String getDnsServers() {
return dnsServers;
}
public String getLimitDatabase() {
return limitDatabase;
}
public String getLimitDatabaseUser() {
return limitDatabaseUser;
}
public String getLimitDatabaseQuota() {
return limitDatabaseQuota;
}
public String getLimitCron() {
return limitCron;
}
public String getLimitCronType() {
return limitCronType;
}
public String getLimitCronFrequency() {
return limitCronFrequency;
}
public String getLimitTrafficQuota() {
return limitTrafficQuota;
}
public String getLimitClient() {
return limitClient;
}
public String getLimitDomainmodule() {
return limitDomainmodule;
}
public String getLimitMailmailinglist() {
return limitMailmailinglist;
}
public String getLimitOpenvzVm() {
return limitOpenvzVm;
}
public String getLimitOpenvzVmTemplateId() {
return limitOpenvzVmTemplateId;
}
public String getParentClientId() {
return parentClientId;
}
public String getUsername() {
return username;
}
public String getPassword() {
return password;
}
public String getLanguage() {
return language;
}
public String getUsertheme() {
return usertheme;
}
public String getTemplateMaster() {
return templateMaster;
}
public String getTemplateAdditional() {
return templateAdditional;
}
public String getCreatedAt() {
return createdAt;
}
public String getLocked() {
return locked;
}
public String getCanceled() {
return canceled;
}
public String getCanUseApi() {
return canUseApi;
}
public String getTmpData() {
return tmpData;
}
public String getIdRsa() {
return idRsa;
}
public String getSshRsa() {
return sshRsa;
}
public String getCustomerNoTemplate() {
return customerNoTemplate;
}
public String getCustomerNoStart() {
return customerNoStart;
}
public String getCustomerNoCounter() {
return customerNoCounter;
}
public String getAddedDate() {
return addedDate;
}
public String getAddedBy() {
return addedBy;
}
public String getDefaultXmppserver() {
return defaultXmppserver;
}
public String getXmppServers() {
return xmppServers;
}
public String getLimitXmppDomain() {
return limitXmppDomain;
}
public String getLimitXmppUser() {
return limitXmppUser;
}
public String getLimitXmppMuc() {
return limitXmppMuc;
}
public String getLimitXmppAnon() {
return limitXmppAnon;
}
public String getLimitXmppAuthOptions() {
return limitXmppAuthOptions;
}
public String getLimitXmppVjud() {
return limitXmppVjud;
}
public String getLimitXmppProxy() {
return limitXmppProxy;
}
public String getLimitXmppStatus() {
return limitXmppStatus;
}
public String getLimitXmppPastebin() {
return limitXmppPastebin;
}
public String getLimitXmppHttparchive() {
return limitXmppHttparchive;
}
}
```
|
2017/08/10
|
[
"https://Stackoverflow.com/questions/45622443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4208537/"
] |
Try first sending data to your web service using a client like [Postman](https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop); it is a very useful tool. You can check what your web service is sending to your Android app.
|
I faced the same problem using Retrofit with Kotlin.
It was:
```
data class MtsApprovalResponseData(
    @field:Element(name = "MessageId", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var messageId: String? = null,

    @field:Element(name = "Version", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var version: String? = null
)
```
and I fixed the empty values by switching to lowercase element names:
```
data class MtsApprovalResponseData(
    @field:Element(name = "messageId", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var messageId: String? = null,

    @field:Element(name = "version", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var version: String? = null
)
```
By the way, my Retrofit bean is:
```
@Bean
fun soapRetrofit(): Retrofit {
    val logging = HttpLoggingInterceptor()
    logging.level = HttpLoggingInterceptor.Level.BODY
    val httpClient = OkHttpClient.Builder().addInterceptor(logging).build()
    return Retrofit.Builder()
        .addConverterFactory(SimpleXmlConverterFactory.create())
        .baseUrl(profile.serviceApiUrl)
        .client(httpClient)
        .build()
}
```
|
72,944,672
|
In one directory there are several folders whose names are 301, 302, ..., 600.
Each of these folders contains two subfolders named `A` and `B`. I need to copy all the image files from the `A` folder of each parent folder into the parent folder itself (e.g. copying the image files from `600/A` into `600`) and afterwards remove the `A` and `B` subfolders from each parent folder. I found a solution in [this post](https://stackoverflow.com/questions/71269014/copy-files-from-a-folder-and-paste-to-another-directory-sub-folders-in-python), but I don't know how to copy the files into the parent folders instead of a sub-folder, how to delete the sub-folders after copying, or how to do this for several folders.
```
import shutil
import os, sys

exepath = sys.argv[0]
directory = os.path.dirname(os.path.abspath(exepath)) + "\\Files\\"
credit_folder = os.path.dirname(os.path.abspath(exepath)) + "\\Credits\\"

os.chdir(credit_folder)
os.chdir(directory)

Source = credit_folder
Target = directory

files = os.listdir(Source)
folders = os.listdir(Target)

for file in files:
    SourceCredits = os.path.join(Source, file)
    for folder in folders:
        TargetFolder = os.path.join(Target, folder)
        shutil.copy2(SourceCredits, TargetFolder)

print(" \n ===> Credits Copy & Paste Successfully <=== \n ")
```
|
2022/07/11
|
[
"https://Stackoverflow.com/questions/72944672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11140344/"
] |
I recommend using [pathlib](https://docs.python.org/3/library/pathlib.html).
```
from pathlib import Path
import shutil
from tqdm import tqdm

folder_to_be_sorted = Path("/your/path/to/the/folder")
for folder_named_number_i in tqdm(list(folder_to_be_sorted.iterdir())):
    # folder_named_number_i is 301, 302, ..., 600
    A_folder = folder_named_number_i / "A"
    B_folder = folder_named_number_i / "B"
    # move files
    for image_i in A_folder.iterdir():
        shutil.move(str(image_i), folder_named_number_i)
    # remove directories
    shutil.rmtree(str(A_folder))
    shutil.rmtree(str(B_folder))
```
`os.path` is a lower-level module. I post another version here, since you are using the `os` module in your question.
```
import shutil
import os
from tqdm import tqdm

folder_to_be_sorted = "/your/path/to/the/folder"
for folder_named_number_name in tqdm(os.listdir(folder_to_be_sorted)):
    folder_named_number_i = os.path.join(folder_to_be_sorted, folder_named_number_name)
    # folder_named_number_i is 301, 302, ..., 600
    A_folder = os.path.join(folder_named_number_i, "A")
    B_folder = os.path.join(folder_named_number_i, "B")
    # move files
    for image_i_name in os.listdir(A_folder):
        image_i = os.path.join(A_folder, image_i_name)
        shutil.move(str(image_i), folder_named_number_i)
    # remove directories
    shutil.rmtree(str(A_folder))
    shutil.rmtree(str(B_folder))
```
With the code above, I suppose you want to transform
```
# /your/path/to/the/folder
# │
# └───301
# │ │
# │ └───A
# │ │ └───image_301_A_1.png
# │ │ └───image_301_A_2.png
# │ │ └───image_301_A_3.png
# │ │ └───...(other images)
# │ │
# │ └───B
# │ └───image_301_B_1.png
# │ └───image_301_B_2.png
# │ └───image_301_B_3.png
# │ └───...(other images)
# │
# └───302(like 301)
# :
# :
# └───600(like 301)
```
to:
```
# /your/path/to/the/folder
# │
# └───301
# │ │
# │ └───image_301_A_1.png
# │ └───image_301_A_2.png
# │ └───image_301_A_3.png
# │ └───...(other images in folder 301/A/)
# │
# └───302(like 301)
# :
# :
# └───600(like 301)
```
|
@hellohawii gave an excellent answer. The following code also works; you only need to change the value of *Source* when using it.
```
import shutil
import os, sys
from tqdm import tqdm

exepath = sys.argv[0]  # current path of the script
Source = os.path.dirname(os.path.abspath(exepath)) + "\\Credits\\"  # path of folders: 301, 302 ... 600
# Source = your_path_of_folders
files = os.listdir(Source)  # list of folders under Source, in your situation: [A, B]

def get_parent_dir(path=None, offset=-1):
    """get parent dir of current path"""
    result = path if path else __file__
    for i in range(abs(offset)):
        result = os.path.dirname(result)
    return result

def del_files0(dir_path):
    """delete full folder"""
    shutil.rmtree(dir_path)

for file_path in files:
    current_path = os.path.join(Source, file_path)  # current path
    if file_path == 'A':  # select the folder to copy
        file_list = os.listdir(current_path)  # files of the selected folder
        parent_path = get_parent_dir(current_path)  # parent dir path, namely the target path
        for file in tqdm(file_list):
            # copy needs the full source path, not just the bare file name
            shutil.copy(os.path.join(current_path, file), parent_path)
        del_files0(current_path)  # delete the copied folder

print(" \n ===> Credits Copy & Paste & delete Successfully <=== \n ")
```
|
67,707,605
|
I am trying to use python to place orders through the TWS API. My problem is getting the next valid order ID.
Here is what I am using:
```
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.common import TickerId
from ibapi import contract, order, common
from threading import Thread

class ib_class(EWrapper, EClient):

    def __init__(self, addr, port, client_id):
        EClient.__init__(self, self)
        self.connect(addr, port, client_id)  # Connect to TWS
        thread = Thread(target=self.run, daemon=True)  # Launch the client thread
        thread.start()

    def error(self, reqId: TickerId, errorCode: int, errorString: str):
        if reqId > -1:
            print("Error. Id: ", reqId, " Code: ", errorCode, " Msg: ", errorString)

    def nextValidId(self, orderId: int):
        self.nextValidId = orderId

ib_api = ib_class("127.0.0.1", 7496, 1)
orderID = ib_api.nextValidId(0)
```
And this gives me :
```
TypeError: 'int' object is not callable
```
|
2021/05/26
|
[
"https://Stackoverflow.com/questions/67707605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15524510/"
] |
The `nextValidId` method is a wrapper method. From the client, you need to call `reqIds` to get an Order ID.
```
ib_api.reqIds(-1)
```
The parameter of `reqIds` doesn't matter. Also, it doesn't have a return value. Instead, `reqIds` sends a message to IB, and when the response is received, the wrapper's `nextValidId` method will be called.
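That request/response round trip can be sketched without a live TWS connection. The class and helper names below (everything except `nextValidId`) are hypothetical, not part of the ibapi API: the callback stores the id and signals an `Event`, so the caller can block until the response arrives instead of overwriting the `nextValidId` method with an int.

```python
import threading

class OrderIdWrapper:
    """Illustrative stand-in for the EWrapper callback side (not real ibapi)."""

    def __init__(self):
        self.next_valid_order_id = None       # filled in by the callback
        self._id_ready = threading.Event()

    def nextValidId(self, order_id: int):
        # Called from the client thread when IB answers reqIds().
        # Store the value in a *different* attribute -- assigning to
        # self.nextValidId would shadow this method with an int,
        # which is what produced "'int' object is not callable".
        self.next_valid_order_id = order_id
        self._id_ready.set()

    def wait_for_order_id(self, timeout: float = 3.0) -> int:
        # Block the caller until the callback has fired (or time out).
        if not self._id_ready.wait(timeout):
            raise TimeoutError("no nextValidId response received")
        return self.next_valid_order_id

# Simulate the exchange: a background thread plays the role of TWS.
wrapper = OrderIdWrapper()
threading.Thread(target=lambda: wrapper.nextValidId(42)).start()
print(wrapper.wait_for_order_id())  # 42
```

Waiting on an `Event` avoids the fixed `time.sleep` delay suggested below; it returns as soon as the callback fires.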
|
Add some delay after `reqIds`
-----------------------------
In the wrapper class that receives updates from `ibapi`,
override the `nextValidId` function, since `ib_api.reqIds` responds through it.
```
from ibapi.wrapper import iswrapper

@iswrapper
def nextValidId(self, orderId: int):
    super().nextValidId(orderId)
    print("setting nextValidOrderId: %d" % orderId)
    self.nextValidOrderId = orderId
```
Then, add some delay after calling `reqIds`, as the TWS API may take some time to respond, and access the value via `ib_api.nextValidOrderId`:
```
ib_api.reqIds(ib_api.nextValidOrderId)
time.sleep(3)
print("Next Order ID: ", ib_api.nextValidOrderId)
```
Note: `ib_api` holds the object of the above wrapper class.
|
68,164,039
|
I'm a Python beginner and I'm trying to solve a cubic equation with two independent variables and one dependent variable. Here is the equation, which I am trying to solve for v:
3pv^3 − (p+8t)v^2 + 9v − 3 = 0
If I set p and t to individual values, I can solve the equation with my current code. But what I would like to do is set a single t and solve for a range of p values. I have tried making p an array, but when I run my code, the only return I get is "[]". I believe this is an issue with the way I am trying to print the solutions, i.e. I'm not asking the program to print the solutions as an array, but I'm not sure how to do this.
Here's my code thus far:
```
# cubic equation for solution to reduced volume
## define variables ##
# gas: specify
tc = 300
t1 = 315
t = t1/tc
from sympy.solvers import solve
from sympy import Symbol
v = Symbol('v')
import numpy as np
p = np.array(range(10))
print(solve(3*p*(v**3)-(v**2)*(p+8*t)+9*v-3,v))
```
Any hints on how to get the solutions of v for various p (at constant t) printed out properly?
|
2021/06/28
|
[
"https://Stackoverflow.com/questions/68164039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16334402/"
] |
You can simply use a for loop, I think. This is what seems to work:
```
for p in range(10):
    solution = solve(3*p*(v**3)-(v**2)*(p+8*t)+9*v-3, v)
    print(solution)
```
Also, note that range(10) goes from 0 to 9 (inclusive)
|
Same idea as the previous answer, but I think it's generally good practice to keep imports at the top and to put the equation and the solution on separate lines, etc. It makes it a bit easier to read.
```
from sympy.solvers import solve
from sympy import Symbol
import numpy as np

tc = 300
t1 = 315
t = t1/tc

v = Symbol('v')  # Variable we want to solve
p = np.array(range(10))  # Range of p's to test

for p_i in p:  # Loop through each p, call them p_i
    print(f"p = {p_i}")
    equation = 3*p_i*(v**3)-(v**2)*(p_i+8*t)+9*v-3
    solutions = solve(equation, v)
    print(f"v = {solutions}\n")
```
|
54,853,332
|
I tried using `pip install sendgrid`, but got this error:
>
> Collecting sendgrid
> Using cached <https://files.pythonhosted.org/packages/24/21/9bea4c51f949497cdce11f46fd58f1a77c6fcccd926cc1bb4e14be39a5c0/sendgrid-5.6.0-py2.py3-none-any.whl>
> Requirement already satisfied: python-http-client>=3.0 in /home/avin/.local/lib/python2.7/site-packages (from sendgrid) (3.1.0)
> Installing collected packages: sendgrid
> Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/sendgrid-5.6.0.dist-info'
> Consider using the `--user` option or check the permissions.
>
>
>
I used `--user` as suggested and it ran OK:
>
> Collecting sendgrid
> Using cached <https://files.pythonhosted.org/packages/24/21/9bea4c51f949497cdce11f46fd58f1a77c6fcccd926cc1bb4e14be39a5c0/sendgrid-5.6.0-py2.py3-none-any.whl>
> Requirement already satisfied: python-http-client>=3.0 in /home/avin/.local/lib/python2.7/site-packages (from sendgrid) (3.1.0)
> Installing collected packages: sendgrid
> Successfully installed sendgrid-5.6.0
>
>
>
However, now, when running IPython, I can't `import sendgrid`...
>
> ImportError: No module named sendgrid
>
>
>
**pip -V = pip 19.0.3**
|
2019/02/24
|
[
"https://Stackoverflow.com/questions/54853332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3512538/"
] |
This is a very useful command: `pip install --ignore-installed <package>`.
It will make your life easy :)
|
Solved.
It required another package that I had missed: `pip install python-HTTP-Client`.
After that I no longer needed `--user` and the imports worked fine.
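A quick sanity check when a `--user` install refuses to import is to confirm that the interpreter you are running actually has the user site-packages directory on `sys.path` (standard library only; no package names assumed):

```python
import site
import sys

# Directory where "pip install --user" places packages for this interpreter
user_site = site.getusersitepackages()
print("user site-packages:", user_site)

# If this prints False, user-installed packages will not be importable
print("on sys.path:", user_site in sys.path)
```

If the directory is missing from `sys.path`, the `pip` you ran likely belongs to a different interpreter than the one IPython is using.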
|
18,564,642
|
So I have been trying to install SimpleCV for some time now. I was finally able to install pygame, but now I have run into a new error. I have used pip, easy\_install, and cloned the SimpleCV GitHub repository to try to install SimpleCV, but I get this error from all of them:
```
ImportError: No module named scipy.ndimage
```
If it is helpful, this is the whole error message:
```
Traceback (most recent call last):
  File "/usr/local/bin/simplecv", line 8, in <module>
    load_entry_point('SimpleCV==1.3', 'console_scripts', 'simplecv')()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 318, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2221, in load_entry_point
    return ep.load()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1954, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/Library/Python/2.7/site-packages/SimpleCV/__init__.py", line 3, in <module>
    from SimpleCV.base import *
  File "/Library/Python/2.7/site-packages/SimpleCV/base.py", line 22, in <module>
    import scipy.ndimage as ndimage
ImportError: No module named scipy.ndimage
```
I am sorry if there is a simple solution to this, I have been trying and searching for solutions for well over an hour with no luck.
|
2013/09/02
|
[
"https://Stackoverflow.com/questions/18564642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2364997/"
] |
From my understanding, what you are trying to do is show a custom dialog or alert view, and only on its return value should the code execute further.
So let's say you display a `UIAlertView` with a `UITextView` inside for adding more information; you should then move your `sendEmail` method into the callback of the alert view's button. Similarly, if you go for some custom dialog, you should write a callback from your custom dialog to the main window, and in that callback write the code for sending the email.
I think a callback mechanism like this is the only solution for what you are after, and if you want everything written in the same code block, you can make use of [blocks in objective-c](https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithBlocks/WorkingwithBlocks.html)
|
The flow needs to be designed so that when the email is prompted, everything has been set up and a window with a button has been created; at that point your code has finished and is not doing anything. Then, when the button is pressed, the next function/method is called that uses the data that was prepared beforehand.
|
18,564,642
|
So I have been trying to install SimpleCV for some time now. I was finally able to install pygame, but now I have run into a new error. I have used pip, easy\_install, and cloned the SimpleCV GitHub repository to try to install SimpleCV, but I get this error from all of them:
```
ImportError: No module named scipy.ndimage
```
If it is helpful, this is the whole error message:
```
Traceback (most recent call last):
  File "/usr/local/bin/simplecv", line 8, in <module>
    load_entry_point('SimpleCV==1.3', 'console_scripts', 'simplecv')()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 318, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2221, in load_entry_point
    return ep.load()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1954, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/Library/Python/2.7/site-packages/SimpleCV/__init__.py", line 3, in <module>
    from SimpleCV.base import *
  File "/Library/Python/2.7/site-packages/SimpleCV/base.py", line 22, in <module>
    import scipy.ndimage as ndimage
ImportError: No module named scipy.ndimage
```
I am sorry if there is a simple solution to this, I have been trying and searching for solutions for well over an hour with no luck.
|
2013/09/02
|
[
"https://Stackoverflow.com/questions/18564642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2364997/"
] |
From my understanding, what you are trying to do is show a custom dialog or alert view, and only on its return value should the code execute further.
So let's say you display a `UIAlertView` with a `UITextView` inside for adding more information; you should then move your `sendEmail` method into the callback of the alert view's button. Similarly, if you go for some custom dialog, you should write a callback from your custom dialog to the main window, and in that callback write the code for sending the email.
I think a callback mechanism like this is the only solution for what you are after, and if you want everything written in the same code block, you can make use of [blocks in objective-c](https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithBlocks/WorkingwithBlocks.html)
|
I use MB Progress HUD:
<https://github.com/jdg/MBProgressHUD>
I guess it is what you are looking for. It will show a loader as an overlay on top of the view it is added to.
You can use it like this:
In your interface, add:
```
MBProgressHUD *HUD;
```
Then call this whenever you need it:
```
- (void)showHUD {
    HUD = [[MBProgressHUD alloc] initWithView:self.view];
    HUD.labelText = @"Loading";
    HUD.detailsLabelText = @"Please Wait ...";
    HUD.mode = MBProgressHUDModeIndeterminate;
    [self.tableView addSubview:HUD];
    [HUD show:YES];
}
```
And finally to hide it
```
[MBProgressHUD hideHUDForView:self.tableView animated:YES];
```
|
7,561,640
|
Executive summary: a Python module is linked against a different version of `libstdc++.dylib` than the Python executable. The result is that calls to `iostream` from the module crash.
Backstory
---------
I'm creating a Python module using SWIG on an older computer (running 10.5.8). For various reasons, I am using GCC 4.5 (installed via MacPorts) to do this, using Python 2.7 (installed via MacPorts, compiled using the system-default GCC 4.0.1).
Observed Behavior
-----------------
To make a long story short: calling `str( myObject )` in Python causes the C++ code in turn to call `std::operator<< <std::char_traits<char> >`. This generates the following error:
```
Python(487) malloc: *** error for object 0x69548c: Non-aligned pointer being freed
*** set a breakpoint in malloc_error_break to debug
```
Setting a breakpoint and calling `backtrace` when it fails gives:
```
#0 0x9734de68 in malloc_error_break ()
#1 0x97348ad0 in szone_error ()
#2 0x97e6fdfc in std::string::_Rep::_M_destroy ()
#3 0x97e71388 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string ()
#4 0x97e6b748 in std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::overflow ()
#5 0x97e6e7a0 in std::basic_streambuf<char, std::char_traits<char> >::xsputn ()
#6 0x00641638 in std::__ostream_insert<char, std::char_traits<char> > ()
#7 0x006418d0 in std::operator<< <std::char_traits<char> > ()
#8 0x01083058 in meshLib::operator<< <tranSupport::Dimension<(unsigned short)1> > (os=@0xbfffc628, c=@0x5a3c50) at /Users/sethrj/_code/pytrt/meshlib/oned/Cell.cpp:21
#9 0x01008b14 in meshLib_Cell_Sl_tranSupport_Dimension_Sl_1u_Sg__Sg____str__ (self=0x5a3c50) at /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/mesh_onedPYTHON_wrap.cxx:4439
#10 0x0101d150 in _wrap_Cell_T___str__ (args=0x17eb470) at /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/mesh_onedPYTHON_wrap.cxx:8341
#11 0x002f2350 in PyEval_EvalFrameEx ()
#12 0x002f4bb4 in PyEval_EvalCodeEx ()
[snip]
```
Suspected issue
---------------
I believe the issue to be that my code links against a new version of libstdc++:
```
/opt/local/lib/gcc45/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.14.0)
```
whereas the Python binary has a very indirect dependence on the system `libstdc++`, which loads first (output from `info shared` in gdb):
```
1 dyld - 0x8fe00000 dyld Y Y /usr/lib/dyld at 0x8fe00000 (offset 0x0) with prefix "__dyld_"
2 Python - 0x1000 exec Y Y /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python (offset 0x0)
(objfile is) /opt/local/bin/python
3 Python F 0x219000 dyld Y Y /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Python at 0x219000 (offset 0x219000)
4 libSystem.B.dylib - 0x9723d000 dyld Y Y /usr/lib/libSystem.B.dylib at 0x9723d000 (offset -0x68dc3000)
(commpage objfile is) /usr/lib/libSystem.B.dylib[LC_SEGMENT.__DATA.__commpage]
5 CoreFoundation F 0x970b3000 dyld Y Y /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation at 0x970b3000 (offset -0x68f4d000)
6 libgcc_s.1.dylib - 0x923e6000 dyld Y Y /usr/lib/libgcc_s.1.dylib at 0x923e6000 (offset -0x6dc1a000)
7 libmathCommon.A.dylib - 0x94af5000 dyld Y Y /usr/lib/system/libmathCommon.A.dylib at 0x94af5000 (offset -0x6b50b000)
8 libicucore.A.dylib - 0x97cf4000 dyld Y Y /usr/lib/libicucore.A.dylib at 0x97cf4000 (offset -0x6830c000)
9 libobjc.A.dylib - 0x926f0000 dyld Y Y /usr/lib/libobjc.A.dylib at 0x926f0000 (offset -0x6d910000)
(commpage objfile is) /usr/lib/libobjc.A.dylib[LC_SEGMENT.__DATA.__commpage]
10 libauto.dylib - 0x95eac000 dyld Y Y /usr/lib/libauto.dylib at 0x95eac000 (offset -0x6a154000)
11 libstdc++.6.0.4.dylib - 0x97e3d000 dyld Y Y /usr/lib/libstdc++.6.0.4.dylib at 0x97e3d000 (offset -0x681c3000)
12 _mesh_oned.so - 0x1000000 dyld Y Y /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/_mesh_oned.so at 0x1000000 (offset 0x1000000)
13 libhdf5.7.dylib - 0x122c000 dyld Y Y /opt/local/lib/libhdf5.7.dylib at 0x122c000 (offset 0x122c000)
14 libz.1.2.5.dylib - 0x133000 dyld Y Y /opt/local/lib/libz.1.2.5.dylib at 0x133000 (offset 0x133000)
15 libstdc++.6.dylib - 0x600000 dyld Y Y /opt/local/lib/gcc45/libstdc++.6.dylib at 0x600000 (offset 0x600000)
[snip]
```
Note that the `malloc` error occurs at a memory address inside the system `libstdc++`, not the one the shared library is linked against.
Attempted resolutions
---------------------
I tried to force MacPorts to build Python using GCC 4.5 rather than the Apple compiler, but the install phase fails because it needs to create a Mac "Framework", which vanilla GCC apparently doesn't do.
Even with the `-static-libstdc++` compiler flag, `__ostream_insert` calls the `std::basic_streambuf` from the system-loaded shared library.
I tried modifying `DYLD_LIBRARY_PATH` by prepending `/opt/local/lib/gcc45/`, but to no avail.
What can I do to get this to work? I'm at my wit's end.
More information
----------------
This problem seems to be [common to mac os x](http://www.google.com/search?client=safari&rls=en&q=xsputn+__ostream_insert+malloc_error_break). Notice how, in all of the debug outputs shown, the address jumps between the calls to `std::__ostream_insert` and `std::basic_streambuf::xsputn`: execution leaves the new GCC 4.5 code and jumps into the older shared-library code in `/usr/lib`. Now, to find a workaround...
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7561640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/118160/"
] |
Solved it. I discovered that this problem is not too uncommon when mixing GCC versions on the Mac. After reading [this solution for mpich](http://www.mail-archive.com/libmesh-users@lists.sourceforge.net/msg02756.html) and checking the [mpich source code](http://anonscm.debian.org/gitweb/?p=debian-science/packages/mpich2.git;a=blob_plain;f=configure;hb=HEAD), I found that the solution is to add the following flag to gcc on Mac systems:
```
-flat_namespace
```
I am so happy. I wish this hadn't taken me a week to figure out. :)
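For the record, the flag belongs on the link line when building the extension module. A sketch of a hypothetical link command (file names taken from the backtrace above for illustration, not from the actual build; `-undefined suppress` is the companion flag classically paired with `-flat_namespace` on OS X):

```shell
# Illustrative only: link the SWIG wrapper as a flat-namespace bundle
g++ -bundle -flat_namespace -undefined suppress \
    mesh_onedPYTHON_wrap.o -o _mesh_oned.so
```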
|
Run Python in GDB and set a breakpoint on `malloc_error_break`. That will show you what's being freed that was never allocated. I doubt that this is an ABI incompatibility between the versions of libstdc++.
|
70,190,565
|
Kinda long code by complete beginner ahead, please help out
I have a database with the following values:
| Sl.No | trips | sales | price |
| --- | --- | --- | --- |
| 1 | 5 | 20 | 220 |
| 2 | 8 | 30 | 330 |
| 3 | 9 | 45 | 440 |
| 4 | 3 | 38 | 880 |
I am trying to use mysql-connector and python to get the sum of the columns trips, sales and price as variables and use it to do some calculations in the python script.
This is what I have so far:
```
def sum_fun():
try:
con = mysql.connector.connect(host='localhost',
database='twitterdb', user='root', password='mypasword', charset='utf8')
if con.is_connected():
cursor = con.cursor(buffered=True)
def sumTrips():
cursor.execute("SELECT SUM(trips) FROM table_name")
sum1=cursor.fetchall()[0][0]
return int(sum1)
def sumSales():
cursor.execute("SELECT SUM(sales) FROM table_name")
sum2=cursor.fetchall()[0][0]
return int(sum2)
def sumPrice():
cursor.execute("SELECT SUM(price) FROM table_name")
sum3=cursor.fetchall()[0][0]
return int(sum3)
except Error as e:
print(e)
cursor.close()
con.close()
```
I would like to receive the three sums as three variables `sum_trips`, `sum_sales` and `sum_price` and assign a point system to them such that:
```
trip_points=20*sum_trips
sales_points=30*sum_sales
price_points=40*sum_price
```
And then take the three variables `trip_points`, `sales_points` and `price_points` and insert them into another table in the same database named `Points`, with the column names the same as the variable names.
**I have been trying so hard to find an answer to this for so long. Any help or guidance would be much appreciated. I am a total beginner to most of this stuff so if there's a better way to achieve what I am looking for please do let me know. Thanks**
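For what it's worth, the three totals can come back in one round trip, and the point arithmetic fits in a small helper. A sketch (the single-query line and `table_name` are assumptions mirroring the code above, so the database part is left as comments):

```python
def points_from_sums(sum_trips, sum_sales, sum_price):
    # Point system from the description: 20/30/40 points per unit
    return 20 * sum_trips, 30 * sum_sales, 40 * sum_price

# Hypothetical single query instead of three separate ones:
#   cursor.execute("SELECT SUM(trips), SUM(sales), SUM(price) FROM table_name")
#   sum_trips, sum_sales, sum_price = cursor.fetchone()

# With the sample table above: trips = 25, sales = 133, price = 1870
print(points_from_sums(25, 133, 1870))  # (500, 3990, 74800)
```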
|
2021/12/01
|
[
"https://Stackoverflow.com/questions/70190565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17184842/"
] |
It’s *possible*, but this doesn't look like great design. What happens when a new stage is added? I suspect your issues will disappear if you make a separate "Stage" table with two columns: Stage\_Number and Stage\_Value. Then finding the last filled in stage is a simple MAX query (and the rejection comes from adding one more to that value).
When you hard-wire the stages, you are left with a rather clumsy construct like
```
CASE WHEN stage1 IS NULL THEN 'stage1'
     WHEN stage2 IS NULL THEN 'stage2'
     -- etc.
END
```
which, as I say, is possible, but inelegant and inflexible.
|
You can convert the columns to a JSON structure, then unnest the keys, sort them and pick the first one:
```
select t.*,
(select x.col
from jsonb_each_text(jsonb_strip_nulls(to_jsonb(t) - 'id' - 'status')) as x(col, val)
order by x.col desc
limit 1) as final_stage
from the_table t
```
Note this won't work correctly if you have more than 9 columns: sorting by name puts `stage10` before `stage2`, so with the above descending sort `stage2` would be returned. In that case, convert the name to a proper number that sorts correctly:
```
order by regexp_replace(x.col, '[^0-9]', '', 'g')::int desc
```
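The lexicographic pitfall is easy to reproduce outside SQL; the numeric sort key below mirrors the `regexp_replace(...)::int` fix (the stage names are illustrative):

```python
import re

stages = ["stage1", "stage10", "stage2"]

# Plain string sort: '1' < '2', so stage10 lands before stage2
print(sorted(stages))  # ['stage1', 'stage10', 'stage2']

# Sort by the embedded number instead, like regexp_replace(...)::int
print(sorted(stages, key=lambda s: int(re.sub(r"\D", "", s))))
# ['stage1', 'stage2', 'stage10']
```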
|
24,175,446
|
I'd like to assign each pixel of a mat `matA` to some value according to values of `matB`, my code is a nested for-loop:
```
clock_t begint=clock();
for(size_t i=0; i<depthImg.rows; i++){
for(size_t j=0; j<depthImg.cols; j++){
datatype px=depthImg.at<datatype>(i, j);
if(px==0)
depthImg.at<datatype>(i, j)=lastDepthImg.at<datatype>(i, j);
}
}
cout<<"~~~~~~~~time: "<<clock()-begint<<endl;
```
and it costs about 40~70ms for a mat of size 640\*480.
I could do this easily in python numpy using fancy indexing:
```
In [18]: b=np.vstack((np.ones(3), np.arange(3)))
In [19]: b
Out[19]:
array([[ 1., 1., 1.],
[ 0., 1., 2.]])
In [22]: a=np.vstack((np.arange(3), np.zeros(3)))
In [23]: a=np.tile(a, (320, 160))
In [24]: a.shape
Out[24]: (640, 480)
In [25]: b=np.tile(b, (320, 160))
In [26]: %timeit a[a==0]=b[a==0]
100 loops, best of 3: 2.81 ms per loop
```
and this is much faster than my hand writing for-loop.
So is there such operation in opencv c++ api?
|
2014/06/12
|
[
"https://Stackoverflow.com/questions/24175446",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1150712/"
] |
I am unable to replicate your timing results on my machine: your C++ code runs in under 1 ms here. However, whenever you have slow iteration, `at<>()` should be immediately suspect. OpenCV has a [tutorial on iterating through images](http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#howtoscanimagesopencv), which I recommend.
However, for the operation you describe, there is a better way. `Mat::copyTo()` allows masked operations:
```
lastDepthImg.copyTo(depthImg, depthImg == 0);
```
This is both faster (about 2x as fast) and far more readable than your nested-loop solution. In addition, it may benefit from hardware optimizations like SSE.
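For reference, this masked copy has the same shape as the NumPy idiom from the question; a small sketch with toy arrays (not OpenCV) showing the equivalent `np.copyto`:

```python
import numpy as np

# Toy stand-ins for depthImg / lastDepthImg
depth = np.array([[0, 5], [7, 0]], dtype=np.int32)
last = np.array([[1, 2], [3, 4]], dtype=np.int32)

# Same effect as lastDepthImg.copyTo(depthImg, depthImg == 0):
# copy from `last` only where `depth` is zero
np.copyto(depth, last, where=(depth == 0))
print(depth)  # [[1 5]
              #  [7 4]]
```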
|
In your C++ code, every pixel access makes a function call and passes two indices that get converted into a flat index, doing something like `i*depthImageCols + j`.
My C++ skills are mostly lacking, but using [this](http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-begin) as a template, I guess you could try something like the following, which should get rid of most of that overhead:
```
MatIterator_<datatype> it1 = depthImg.begin<datatype>(),
it1_end = depthImg.end<datatype>();
MatConstIterator_<datatype> it2 = lastDepthImg.begin<datatype>();
for(; it1 != it1_end; ++it1, ++it2) {
if (*it1 == 0) {
*it1 = *it2;
}
}
```
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
The underlying value is the same anyway. How you display it is a matter of **formatting**. Formatting in SQL is usually unnecessary. Formatting should be done on the client that receives the data from the database system.
Don't change anything in SQL. Simply format your grid to display the required number of digits.
|
Convert the number on the code side. For example:
```
string.Format("{0:0.##}", 256.583); // "256.58"
string.Format("{0:0.##}", 256.586); // "256.59"
string.Format("{0:0.##}", 256.58); // "256.58"
string.Format("{0:0.##}", 256.5); // "256.5"
string.Format("{0:0.##}", 256.0); // "256"
//===============================
string.Format("{0:0.00}", 256.583); // "256.58"
string.Format("{0:0.00}", 256.586); // "256.59"
string.Format("{0:0.00}", 256.58); // "256.58"
string.Format("{0:0.00}", 256.5); // "256.50"
string.Format("{0:0.00}", 256.0); // "256.00"
//===============================
string.Format("{0:00.000}", 1.2345); // "01.235"
string.Format("{0:000.000}", 12.345); // "012.345"
string.Format("{0:0000.000}", 123.456); // "0123.456"
```
In your case:
```
<asp:TemplateField HeaderText="VatAmount">
    <ItemTemplate>
        <%# string.Format("{0:0.00}", Eval("VatAmount")) %>
    </ItemTemplate>
</asp:TemplateField>
```
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
You can [convert](http://msdn.microsoft.com/en-us/library/ms187928.aspx) to decimal type with 2 decimal points.
```
Select Vt.Id, Vt.Description,
       convert(decimal(20,2), abs(Vt.Rate)) as VRate,
       convert(decimal(20,2), Sum((itemprice*Qty) * (abs(Vt.Rate)/100))) as VatAmount,
       convert(decimal(20,2), Sum(itemprice*Qty)) as NetAmount
from BillItem1 as B1
left join bill b on b.orderid = b1.orderid
left join ItemDescription ItD on ItD.Id = B1.itemId
left join VatType Vt on Vt.Id = ItD.TaxId
where B1.IsActive = 1 and B1.IsDelete = 0
  and b.date between '11/09/2013 10:43:31 AM' and '11/09/2013 10:43:31 AM'
group by Vt.Id, Vt.Rate, Vt.Description
order by Sum((b1.ItemPrice*Qty) - b1.NetAmount) desc
```
|
You can use the `Math.Round` function to keep only the required number of digits after the decimal point.
```
double myValue = 23.8000000000000;
double newValue=Math.Round(myValue, 2);
```
Result : newValue = 23.8
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
The underlying value is the same anyway. How you display it is a matter of **formatting**. Formatting in SQL is usually unnecessary. Formatting should be done on the client that receives the data from the database system.
Don't change anything in SQL. Simply format your grid to display the required number of digits.
|
You can use the `Math.Round` function to keep only the required number of digits after the decimal point.
```
double myValue = 23.8000000000000;
double newValue=Math.Round(myValue, 2);
```
Result : newValue = 23.8
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
Convert the number on the code side. For example:
```
string.Format("{0:0.##}", 256.583); // "256.58"
string.Format("{0:0.##}", 256.586); // "256.59"
string.Format("{0:0.##}", 256.58); // "256.58"
string.Format("{0:0.##}", 256.5); // "256.5"
string.Format("{0:0.##}", 256.0); // "256"
//===============================
string.Format("{0:0.00}", 256.583); // "256.58"
string.Format("{0:0.00}", 256.586); // "256.59"
string.Format("{0:0.00}", 256.58); // "256.58"
string.Format("{0:0.00}", 256.5); // "256.50"
string.Format("{0:0.00}", 256.0); // "256.00"
//===============================
string.Format("{0:00.000}", 1.2345); // "01.235"
string.Format("{0:000.000}", 12.345); // "012.345"
string.Format("{0:0000.000}", 123.456); // "0123.456"
```
In your case:
```
<asp:TemplateField HeaderText="VatAmount">
    <ItemTemplate>
        <%# string.Format("{0:0.00}", Eval("VatAmount")) %>
    </ItemTemplate>
</asp:TemplateField>
```
|
You can use the `Math.Round` function to keep only the required number of digits after the decimal point.
```
double myValue = 23.8000000000000;
double newValue=Math.Round(myValue, 2);
```
Result : newValue = 23.8
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
You can [convert](http://msdn.microsoft.com/en-us/library/ms187928.aspx) to decimal type with 2 decimal points.
```
Select Vt.Id, Vt.Description,
       convert(decimal(20,2), abs(Vt.Rate)) as VRate,
       convert(decimal(20,2), Sum((itemprice*Qty) * (abs(Vt.Rate)/100))) as VatAmount,
       convert(decimal(20,2), Sum(itemprice*Qty)) as NetAmount
from BillItem1 as B1
left join bill b on b.orderid = b1.orderid
left join ItemDescription ItD on ItD.Id = B1.itemId
left join VatType Vt on Vt.Id = ItD.TaxId
where B1.IsActive = 1 and B1.IsDelete = 0
  and b.date between '11/09/2013 10:43:31 AM' and '11/09/2013 10:43:31 AM'
group by Vt.Id, Vt.Rate, Vt.Description
order by Sum((b1.ItemPrice*Qty) - b1.NetAmount) desc
```
|
Convert the number on the code side. For example:
```
string.Format("{0:0.##}", 256.583); // "256.58"
string.Format("{0:0.##}", 256.586); // "256.59"
string.Format("{0:0.##}", 256.58); // "256.58"
string.Format("{0:0.##}", 256.5); // "256.5"
string.Format("{0:0.##}", 256.0); // "256"
//===============================
string.Format("{0:0.00}", 256.583); // "256.58"
string.Format("{0:0.00}", 256.586); // "256.59"
string.Format("{0:0.00}", 256.58); // "256.58"
string.Format("{0:0.00}", 256.5); // "256.50"
string.Format("{0:0.00}", 256.0); // "256.00"
//===============================
string.Format("{0:00.000}", 1.2345); // "01.235"
string.Format("{0:000.000}", 12.345); // "012.345"
string.Format("{0:0000.000}", 123.456); // "0123.456"
```
In your case:
```
<asp:TemplateField HeaderText="VatAmount">
    <ItemTemplate>
        <%# string.Format("{0:0.00}", Eval("VatAmount")) %>
    </ItemTemplate>
</asp:TemplateField>
```
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
The underlying value is the same anyway. How you display it is a matter of **formatting**. Formatting in SQL is usually unnecessary. Formatting should be done on the client that receives the data from the database system.
Don't change anything in SQL. Simply format your grid to display the required number of digits.
|
You can format the displayed data using the following code. This changes only the display format (the value is shown rounded to two decimals); the underlying data is unchanged.
```
dataGridView1.DataSource=sometable; //bind to datasource
DataGridViewCellStyle style = new DataGridViewCellStyle();
style.Format = "N2";
this.dataGridView1.Columns["Price"].DefaultCellStyle = style;
```
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
You can [convert](http://msdn.microsoft.com/en-us/library/ms187928.aspx) to decimal type with 2 decimal points.
```
Select Vt.Id, Vt.Description,
       convert(decimal(20,2), abs(Vt.Rate)) as VRate,
       convert(decimal(20,2), Sum((itemprice*Qty) * (abs(Vt.Rate)/100))) as VatAmount,
       convert(decimal(20,2), Sum(itemprice*Qty)) as NetAmount
from BillItem1 as B1
left join bill b on b.orderid = b1.orderid
left join ItemDescription ItD on ItD.Id = B1.itemId
left join VatType Vt on Vt.Id = ItD.TaxId
where B1.IsActive = 1 and B1.IsDelete = 0
  and b.date between '11/09/2013 10:43:31 AM' and '11/09/2013 10:43:31 AM'
group by Vt.Id, Vt.Rate, Vt.Description
order by Sum((b1.ItemPrice*Qty) - b1.NetAmount) desc
```
|
Wrap your VatAmount expression in `round(..., 2)`: `round(Sum(((itemprice*Qty)*(abs(Vt.Rate)/100))),2)`
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
You can format the displayed data using the following code. This changes only the display format (the value is shown rounded to two decimals); the underlying data is unchanged.
```
dataGridView1.DataSource=sometable; //bind to datasource
DataGridViewCellStyle style = new DataGridViewCellStyle();
style.Format = "N2";
this.dataGridView1.Columns["Price"].DefaultCellStyle = style;
```
|
Convert the number on the code side. For example:
```
string.Format("{0:0.##}", 256.583); // "256.58"
string.Format("{0:0.##}", 256.586); // "256.59"
string.Format("{0:0.##}", 256.58); // "256.58"
string.Format("{0:0.##}", 256.5); // "256.5"
string.Format("{0:0.##}", 256.0); // "256"
//===============================
string.Format("{0:0.00}", 256.583); // "256.58"
string.Format("{0:0.00}", 256.586); // "256.59"
string.Format("{0:0.00}", 256.58); // "256.58"
string.Format("{0:0.00}", 256.5); // "256.50"
string.Format("{0:0.00}", 256.0); // "256.00"
//===============================
string.Format("{0:00.000}", 1.2345); // "01.235"
string.Format("{0:000.000}", 12.345); // "012.345"
string.Format("{0:0000.000}", 123.456); // "0123.456"
```
In your case:
```
<asp:TemplateField HeaderText="VatAmount">
    <ItemTemplate>
        <%# string.Format("{0:0.00}", Eval("VatAmount")) %>
    </ItemTemplate>
</asp:TemplateField>
```
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
Wrap your VatAmount expression in `round(..., 2)`: `round(Sum(((itemprice*Qty)*(abs(Vt.Rate)/100))),2)`
|
You can use the `Math.Round` function to keep only the required number of digits after the decimal point.
```
double myValue = 23.8000000000000;
double newValue=Math.Round(myValue, 2);
```
Result : newValue = 23.8
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
The underlying value is the same anyway. How you display it is a matter of **formatting**. Formatting in SQL is usually unnecessary. Formatting should be done on the client that receives the data from the database system.
Don't change anything in SQL. Simply format your grid to display the required number of digits.
|
Wrap your VatAmount expression in `round(..., 2)`: `round(Sum(((itemprice*Qty)*(abs(Vt.Rate)/100))),2)`
|
65,356,299
|
I am running VS Code (Version 1.52) with extensions Jupyter Notebook (2020.12) and Python (2020.12) on MacOS Catalina.
**Context:**
I have problems getting Intellisense to work properly in my Jupyter Notebooks in VS Code. Some have had some success with adding these config parameters to the global settings of VS Code:
```
"python.dataScience.runStartupCommands": [
"%config IPCompleter.greedy=True",
"%config IPCompleter.use_jedi = False"
]
```
I went ahead and added those as well, but then had to realize that all settings under `python.dataScience` are `Unknown Configuration Setting`. Any idea why this is and how I could get this to work?
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65356299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5197386/"
] |
Since November 2020 the Jupyter extension has been separated from the Python extension for VS Code, and the setting key has been renamed from `python.dataScience` to `jupyter` (see [the announcement](https://devblogs.microsoft.com/python/introducing-the-jupyter-extension-for-vs-code/)).
So in your case, rename `python.dataScience.runStartupCommands` to `jupyter.runStartupCommands`.
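Applied to the snippet from the question, the renamed key in `settings.json` would look like this (a sketch; the values are copied from the question):

```json
{
  "jupyter.runStartupCommands": [
    "%config IPCompleter.greedy=True",
    "%config IPCompleter.use_jedi = False"
  ]
}
```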
|
According to your description, you could refer to the following:
1. Whether in a "`.py`" file or an "`.ipynb`" file, we can use the shortcut key `Ctrl+Space` to open the suggested code-completion options:
[](https://i.stack.imgur.com/K3MlT.png)
2. It is recommended that you use the extension "Pylance", which provides outstanding language services for Python in VSCode, and related content will also be displayed in the Jupyter file:
[](https://i.stack.imgur.com/uU07N.png)
Combine these two methods:
[](https://i.stack.imgur.com/irIR3.png)
As for the setting "python.dataScience.runStartupCommands": since it shows "Unknown Configuration Setting", it is no longer used to configure Jupyter IntelliSense in VS Code.
|
49,145,059
|
In a dynamic system my base values are all functions of time, `d(t)`. I create the variable `d` using `d = Function('d')(t)` where `t = S('t')`
Obviously it's very common to have derivatives of d (rates of change like velocity etc.). However the default printing of `diff(d(t))` gives:-
```
Derivative(d(t), t)
```
and using pretty printing in ipython (for e.g.) gives a better looking version of:-
```
d/dt (d(t))
```
The functions which include the derivatives of `d(t)` are fairly long in my problems, however, and I'd like the printed representation to be something like `d'(t)` or `\dot{d}(t)` (LaTeX).
Is this possible in sympy? I can probably workaround this using `subs` but would prefer a generic **sympy\_print** function or something I could tweak.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49145059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4443898/"
] |
I do this by substitution. It is horribly stupid, but it works like a charm:
```
q = Function('q')(t)
q_d = Function('\\dot{q}')(t)
```
and then substitute with
```
alias = {q.diff(t):q_d, } # and higher derivatives etc..
hd = q.diff(t).subs(alias)
```
And the output `hd` has a pretty dot over its head!
As I said, this is a work-around, but it works; you just have to be careful to substitute correctly (also for `q_d.diff(t)`, which must become `q_d2`, and so on). You can keep one big list with all the replacements for printing and apply it after the relevant mathematical steps.
|
The [vector printing](http://docs.sympy.org/latest/modules/physics/vector/api/printing.html) module that you already found is the only place where such printing is implemented in SymPy.
```
from sympy.physics.vector import dynamicsymbols
from sympy.physics.vector.printing import vpprint, vlatex
d = dynamicsymbols('d')
vpprint(d.diff()) # ḋ
vlatex(d.diff()) # '\\dot{d}'
```
The regular printers (pretty, LaTeX, etc) do not support either prime or dot notation for derivatives. Their `_print_Derivative` methods are written so that they also work for multivariable expressions, where one has to specify a variable by using some sort of d/dx notation.
It would be nice to have an option for shorter derivative notation in general.
|
38,645,486
|
I'm sure that this is a pretty simple problem and that I am just missing something incredibly obvious, but the answer to this predicament has eluded me for several hours now.
My project directory structure looks like this:
```
-PhysicsMaterial
-Macros
__init__.py
Macros.py
-Modules
__init__.py
AvgAccel.py
AvgVelocity.py
-UnitTests
__init__.py
AvgAccelUnitTest.py
AvgVelocityUnitTest.py
__init__.py
```
Criticisms aside on my naming conventions and directory structure here, I cannot seem to be able to use relative imports. I'm attempting to relative import a Module file to be tested in AvgAccelUnitTest.py:
```
from .Modules import AvgAccel as accel
```
However, I keep getting:
```
ValueError: Attempted relative import in non-package
```
Since I have all of my `__init__.py` files set up throughout my structure, and I also have the top directory added to my PYTHONPATH, I am stumped. Why is Python not interpreting the package and importing the file correctly?
|
2016/07/28
|
[
"https://Stackoverflow.com/questions/38645486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6649949/"
] |
This occurs because you're running the script as `__main__`. When you run a script like this:
```
python /path/to/package/module.py
```
That file is loaded as `__main__`, not as `package.module`, so it can't do relative imports because it isn't part of a package.
This can lead to strange errors where a class defined in your script gets defined twice, once as `__main__.Class` and again as `package.module.Class`, which can cause `isinstance` checks to fail and similar oddities. Because of this, you generally shouldn't run your modules directly.
For your tests, you can remove the `__init__.py` inside the tests directory and just use absolute instead of relative imports. In fact, your tests probably shouldn't be inside your package at all.
Alternatively, you could create a test runner script that imports your tests and runs them.
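To see the difference concretely, here is a sketch that builds a throwaway package in a temp directory and runs one of its modules both ways (the file names `pkg`, `helper.py`, `mod.py` are made up for the demo):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "pkg")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "helper.py"), "w") as f:
        f.write("VALUE = 42\n")
    with open(os.path.join(pkg, "mod.py"), "w") as f:
        f.write("from .helper import VALUE\nprint(VALUE)\n")

    # Run as a module of the package: the relative import resolves.
    ok = subprocess.run([sys.executable, "-m", "pkg.mod"],
                        cwd=root, capture_output=True, text=True)
    print(ok.stdout.strip())    # 42

    # Run as a plain script: mod.py is loaded as __main__, so the
    # relative import fails with an ImportError.
    bad = subprocess.run([sys.executable, os.path.join(pkg, "mod.py")],
                         cwd=root, capture_output=True, text=True)
    print(bad.returncode != 0)  # True
```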
|
[How to fix "Attempted relative import in non-package" even with \_\_init\_\_.py](https://stackoverflow.com/questions/11536764/attempted-relative-import-in-non-package-even-with-init-py)
Well, guess it's on to using sys.path.append now. Clap and a half to @BrenBarn, @fireant, and @Ignacio Vazquez-Abrams
|
70,922,321
|
I've been coding with R for quite a while but I want to start learning and using python more for its machine learning applications. However, I'm quite confused as to how to properly install packages and set up the whole working environment. Unlike R where I suppose most people just use RStudio and directly install packages with `install.packages()`, there seems to be a variety of ways this can be done in python, including `pip install` `conda install` and there is also the issue of doing it in the command prompt or one of the IDEs. I've downloaded python 3.8.5 and anaconda3 and some of my most burning questions right now are:
1. When to use which command for installing packages? (and also should I always do it in the command prompt aka cmd on windows instead of inside jupyter notebook)
2. How to navigate the cmd syntax/coding (for example the python documentation for installing packages has this piece of code: `py -m pip install "SomeProject"` but I am completely unfamiliar with this syntax and how to use it - so in the long run do I also have to learn what goes on in the command prompt or does most of the operations occur in the IDE and I mostly don't have to touch the cmd?)
3. How to set up a working directory of sorts (like `setwd()` in R) such that my `.ipynb` files can be saved to other directories or even better if I can just directly start my IDE from another file destination?
I've tried looking at some online resources but they mostly deal with coding basics and the python language instead of these technical aspects of the set up, so I would greatly appreciate some advice on how to navigate and set up the python working environment in general. Thanks a lot!
|
2022/01/31
|
[
"https://Stackoverflow.com/questions/70922321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14540717/"
] |
I had to import the metadata of the `Table` I manually defined.
|
I came to this page because autogenerating migrations no longer worked after the upgrade to SQLAlchemy 1.4: the metadata were no longer recognized, and the automatically generated migration deleted every table (DROP TABLE in `upgrade`, CREATE TABLE in `downgrade`).
I first tried to import the table metadata like this:
```
target_metadata = [orm.table_company.metadata, orm.table_user.metadata, orm.table_3.metadata, orm.table_4.metadata]
```
It resulted in the following error code:
```
alembic/autogenerate/api.py", line 462, in table_key_to_table
ValueError: Duplicate table keys across multiple MetaData objects: "tb_company", "tb_user", "tb_3", "tb_4"
```
I have found that rather than importing one metadata object per table, you can access them all in a single pass with `target_metadata = orm.mapper_registry.metadata`:
SQLAlchemy 1.4
===============
adapters/orm.py
---------------
```py
import uuid

from myapp import domain
from sqlalchemy import Column, ForeignKey, Integer, String, Table, text
from sqlalchemy.dialects.postgresql import UUID  # UUID(as_uuid=True) implies Postgres
from sqlalchemy.orm import registry, relationship
from sqlalchemy.schema import MetaData
metadata = MetaData()
mapper_registry = registry(metadata=metadata)
# define your tables here
table_user = Table(
"tb_user",
mapper_registry.metadata,
Column("id", Integer, primary_key=True, autoincrement=True),
Column(
"pk_user",
UUID(as_uuid=True),
primary_key=True,
server_default=text("uuid_generate_v4()"),
default=uuid.uuid4,
unique=True,
nullable=False,
),
Column(
"fk_company",
UUID(as_uuid=True),
ForeignKey("tb_company.pk_company"),
),
Column("first_name", String(255)),
Column("last_name", String(255)),
)
# map your domain objects to the tables
def start_mappers():
mapper_registry.map_imperatively(
domain.model.User,
table_user,
properties={
"company": relationship(
domain.Company,
backref="users"
)
},
)
```
alembic/env.py
--------------
```py
from myapp.adapters import orm
# (...)
target_metadata = orm.mapper_registry.metadata
```
SQLAlchemy 1.3
===============
Using classical / imperative mapping, Alembic could generate migrations with SQLAlchemy 1.3 using the following syntax:
adapters/orm.py
---------------
```py
from myapp import domain
# (...)
from sqlalchemy import Table, Column, Integer
from sqlalchemy.orm import mapper, relationship
from sqlalchemy.schema import MetaData
metadata = MetaData()
# define your tables here
table_user = Table(
"tb_user",
metadata,
Column("id", Integer, primary_key=True, autoincrement=True),
# (...)
)
# map your domain objects to the tables
def start_mappers():
    user_mapper = mapper(
        domain.User,
        table_user,
properties={
"company": relationship(
domain.Company,
backref="users"
),
},
)
```
alembic/env.py
--------------
```py
from myapp.adapters import orm
# (...)
target_metadata = orm.metadata
```
|
72,640,228
|
I'm trying to replace a single character '°' with '?' in an [edf](https://www.edfplus.info/specs/edf.html) file with binary encoding ([File](https://github.com/warren-manuel/sleep-edf/tree/main/Files)). I need to change all occurrences of it in the first line.
I cannot open it without specifying read binary. (The following fails with UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 3008: invalid start byte)
```
with open('heartbeat-baseline-700001.edf') as fin:
lines = fin.readlines()
```
I ended up trying to replace it via this code
```
with open("heartbeat-baseline-700001.edf", "rb") as text_file:
lines = text_file.readlines()
lines[1] = lines[1].replace(str.encode('°'), str.encode('?'))
for i,line in enumerate(lines):
with open('heartbeat-baseline-700001_python.edf', 'wb') as fout:
fout.write(line)
```
What I end up with is a file that is drastically smaller (7KB vs 79MB) and does not work.
What seems to be the issue with this code? Is there a simpler way to replace the character?
|
2022/06/16
|
[
"https://Stackoverflow.com/questions/72640228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4126542/"
] |
You need to have **Advanced Access** to the **business\_management** permission.
You can submit an app review for advanced access; if it is approved, you can call `/bm-id/adaccount` to create an ad account with the required parameters.
|
Also, you can't create an ad account on a real FB user. It works well with test users, but you need special permissions from FB to do this on a real user's account.
|
7,558,814
|
>
> **Possible Duplicate:**
>
> [Python replace multiple strings](https://stackoverflow.com/questions/6116978/python-replace-multiple-strings)
>
>
>
I am looking to replace `“ “`, `“\r”`, `“\n”`, `“<”`, `“>”`, `“’”` (single quote), and `‘”’` (double quote) with `“”` (empty). I’m also looking to replace `“;”` and `“|”` with `“,”`.
Would this be handled by `re.search`, since I want to be able to search anywhere in the text, or would I use `re.sub`?
What would be the best way to handle this? I have found bits and pieces, but not where multiple regexes are handled.
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7558814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/454488/"
] |
If you want to remove all occurrences of those characters, just put them all in a character class and do [`re.sub()`](http://docs.python.org/library/re.html#re.sub)
```
import re

your_str = re.sub(r'[ \r\n<>\'"]+', '', your_str)  # strip the unwanted characters
your_str = re.sub(r'[;|]', ',', your_str)          # turn ; and | into commas
```
You have to call `re.sub()` for every replacement rule.
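For example (the character class here also covers the `<` and `>` from the question; the sample string is made up):

```python
import re

s = 'He said; "keep <this> clean" | done\r\n'
s = re.sub(r'[ \r\n<>\'"]+', '', s)  # strip the unwanted characters
s = re.sub(r'[;|]', ',', s)          # turn the separators into commas
print(s)  # Hesaid,keepthisclean,done
```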
|
```
import re
reg = re.compile('([ \r\n\'"]+)|([;|]+)')
ss = 'bo ba\rbu\nbe\'bi"by-ja;ju|jo'
def repl(mat, di = {1:'',2:','}):
return di[mat.lastindex]
print reg.sub(repl,ss)
```
Note: `|` loses its special meaning inside a character class (between brackets).
|
7,558,814
|
>
> **Possible Duplicate:**
>
> [Python replace multiple strings](https://stackoverflow.com/questions/6116978/python-replace-multiple-strings)
>
>
>
I am looking to replace `“ “`, `“\r”`, `“\n”`, `“<”`, `“>”`, `“’”` (single quote), and `‘”’` (double quote) with `“”` (empty). I’m also looking to replace `“;”` and `“|”` with `“,”`.
Would this be handled by `re.search`, since I want to be able to search anywhere in the text, or would I use `re.sub`?
What would be the best way to handle this? I have found bits and pieces, but not where multiple regexes are handled.
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7558814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/454488/"
] |
If you need to replace only single characters then you could use [`str.translate()`](http://docs.python.org/library/stdtypes.html#str.translate):
```
import string
table = string.maketrans(';|', ',,')
deletechars = ' \r\n<>\'"'
print "ex'a;m|ple\n".translate(table, deletechars)
# -> exa,m,ple
```
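That snippet is Python 2; in Python 3, `string.maketrans` is gone and the characters to delete go in the third argument of `str.maketrans`. A sketch of the same idea:

```python
# Map ; and | to commas and delete the unwanted characters
# in a single translate() call (Python 3).
table = str.maketrans(';|', ',,', ' \r\n<>\'"')
print("ex'a;m|ple\n".translate(table))  # exa,m,ple
```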
|
If you want to remove all occurrences of those characters, just put them all in a character class and do [`re.sub()`](http://docs.python.org/library/re.html#re.sub)
```
import re

your_str = re.sub(r'[ \r\n<>\'"]+', '', your_str)  # strip the unwanted characters
your_str = re.sub(r'[;|]', ',', your_str)          # turn ; and | into commas
```
You have to call `re.sub()` for every replacement rule.
|
7,558,814
|
>
> **Possible Duplicate:**
>
> [Python replace multiple strings](https://stackoverflow.com/questions/6116978/python-replace-multiple-strings)
>
>
>
I am looking to replace `“ “`, `“\r”`, `“\n”`, `“<”`, `“>”`, `“’”` (single quote), and `‘”’` (double quote) with `“”` (empty). I’m also looking to replace `“;”` and `“|”` with `“,”`.
Would this be handled by `re.search`, since I want to be able to search anywhere in the text, or would I use `re.sub`?
What would be the best way to handle this? I have found bits and pieces, but not where multiple regexes are handled.
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7558814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/454488/"
] |
If you need to replace only single characters then you could use [`str.translate()`](http://docs.python.org/library/stdtypes.html#str.translate):
```
import string
table = string.maketrans(';|', ',,')
deletechars = ' \r\n<>\'"'
print "ex'a;m|ple\n".translate(table, deletechars)
# -> exa,m,ple
```
|
```
import re
reg = re.compile('([ \r\n\'"]+)|([;|]+)')
ss = 'bo ba\rbu\nbe\'bi"by-ja;ju|jo'
def repl(mat, di = {1:'',2:','}):
return di[mat.lastindex]
print reg.sub(repl,ss)
```
Note: `|` loses its special meaning inside a character class (between brackets).
|
53,632,979
|
I recently started learning Python and made a simple program with 2 balls in a canvas that move according to 2D vector rules. I want to increase the number of balls using a list in Python. Here is the source:
```
import time
import random
from tkinter import *
import numpy as np
import math
window = Tk()
canvas = Canvas(window,width=600,height=400)
canvas.pack()
canvas.create_rectangle(50,50,550,350)
R = 15
x1 = random.randrange(50+R,550-R)
y1 = random.randrange(50+R,350-R)
x2 = random.randrange(50+R,550-R)
y2 = random.randrange(50+R,350-R)
vx1 = random.randrange(1 , 10)
vy1 = random.randrange(1 , 10)
vx2 = random.randrange(1 , 10)
vy2 = random.randrange(1 , 10)
ntime = 100000
dt = .1
for iter in range(ntime):
x1 += vx1*dt
y1 += vy1*dt
x2 += vx2*dt
y2 += vy2*dt
c1 = canvas.create_oval(x1-R,y1-R,x1+R,y1+R,fill="red")
c2 = canvas.create_oval(x2-R,y2-R,x2+R,y2+R,fill="blue")
if (x1 > 550-R):
vx1 = -vx1
if (x1 < 50+R ):
vx1 = -vx1
if (x2 > 550-R):
vx2 = -vx2
if (x2 < 50+R ):
vx2 = -vx2
if (y1 > 350-R) or (y1 < 50+R):
vy1 = -vy1
if (y2 > 350-R) or (y2 < 50+R):
vy2 = -vy2
if (x2-x1)**2 + (y2-y1)**2 <= 4*R*R:
vector1 = np.array([x1,y1])
vector2 = np.array([x2,y2])
vvector1 = np.array([vx1,vy1])
vvector2 = np.array([vx2,vy2])
nvector = np.array([x2-x1,y2-y1])
un = (nvector)/((sum(nvector*nvector))**(1/2))
tvector = np.array([y1-y2,x2-x1])
ut = tvector/((sum(nvector*nvector))**(1/2))
vector1midn = sum(vvector1*un)
vector2midn = sum(vvector2*un)
vector1midt = sum(vvector1*ut)
vector2midt = sum(vvector2*ut)
vector1after = vector2midn*un + vector1midt*ut
vector2after = vector1midn*un + vector2midt*ut
vx1 = vector1after[0]
vy1 = vector1after[1]
vx2 = vector2after[0]
vy2 = vector2after[1]
txt = canvas.create_text(100,30,text=str(iter),font=('consolas', '20',
'bold'))
window.update()
time.sleep(0.002)
if iter == ntime-1 : break
canvas.delete(c1)
canvas.delete(c2)
canvas.delete(txt)
window.mainloop()
```
The exact question is: how do I turn `c1` and `c2` above into many balls, without simply typing out every single one?
|
2018/12/05
|
[
"https://Stackoverflow.com/questions/53632979",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10749521/"
] |
1) You should be. I highly recommend avoiding secondary indexes and ALLOW FILTERING; consider them advanced features for corner cases.
2) It can be more efficient with an index, but it is still horrible, just horrible in new ways. There are only very few scenarios where secondary indexes are acceptable, and very few where ALLOW FILTERING is acceptable. You are looking at an overlap of the two.
Maybe take a step back. You're building POJOs to represent objects and trying to map them onto Cassandra. The approach you should take when data modeling with Cassandra is to think of the queries you are going to make and design tables to match those queries, not the data. It is normal to end up with multiple tables that you update on changes (disk space and writes are cheap) so that your reads can hit one partition efficiently and get everything you need in one go. Denormalize the data; Cassandra is not relational, and third normal form is generally a bad thing here.
|
As a worst case for your use case, consider searching for an Austrian composer born in 1756. Yes, you can find him (Mozart) in a table of all humans who ever lived by intersecting the index of nationality=Austria, the index of birth=1756, and the index of profession=composer. But Cassandra implements such a query very inefficiently: it either needs to retrieve huge lists and intersect them or, what it really does, retrieve just one huge list (e.g., the list of all Austrians who ever lived) and then filter it by the other criteria (birth and profession). This is why you need the "ALLOW FILTERING", and why it is not a recommended use case for Cassandra's original secondary index.
Unlike Cassandra's original secondary index, search engines are geared toward exactly these sorts of intersections and have special algorithms for computing them efficiently. In particular, search engines usually have "skip lists", which allow finding a small intersection of two lengthy lists by quickly skipping through one list based on entries in the other. They also have logic for choosing which list (the shorter one, i.e., the rarer word) to start the process with.
As you may know, Cassandra has a **second** secondary-index implementation, known as SASI. SASI (see <https://github.com/apache/cassandra/blob/trunk/doc/SASI.md>) has many search-engine-oriented improvements over Cassandra's original secondary-index implementation, and if I understand correctly (I have never tried it myself), efficient intersection is one of those features. So maybe switching to SASI is a good idea for your use case.
|
69,301,316
|
I'm currently trying to get the `mean()` of a group in my dataframe (`tdf`), but I have a mix of NaN values and filled values in my dataset. Example shown below:
| Test # | a | b |
| --- | --- | --- |
| 1 | 1 | 1 |
| 1 | 2 | NaN |
| 1 | 3 | 2 |
| 2 | 4 | 3 |
My code needs to take this dataset, and make a new dataset containing the mean, std, and 95% interval of the set.
```
i = 0
num_timeframes = 2 #writing this in for example sake
new_df = pd.DataFrame(columns = tdf.columns)
while i < num_timeframes:
results = tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).mean()
new_df = pd.concat([new_df,results])
results = tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).std()
new_df = pd.concat([new_df,results])
results = 2*tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).std()
new_df = pd.concat([new_df,results])
new_df['Test #'] = new_df['Test #'].fillna(i) #fill out test number values
i+=1
```
For simplicity, i will show the desired output on the first pass of the while loop, only calculating the mean. The problem impacts every row however. The expected output for the mean of Test # 1 is shown below:
| Test # | a | b |
| --- | --- | --- |
| 1 | 2 | 1.5 |
However, columns which contain any NaN rows are calculating the entire mean as NaN resulting in the output shown below
| Test # | a | b |
| --- | --- | --- |
| 1 | 2 | NaN |
I have tried passing `skipna=True`, but got an error stating that `mean` doesn't have a `skipna` argument. I'm really at a loss here because it was my understanding that `df.mean()` ignores NaN rows by default. I have limited experience with Python, so any help is greatly appreciated.
|
2021/09/23
|
[
"https://Stackoverflow.com/questions/69301316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16985105/"
] |
You can use
```py
(?<![a-zA-Z0-9_.-])@(?=([A-Za-z]+[A-Za-z0-9_-]*))\1(?![@\w])
(?a)(?<![\w.-])@(?=([A-Za-z][\w-]*))\1(?![@\w])
```
See the [regex demo](https://regex101.com/r/KFHwo8/3). *Details*:
* `(?<![a-zA-Z0-9_.-])` - a negative lookbehind that matches a location that is not immediately preceded with ASCII digits, letters, `_`, `.` and `-`
* `@` - a `@` char
* `(?=([A-Za-z]+[A-Za-z0-9_-]*))` - a positive lookahead with a capturing group inside that captures one or more ASCII letters and then zero or more ASCII letters, digits, `-` or `_` chars
* `\1` - the Group 1 value (backreferences are atomic, no backtracking is allowed through them)
* `(?![@\w])` - a negative lookahead that fails the match if there is a word char (letter, digit or `_`) or a `@` char immediately to the right of the current location.
Note I put hyphens at the end of the character classes, this is best practice.
The `(?a)(?<![\w.-])@(?=([A-Za-z][\w-]*))\1(?![@\w])` alternative uses shorthand character classes and the `(?a)` inline modifier (the equivalent of `re.ASCII` / `re.A`), which makes `\w` match only ASCII chars (as in the original version). Remove `(?a)` if you plan to match any Unicode digits/letters.
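A quick check of the first pattern (the sample text is made up; note that with one capture group, `re.findall` returns the group values):

```python
import re

pattern = r'(?<![a-zA-Z0-9_.-])@(?=([A-Za-z]+[A-Za-z0-9_-]*))\1(?![@\w])'
text = 'hi @alice, mail bob@example.com, ping @bob_2!'
# bob@example.com is skipped because of the lookbehind before @
print(re.findall(pattern, text))  # ['alice', 'bob_2']
```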
|
Another option is to assert a whitespace boundary to the left, and assert no word char or @ sign to the right.
```
(?<!\S)@([A-Za-z]+[\w-]+)(?![@\w])
```
The pattern matches:
* `(?<!\S)` Negative lookbehind, assert not a non whitespace char to the left
* `@` Match literally
* `([A-Za-z]+[\w-]+)` Capture group1, match 1+ chars A-Za-z and then 1+ word chars or `-`
* `(?![@\w])` Negative lookahead, assert not @ or word char to the right
[Regex demo](https://regex101.com/r/v2OGdL/1)
Or match a non word boundary `\B` before the @ instead of a lookbehind.
```
\B@([A-Za-z]+[\w-]+)(?![@\w])
```
[Regex demo](https://regex101.com/r/GRV8x4/1)
|
29,240,807
|
What is the complexity of the function `most_common` provided by the `collections.Counter` object in Python?
More specifically, is `Counter` keeping some kind of sorted list while it's counting, allowing it to perform the `most_common` operation faster than `O(n)` when `n` is the number of (unique) items added to the counter? For your information, I am processing a large amount of text data trying to find the n-th most frequent tokens.
I checked the [official documentation](https://docs.python.org/library/collections.html#collections.Counter) and the [TimeComplexity article](https://wiki.python.org/moin/TimeComplexity) on the CPython wiki but I couldn't find the answer.
|
2015/03/24
|
[
"https://Stackoverflow.com/questions/29240807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3397458/"
] |
From the source code of [collections.py](https://github.com/python/cpython/blob/a8e814db96ebfeb1f58bc471edffde2176c0ae05/Lib/collections/__init__.py#L571), we see that if we don't specify a number of returned elements, `most_common` returns a sorted list of the counts. This is an `O(n log n)` algorithm.
If we use `most_common` to return `k > 1` elements, then we use [`heapq.nlargest`](https://github.com/python/cpython/blob/a8e814db96ebfeb1f58bc471edffde2176c0ae05/Lib/heapq.py#L521). This is an `O(k) + O((n - k) log k) + O(k log k)` algorithm, which is very good for a small constant `k`, since it's essentially linear. The `O(k)` part comes from heapifying the initial `k` counts, the second part from `n - k` calls to the `heappushpop` method, and the third part from sorting the final heap of `k` elements. Since `k <= n` we can conclude that the complexity is:
>
> O(n log k)
>
>
>
If `k = 1` then it's easy to show that the complexity is:
>
> O(n)
>
>
>
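Both paths in action (the counts are chosen without ties so the output order is deterministic):

```python
from collections import Counter

c = Counter('aaabbc')
print(c.most_common())   # [('a', 3), ('b', 2), ('c', 1)]  -- full sort, O(n log n)
print(c.most_common(2))  # [('a', 3), ('b', 2)]            -- heapq.nlargest, O(n log k)
print(c.most_common(1))  # [('a', 3)]                      -- single pass, O(n)
```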
|
[The source](https://github.com/python/cpython/blob/1b85f4ec45a5d63188ee3866bd55eb29fdec7fbf/Lib/collections/__init__.py#L575) shows exactly what happens:
```
def most_common(self, n=None):
'''List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
>>> Counter('abracadabra').most_common(3)
[('a', 5), ('r', 2), ('b', 2)]
'''
# Emulate Bag.sortedByCount from Smalltalk
if n is None:
return sorted(self.iteritems(), key=_itemgetter(1), reverse=True)
return _heapq.nlargest(n, self.iteritems(), key=_itemgetter(1))
```
`heapq.nlargest` is defined in [heapq.py](https://github.com/python/cpython/blob/1b85f4ec45a5d63188ee3866bd55eb29fdec7fbf/Lib/heapq.py#L524)
|
54,565,417
|
When I'm visiting my website (<https://osm-messaging-platform.appspot.com>), I get this error on the main webpage:
```
502 Bad Gateway. nginx/1.14.0 (Ubuntu).
```
It's really weird, since when I run it locally
```
python app.py
```
I get no errors, and my app and the website load fine.
I've already tried looking it up, but most of the answers I've found on stack overflow either have no errors or don't relate to me. Here is the error when I look at my GCloud logs:
```
019-02-07 02:07:05 default[20190206t175104] Traceback (most recent
call last): File "/env/lib/python3.7/site-
packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process() File "/env/lib/python3.7/site-
packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
129, in init_process self.load_wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
138, in load_wsgi self.wsgi = self.app.wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in
wsgi self.callable = self.load() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52,
in load return self.load_wsgiapp() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41,
in load_wsgiapp return util.import_app(self.app_uri) File
"/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in
import_app __import__(module) ModuleNotFoundError: No module
named 'main'
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [25] [INFO] Worker exiting (pid: 25)
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Shutting down: Master
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Reason: Worker failed to boot.
```
And here are the contents of my app.yaml file:
```
runtime: python37
handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
static_dir: static
- url: /.*
script: auto
```
I expected it to show my website, but it didn't. Can anyone help?
|
2019/02/07
|
[
"https://Stackoverflow.com/questions/54565417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10808093/"
] |
The error is produced because the App Engine Standard Python37 runtime handles the requests in the `main.py` file by default. I guess that you don't have this file and you are handling the requests in the `app.py` file.
Also the logs traceback is pointing to it: `ModuleNotFoundError: No module named 'main'`
Change the name of the `app.py` file to `main.py` and try again.
As a general rule it is recommended to follow the file structure presented in the [App Engine Standard documentation](https://cloud.google.com/appengine/docs/standard/python3/building-app/writing-web-service#structuring_your_web_service_files):
* `your-app/`
+ `app.yaml`
+ `main.py`
+ `requirements.txt`
+ `static/`
- `script.js`
- `style.css`
+ `templates/`
- `index.html`
I believe this would be overkill for your situation, but if you need a custom [entrypoint](https://cloud.google.com/appengine/docs/standard/python3/config/appref#entrypoint), read [this Python3 runtime documentation](https://cloud.google.com/appengine/docs/standard/python3/runtime#application_startup) to learn how to configure it.
|
My mistake was naming the main app "main", which conflicted with `main.py`. It worked fine locally because local runs did not use `main.py`. I changed the name to `root` and everything worked fine. It took me a whole day to figure it out.
|
54,565,417
|
When I'm visiting my website (<https://osm-messaging-platform.appspot.com>), I get this error on the main webpage:
```
502 Bad Gateway. nginx/1.14.0 (Ubuntu).
```
It's really weird, since when I run it locally
```
python app.py
```
I get no errors, and my app and the website load fine.
I've already tried looking it up, but most of the answers I've found on stack overflow either have no errors or don't relate to me. Here is the error when I look at my GCloud logs:
```
019-02-07 02:07:05 default[20190206t175104] Traceback (most recent
call last): File "/env/lib/python3.7/site-
packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process() File "/env/lib/python3.7/site-
packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
129, in init_process self.load_wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
138, in load_wsgi self.wsgi = self.app.wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in
wsgi self.callable = self.load() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52,
in load return self.load_wsgiapp() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41,
in load_wsgiapp return util.import_app(self.app_uri) File
"/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in
import_app __import__(module) ModuleNotFoundError: No module
named 'main'
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [25] [INFO] Worker exiting (pid: 25)
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Shutting down: Master
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Reason: Worker failed to boot.
```
And here are the contents of my app.yaml file:
```
runtime: python37
handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
static_dir: static
- url: /.*
script: auto
```
I expected it to show my website, but it didn't. Can anyone help?
|
2019/02/07
|
[
"https://Stackoverflow.com/questions/54565417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10808093/"
] |
The error is produced because the App Engine Standard Python37 runtime handles the requests in the `main.py` file by default. I guess that you don't have this file and you are handling the requests in the `app.py` file.
Also the logs traceback is pointing to it: `ModuleNotFoundError: No module named 'main'`
Change the name of the `app.py` file to `main.py` and try again.
As a general rule it is recommended to follow the file structure presented in the [App Engine Standard documentation](https://cloud.google.com/appengine/docs/standard/python3/building-app/writing-web-service#structuring_your_web_service_files):
* `your-app/`
+ `app.yaml`
+ `main.py`
+ `requirements.txt`
+ `static/`
- `script.js`
- `style.css`
+ `templates/`
- `index.html`
I believe this would be overkill for your situation, but if you need a custom [entrypoint](https://cloud.google.com/appengine/docs/standard/python3/config/appref#entrypoint), read [this Python3 runtime documentation](https://cloud.google.com/appengine/docs/standard/python3/runtime#application_startup) to learn how to configure it.
|
I resolved the issue in `main.py` by changing the host from:
```
app.run(host="127.0.0.1", port=8080, debug=True)
```
to
```
app.run(host="0.0.0.0", port=8080, debug=True)
```
|
54,565,417
|
When I'm visiting my website (<https://osm-messaging-platform.appspot.com>), I get this error on the main webpage:
```
502 Bad Gateway. nginx/1.14.0 (Ubuntu).
```
It's really weird, since when I run it locally
```
python app.py
```
I get no errors, and my app and the website load fine.
I've already tried looking it up, but most of the answers I've found on stack overflow either have no errors or don't relate to me. Here is the error when I look at my GCloud logs:
```
019-02-07 02:07:05 default[20190206t175104] Traceback (most recent
call last): File "/env/lib/python3.7/site-
packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process() File "/env/lib/python3.7/site-
packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
129, in init_process self.load_wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line
138, in load_wsgi self.wsgi = self.app.wsgi() File
"/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in
wsgi self.callable = self.load() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52,
in load return self.load_wsgiapp() File
"/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41,
in load_wsgiapp return util.import_app(self.app_uri) File
"/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in
import_app __import__(module) ModuleNotFoundError: No module
named 'main'
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [25] [INFO] Worker exiting (pid: 25)
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Shutting down: Master
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05
+0000] [8] [INFO] Reason: Worker failed to boot.
```
And here are the contents of my app.yaml file:
```
runtime: python37
handlers:
# This configures Google App Engine to serve the files in the app's
static
# directory.
- url: /static
static_dir: static
- url: /.*
script: auto
```
I expected it to show my website, but it didn't. Can anyone help?
|
2019/02/07
|
[
"https://Stackoverflow.com/questions/54565417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10808093/"
] |
My mistake was naming the main app "main", which conflicted with main.py. It worked fine locally because the local run did not use main.py. I changed it to root and everything worked fine. It took me a whole day to figure it out.
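For context: on the App Engine Python 3.7 runtime, gunicorn's default entrypoint is `main:app`, i.e. it imports a module named `main` and looks for an `app` object in it. If the app object lives elsewhere (e.g. in `root.py`, as in my fix), you can also make the entrypoint explicit in `app.yaml`; a minimal sketch, where the module name `root` is just this example's assumption:

```yaml
runtime: python37
# The default entrypoint is "gunicorn -b :$PORT main:app".
# If the Flask app object lives in root.py instead, point gunicorn at it:
entrypoint: gunicorn -b :$PORT root:app
```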
|
I resolved the issue in `main.py` by changing the host from:
```
app.run(host="127.0.0.1", port=8080, debug=True)
```
to
```
app.run(host="0.0.0.0", port=8080, debug=True)
```
|
70,147,775
|
I am just learning python and was trying to loop through a list and use pop() to remove all items from original list and add them to a new list formatted. Here is the code I use:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
The result is this :
`['Korean', 'French', 'Spanish']`
I do not understand why I only get 3 out of the 5 items in the returned list. Can someone please explain?
Thank you,
Ruslan
|
2021/11/28
|
[
"https://Stackoverflow.com/questions/70147775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17534746/"
] |
Look:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
print(a)
print(languages)
formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
Notice what "a" is:
```
russian
['russian', 'english', 'spanish', 'french', 'korean']
['Korean']
english
['russian', 'english', 'spanish', 'french']
['Korean', 'French']
spanish
['russian', 'english', 'spanish']
['Korean', 'French', 'Spanish']
```
So `a` is moving ahead while the list shrinks, and eventually the loop cannot continue.
|
What you're doing here is changing the size of the list while it is being processed. By the time you've got to spanish in the list (3rd item), you've already removed korean and french, so once you've removed spanish, there are no items left to loop through. You can use the code below to get around this.
```py
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in range(len(languages)):
formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
This iterates over the length of the list instead, which is calculated only once and therefore doesn't change.
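Equivalently, since the goal is to drain the list, a `while` loop makes the "pop until empty" intent explicit and avoids iterating and mutating at the same time. Note that `pop()` takes items from the end, so the result comes out reversed:

```python
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []

while languages:  # loop until the list is empty
    formatted_languages.append(languages.pop().title())

print(formatted_languages)
# ['Korean', 'French', 'Spanish', 'English', 'Russian']
```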
|
70,147,775
|
I am just learning python and was trying to loop through a list and use pop() to remove all items from original list and add them to a new list formatted. Here is the code I use:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
The result is this :
`['Korean', 'French', 'Spanish']`
I do not understand why I only get 3 out of the 5 items in the returned list. Can someone please explain?
Thank you,
Ruslan
|
2021/11/28
|
[
"https://Stackoverflow.com/questions/70147775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17534746/"
] |
Look:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
print(a)
print(languages)
formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
Notice what "a" is:
```
russian
['russian', 'english', 'spanish', 'french', 'korean']
['Korean']
english
['russian', 'english', 'spanish', 'french']
['Korean', 'French']
spanish
['russian', 'english', 'spanish']
['Korean', 'French', 'Spanish']
```
So `a` is moving ahead while the list shrinks, and eventually the loop cannot continue.
|
The reason you don't get the full values is because you are making the list shorter on each iteration.
Consider what is happening in your loop:
```
for a in languages:
formatted_languages.append(languages.pop().title())
```
By the time `a` is equal to the 3rd element in your list, you have already removed 3 elements from it (including the current value of `a`), so the loop can't get the next value from the list and ends.
If all you need to do is to capitalize all the strings in the list,
you can just do:
```
formatted_languages = [entry.title() for entry in languages]
```
You can also set the original list to `[entry.title() for entry in languages]` since you are popping the values from it anyway:
```
languages = [entry.title() for entry in languages]
```
If you prefer to use a for loop similar to what you have, you should utilize the `a` value in your loop:
```
for a in languages:
formatted_languages.append(a.title())
```
I will point out that some of the comments and answers here suggested making a copy of the list.
While this will work, it is not optimal, since it wastes memory; if the list is very large, that might be impactful.
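If memory is the concern, there is also a middle ground: rewrite each element in place by index, which needs neither a second list nor a copy. A small sketch (not from the original answer):

```python
languages = ['russian', 'english', 'spanish', 'french', 'korean']

# Title-case every element in place, one index at a time.
for i in range(len(languages)):
    languages[i] = languages[i].title()

print(languages)  # ['Russian', 'English', 'Spanish', 'French', 'Korean']
```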
|
17,509,607
|
I have seen questions like this asked many, many times, but none of the answers were helpful.
I'm trying to submit data to a form on the web; I've tried requests and urllib, and neither has worked.
For example, here is code that should search for the [python] tag on SO:
```
import urllib
import urllib2
url = 'http://stackoverflow.com/'
# Prepare the data
values = {'q' : '[python]'}
data = urllib.urlencode(values)
# Send HTTP POST request
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
# Print the result
print html
```
Yet when I run it, I get the HTML source of the home page.
here is an example of using requests:
```
import requests
data= {
'q': '[python]'
}
r = requests.get('http://stackoverflow.com', data=data)
print r.text
```
Same result! I don't understand why these methods aren't working. I've tried them on various sites with no success, so if anyone has successfully done this, please show me how!
Thanks so much!
|
2013/07/07
|
[
"https://Stackoverflow.com/questions/17509607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2292723/"
] |
If you want to pass `q` as a parameter in the URL using [`requests`](http://docs.python-requests.org/), use the `params` argument, not `data` (see [Passing Parameters In URLs](http://docs.python-requests.org/en/latest/user/quickstart.html#passing-parameters-in-urls)):
```
r = requests.get('http://stackoverflow.com', params=data)
```
This will request <https://stackoverflow.com/?q=%5Bpython%5D>, which isn't what you are looking for.
You really want to *`POST`* to a *form*. Try this:
```
r = requests.post('https://stackoverflow.com/search', data=data)
```
This is essentially the same as *`GET`*-ting <https://stackoverflow.com/questions/tagged/python>, but I think you'll get the idea from this.
|
```
import urllib
import urllib2
url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
'location' : 'Northampton',
'language' : 'Python' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
```
This makes a POST request with the data specified in `values`. We need urllib to encode the data and then urllib2 to send the request.
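Note that in Python 3 `urllib` and `urllib2` were merged; a rough equivalent of the snippet above uses `urllib.parse` and `urllib.request` (the URL is the same placeholder as above, and the actual network call is left commented out):

```python
from urllib import parse, request

values = {'name': 'Michael Foord',
          'location': 'Northampton',
          'language': 'Python'}

data = parse.urlencode(values).encode('utf-8')  # POST bodies must be bytes
req = request.Request('http://www.someserver.com/cgi-bin/register.cgi', data=data)
# Supplying data makes urlopen issue a POST:
# response = request.urlopen(req)
# the_page = response.read()
```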
|
17,509,607
|
I have seen questions like this asked many, many times, but none of the answers were helpful.
I'm trying to submit data to a form on the web; I've tried requests and urllib, and neither has worked.
For example, here is code that should search for the [python] tag on SO:
```
import urllib
import urllib2
url = 'http://stackoverflow.com/'
# Prepare the data
values = {'q' : '[python]'}
data = urllib.urlencode(values)
# Send HTTP POST request
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
# Print the result
print html
```
Yet when I run it, I get the HTML source of the home page.
here is an example of using requests:
```
import requests
data= {
'q': '[python]'
}
r = requests.get('http://stackoverflow.com', data=data)
print r.text
```
Same result! I don't understand why these methods aren't working. I've tried them on various sites with no success, so if anyone has successfully done this, please show me how!
Thanks so much!
|
2013/07/07
|
[
"https://Stackoverflow.com/questions/17509607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2292723/"
] |
If you want to pass `q` as a parameter in the URL using [`requests`](http://docs.python-requests.org/), use the `params` argument, not `data` (see [Passing Parameters In URLs](http://docs.python-requests.org/en/latest/user/quickstart.html#passing-parameters-in-urls)):
```
r = requests.get('http://stackoverflow.com', params=data)
```
This will request <https://stackoverflow.com/?q=%5Bpython%5D>, which isn't what you are looking for.
You really want to *`POST`* to a *form*. Try this:
```
r = requests.post('https://stackoverflow.com/search', data=data)
```
This is essentially the same as *`GET`*-ting <https://stackoverflow.com/questions/tagged/python>, but I think you'll get the idea from this.
|
The mechanize library for Python is also great, and even lets you submit forms. You can use the following code to create a browser object and make requests.
```
import mechanize,re
br = mechanize.Browser()
br.set_handle_robots(False) # ignore robots
br.set_handle_refresh(False) # can sometimes hang without this
br.addheaders = [('User-agent', 'Firefox')]
br.open( "http://google.com" )
br.select_form( 'f' )
br.form[ 'q' ] = 'foo'
br.submit()
resp = None
for link in br.links():
siteMatch = re.compile( 'www.foofighters.com' ).search( link.url )
if siteMatch:
resp = br.follow_link( link )
break
content = resp.get_data()
print content
```
|
17,509,607
|
I have seen questions like this asked many, many times, but none of the answers were helpful.
I'm trying to submit data to a form on the web; I've tried requests and urllib, and neither has worked.
For example, here is code that should search for the [python] tag on SO:
```
import urllib
import urllib2
url = 'http://stackoverflow.com/'
# Prepare the data
values = {'q' : '[python]'}
data = urllib.urlencode(values)
# Send HTTP POST request
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
# Print the result
print html
```
Yet when I run it, I get the HTML source of the home page.
here is an example of using requests:
```
import requests
data= {
'q': '[python]'
}
r = requests.get('http://stackoverflow.com', data=data)
print r.text
```
Same result! I don't understand why these methods aren't working. I've tried them on various sites with no success, so if anyone has successfully done this, please show me how!
Thanks so much!
|
2013/07/07
|
[
"https://Stackoverflow.com/questions/17509607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2292723/"
] |
```
import urllib
import urllib2
url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
'location' : 'Northampton',
'language' : 'Python' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
```
This makes a POST request with the data specified in `values`. We need urllib to encode the data and then urllib2 to send the request.
|
The mechanize library for Python is also great, and even lets you submit forms. You can use the following code to create a browser object and make requests.
```
import mechanize,re
br = mechanize.Browser()
br.set_handle_robots(False) # ignore robots
br.set_handle_refresh(False) # can sometimes hang without this
br.addheaders = [('User-agent', 'Firefox')]
br.open( "http://google.com" )
br.select_form( 'f' )
br.form[ 'q' ] = 'foo'
br.submit()
resp = None
for link in br.links():
siteMatch = re.compile( 'www.foofighters.com' ).search( link.url )
if siteMatch:
resp = br.follow_link( link )
break
content = resp.get_data()
print content
```
|
17,847,869
|
I have the following Python Tkinter code, which redraws the label every 10 seconds. My question: it seems to me that it is drawing the new label over and over again on top of the old label, so after a few hours there will be hundreds of overlapping drawings (at least as I understand it). Will this use more memory or cause problems?
```
import Tkinter as tk
import threading
def Draw():
frame=tk.Frame(root,width=100,height=100,relief='solid',bd=1)
frame.place(x=10,y=10)
text=tk.Label(frame,text='HELLO')
text.pack()
def Refresher():
print 'refreshing'
Draw()
threading.Timer(10, Refresher).start()
root=tk.Tk()
Refresher()
root.mainloop()
```
Here in my example I am just using a single label. I am aware that I can use `textvariable` to update the text of the label, or even `text.config`. But what I am actually doing is refreshing a grid of labels (like a table), plus buttons and other things, to match the latest data available.
From my beginner understanding, if I wrote this `Draw()` function as a class, I could destroy the `frame` using `frame.destroy` whenever I execute the `Refresher` function. But the code I currently have is written as functions, without a class (and I don't wish to rewrite the whole thing as a class).
The other option I can think of is to declare `frame` in `Draw()` as global and use `frame.destroy()` (which I am reluctant to do, as this could cause name conflicts if I have too many frames, which I do).
If overdrawing over the old drawing doesn't cause any problem (other than that I can't see the old drawing), I can live with that.
These are all just my beginner thoughts. Should I destroy the frame before redrawing the updated one? If so, how should I destroy it given the structure of my sample code? Or is overdrawing the old label fine?
**EDIT**
Someone mentioned that Python's tkinter is not thread safe and that my code will likely fail randomly.
I actually took [this link](https://stackoverflow.com/questions/2223157/how-to-execute-a-function-asynchronously-every-60-seconds-in-python) as a reference for using `threading` as my solution, and I didn't find anything about thread safety in that post.
I am wondering in which cases I generally should not use `threading`, and in which cases I could.
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17847869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180836/"
] |
The correct way to run a function or update a label in tkinter is to use the [after](http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.after-method) method. This puts an event on the event queue to be executed at some time in the future. If you have a function that does some work, then puts itself back on the event queue, you have created something that will run forever.
Here's a quick example based on your example:
```
import Tkinter as tk
import time
def Draw():
global text
frame=tk.Frame(root,width=100,height=100,relief='solid',bd=1)
frame.place(x=10,y=10)
text=tk.Label(frame,text='HELLO')
text.pack()
def Refresher():
global text
text.configure(text=time.asctime())
root.after(1000, Refresher) # every second...
root=tk.Tk()
Draw()
Refresher()
root.mainloop()
```
There are a lot of things I would change about that program from a coding style point of view, but I wanted to keep it as close to your original question as possible. The point is, you can use `after` to call a function that updates the label without having to create new labels. Plus, that function can arrange for itself to be called again at some interval. In this example I picked one second just so that the effect is easier to see.
You also asked "I am wondering what are the general cases that i should not use threading and what are the general cases i could use threading?"
To put a blunt point on it: you should never use threading if you have to ask a question about when to use threading. Threading is an advanced technique; it is complicated, and it is easy to get wrong. It's quite possible for threading to make your program slower rather than faster. It has subtle consequences, such as your program failing mysteriously if you do things that aren't thread safe.
To be more specific to your situation: you should avoid using threads with tkinter. You *can* use them, but you can't access widgets from these other threads. If you need to do something with a widget, you must put an instruction in a thread-safe queue, and then in the main thread you need to periodically check that queue to see if there's an instruction to be processed. There are examples of that on this website if you search for them.
If all that sounds complicated, it is. For most of the GUIs you write, you won't need to worry about that. If you can take large processes and break them down into chunks that execute in 100 ms or less, you only need `after`, and never need threads.
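The thread-safe-queue pattern described above can be sketched roughly as follows. Here `check_queue` stands in for the function you would schedule with `root.after(100, ...)`, and the callback stands in for a widget update such as `label.configure`; all names are illustrative, not from the original answer:

```python
import queue
import threading
import time

q = queue.Queue()

def worker():
    # Background thread: do the slow work, then hand the result to the queue.
    # Never touch a tkinter widget from here.
    time.sleep(0.05)          # stand-in for a long computation
    q.put('task finished')

def check_queue(apply_result):
    # In a real GUI this runs in the main thread via root.after(100, ...),
    # so it is safe to update widgets from inside it.
    try:
        msg = q.get_nowait()
    except queue.Empty:
        return False
    apply_result(msg)          # e.g. label.configure(text=msg)
    return True

threading.Thread(target=worker).start()
results = []
while not check_queue(results.append):   # tkinter would use after() instead of a loop
    time.sleep(0.01)
print(results)  # ['task finished']
```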
|
To allow the cleanup with minimal code changes, you could pass previous frames explicitly:
```
import Tkinter as tk
def Draw(oldframe=None):
frame = tk.Frame(root,width=100,height=100,relief='solid',bd=1)
frame.place(x=10,y=10)
tk.Label(frame, text='HELLO').pack()
frame.pack()
if oldframe is not None:
oldframe.destroy() # cleanup
return frame
def Refresher(frame=None):
print 'refreshing'
frame = Draw(frame)
frame.after(10000, Refresher, frame) # refresh in 10 seconds
root = tk.Tk()
Refresher()
root.mainloop()
```
|
8,008,829
|
I am working on a project in Python in which I need to extract only a subfolder of a tar archive, not all the files.
I tried to use
```
tar = tarfile.open(tarfile)
tar.extract("dirname", targetdir)
```
But this does not work: it does not extract the given subdirectory, and no exception is thrown. I am a beginner in Python.
Also, if the above call doesn't work for directories, what's the difference between this command and `tar.extractfile()`?
|
2011/11/04
|
[
"https://Stackoverflow.com/questions/8008829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/517272/"
] |
Building on the second example from the [tarfile module documentation](https://docs.python.org/3.5/library/tarfile.html#examples), you could extract the contained sub-folder and all of its contents with something like this:
```
with tarfile.open("sample.tar") as tar:
subdir_and_files = [
tarinfo for tarinfo in tar.getmembers()
if tarinfo.name.startswith("subfolder/")
]
tar.extractall(members=subdir_and_files)
```
This creates a list of the subfolder and its contents, and then uses the recommended `extractall()` method to extract just them. Of course, replace `"subfolder/"` with the actual path (relative to the root of the tar file) of the sub-folder you want to extract.
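To see the member filtering in action without touching the filesystem, here is a self-contained sketch that builds a small archive in memory; the file names are made up for illustration:

```python
import io
import tarfile

# Build a tiny tar archive in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    for name in ('subfolder/a.txt', 'subfolder/b.txt', 'other/c.txt'):
        data = name.encode('utf-8')
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Select only the members under subfolder/.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    wanted = [m.name for m in tar.getmembers()
              if m.name.startswith('subfolder/')]

print(wanted)  # ['subfolder/a.txt', 'subfolder/b.txt']
```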
|
The other answer will retain the subfolder path, meaning that `subfolder/a/b` will be extracted to `./subfolder/a/b`. To extract a subfolder to the root, so `subfolder/a/b` would be extracted to `./a/b`, you can rewrite the paths with something like this:
```
def members(tf):
l = len("subfolder/")
for member in tf.getmembers():
if member.path.startswith("subfolder/"):
member.path = member.path[l:]
yield member
with tarfile.open("sample.tar") as tar:
tar.extractall(members=members(tar))
```
|
31,767,292
|
I may be way off here - but would appreciate insight on just how far ..
In the following `getFiles` method, we have an anonymous function being passed as an argument.
```
def getFiles(baseDir: String, filter: (File, String) => Boolean ) = {
val ffilter = new FilenameFilter {
// How to assign to the anonymous function argument 'filter' ?
override def accept(dir: File, name: String): Boolean = filter
}
..
```
So that `override` is quite incorrect: that syntax tries to `evaluate` the filter() function which results in a Boolean.
Naturally we could simply evaluate the anonymous function as follows:
```
override def accept(dir: File, name: String): Boolean = filter(dir, name)
```
But that approach does not actually `replace` the method.
So: how to assign the `accept` method to the `filter` anonymous function?
**Update** the error message is
```
Error:(56, 64) type mismatch;
found : (java.io.File, String) => Boolean
required: Boolean
override def accept(dir: File, name: String): Boolean = filter // { filter(dir, name) }
```
**Another update** Thinking more on this, I'm going to take a guess: dynamic languages like Python and Ruby can handle assignment of class methods to arbitrary functions, but Scala requires compilation, and thus the methods are effectively static. A definitive answer on this hunch would be appreciated.
|
2015/08/01
|
[
"https://Stackoverflow.com/questions/31767292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1056563/"
] |
There is no easy or type-safe way (that I know of) to assign a function to a method, as they are different types. In Python or JavaScript you could do something like this:
```
var fnf = new FilenameFilter();
fnf.accepts = filter;
```
But in Scala you have to do the delegation:
```
val fnf = new FilenameFilter {
override def accept(dir: File, name: String): Boolean = filter(dir, name)
}
```
|
I think you are actually misunderstanding the meaning of the override keyword.
It is for redefining methods defined in the superclass of a class, not for redefining a method on an instance of a class.
If you want to redefine the accept method for instances of FilenameFilter, you will need to pass the filter function into the constructor of the class, like:
```
class FilenameFilter(val filter: (File, String) => Boolean) {
def accept(dir: File, name: String): Boolean = filter(dir, name)
}
```
|
35,201,647
|
How to use hook in bottle?
<https://pypi.python.org/pypi/bottle-session/0.4>
I am trying to implement session plug in with bottle hook.
```
@bottle.route('/loginpage')
def loginpage():
return '''
<form action="/login" method="post">
Username: <input name="username" type="text" />
Password: <input name="password" type="password" />
<input value="Login" type="submit" />
</form>
'''
@bottle.route('/login', method='POST')
def do_login(session, rdb):
username = request.forms.get('username')
password = request.forms.get('password')
session['name'] = username
return session['name']
@bottle.route('/usernot')
def nextPage(session):
return session['name']
```
and below is my hook:
```
@hook('before_request')
def setup_request():
try:
request.session = session['name']
request.session.cookie_expires = False
response.set_cookie(session['name'], "true")
#request.session.save()
except Exception, e:
print ('setup_request--> ', e)
```
I am unable to access the session in the hook. Is it possible to pass the session as a parameter to the hook?
|
2016/02/04
|
[
"https://Stackoverflow.com/questions/35201647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3282945/"
] |
Try this.
Your **initializers/carrierwave.rb** should look like this:
```
CarrierWave.configure do |config|
config.fog_credentials = {
:provider => 'AWS', # required
:aws_access_key_id => 'xxx', # required
:aws_secret_access_key => 'yyy', # required
:region => 'eu-west-1', # optional, defaults to 'us-east-1'
:host => 's3.example.com', # optional, defaults to nil
:endpoint => 'https://s3.example.com:8080' # optional, defaults to nil
}
config.fog_directory = 'name_of_directory' # required
config.fog_public = false # optional, defaults to true
config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
```
**In your uploader, set the storage to :fog**
```
class AvatarUploader < CarrierWave::Uploader::Base
storage :fog
end
```
|
`CarrierWave::Uploader::Base:Class#fog_provider=` is not released yet. It is only available on the CarrierWave `master` branch.
**Solution 1 (Use master):**
Change your `Gemfile` entry to
```
gem "carrierwave", git: "git@github.com:carrierwaveuploader/carrierwave.git"
```
But this is not recommended since it is not as stable as a release version.
---
**Solution 2 (Check 0.10 documentation):**
In `0.10` you set the `:provider` using `fog_credentials=`
```
CarrierWave.configure do |config|
config.fog_credentials = {
:provider => 'AWS', # required
:aws_access_key_id => 'xxx', # required
:aws_secret_access_key => 'yyy', # required
:region => 'eu-west-1', # optional, defaults to 'us-east-1'
:host => 's3.example.com', # optional, defaults to nil
:endpoint => 'https://s3.example.com:8080' # optional, defaults to nil
}
config.fog_directory = 'name_of_directory' # required
config.fog_public = false # optional, defaults to true
config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
```
View documentation for [v0.10 here](https://github.com/carrierwaveuploader/carrierwave/tree/v0.10.0).
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong in concatenating *two* strings with `+`. Indeed it's easier to read than `''.join([a, b])`.
You are right, though, that concatenating more than 2 strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with the loop itself: even `a + b + c + ...` is O(n^2), because each concatenation produces a new string.
CPython 2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than 2 strings.
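A quick way to check this yourself is `timeit`; the sketch below compares repeated `+` against a single `join` on the same data (timings vary by interpreter and version, so treat the numbers as illustrative rather than definitive):

```python
import timeit

setup = "chunks = ['x'] * 1000"

# Repeated concatenation: a new string per iteration (quadratic in general).
stmt_plus = (
    "s = ''\n"
    "for c in chunks:\n"
    "    s = s + c"
)

# Single join: one pass over the chunks (linear).
stmt_join = "s = ''.join(chunks)"

t_plus = timeit.timeit(stmt_plus, setup=setup, number=200)
t_join = timeit.timeit(stmt_join, setup=setup, number=200)
print(f"plus: {t_plus:.4f}s  join: {t_join:.4f}s")
```

On implementations without CPython's in-place concatenation optimization, the gap grows sharply with the number of chunks.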
|
The assumption that one should never, ever use `+` for string concatenation, and instead always use `''.join`, may be a myth. It is true that using `+` creates unnecessary temporary copies of the immutable string object, but the other, not-often-quoted fact is that calling `join` in a loop generally adds the overhead of a function call. Let's take your example.
Create two lists: one from the linked SO question, and another, bigger, fabricated one
```
>>> myl1 = ['A','B','C','D','E','F']
>>> myl2=[chr(random.randint(65,90)) for i in range(0,10000)]
```
Let's create two functions, `UsePlus` and `UseJoin`, which use `+` and `join` respectively.
```
>>> def UsePlus():
return [myl[i] + myl[i + 1] for i in range(0,len(myl), 2)]
>>> def UseJoin():
[''.join((myl[i],myl[i + 1])) for i in range(0,len(myl), 2)]
```
Let's run timeit with the first list:
```
>>> myl=myl1
>>> t1=timeit.Timer("UsePlus()","from __main__ import UsePlus")
>>> t2=timeit.Timer("UseJoin()","from __main__ import UseJoin")
>>> print "%.2f usec/pass" % (1000000 * t1.timeit(number=100000)/100000)
2.48 usec/pass
>>> print "%.2f usec/pass" % (1000000 * t2.timeit(number=100000)/100000)
2.61 usec/pass
>>>
```
They have almost the same runtime.
Let's use cProfile with the bigger list:
```
>>> myl=myl2
>>> cProfile.run("UsePlus()")
5 function calls in 0.001 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 <pyshell#1376>:1(UsePlus)
1 0.000 0.000 0.001 0.001 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
>>> cProfile.run("UseJoin()")
5005 function calls in 0.029 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.015 0.015 0.029 0.029 <pyshell#1388>:1(UseJoin)
1 0.000 0.000 0.029 0.029 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000 0.014 0.000 0.014 0.000 {method 'join' of 'str' objects}
1 0.000 0.000 0.000 0.000 {range}
```
It looks like using `join` results in thousands of extra function calls, which adds overhead.
Now, coming back to the question: should one discourage the use of `+` in favor of `join` in all cases?
I believe not; two things should be taken into consideration:
1. The length of the strings in question
2. The number of concatenation operations
And of course, premature optimization is evil.
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
When working with multiple people, it's sometimes difficult to know exactly what's happening. Using a format string instead of concatenation can avoid one particular annoyance that's happened a whole ton of times to us:
Say, a function requires an argument, and you write it expecting to get a string:
```
In [1]: def foo(zeta):
...: print 'bar: ' + zeta
In [2]: foo('bang')
bar: bang
```
So, this function may be used pretty often throughout the code. Your coworkers may know exactly what it does, but not necessarily be fully up-to-speed on the internals, and may not know that the function expects a string. And so they may end up with this:
```
In [3]: foo(23)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
/home/izkata/<ipython console> in foo(zeta)
TypeError: cannot concatenate 'str' and 'int' objects
```
There would be no problem if you just used a format string:
```
In [1]: def foo(zeta):
...: print 'bar: %s' % zeta
...:
...:
In [2]: foo('bang')
bar: bang
In [3]: foo(23)
bar: 23
```
The same is true for all types of objects that define `__str__`, which may be passed in as well:
```
In [1]: from datetime import date
In [2]: zeta = date(2012, 4, 15)
In [3]: print 'bar: ' + zeta
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
TypeError: cannot concatenate 'str' and 'datetime.date' objects
In [4]: print 'bar: %s' % zeta
bar: 2012-04-15
```
So yes: If you can use a format string *do it* and take advantage of what Python has to offer.
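As an aside (this postdates the answer above): on Python 3.6+, f-strings give the same tolerance for non-string arguments with even less ceremony, since `str()` is applied to the interpolated value automatically:

```python
from datetime import date

def foo(zeta):
    # f-strings call str() on the value, so any object with __str__ works
    return f'bar: {zeta}'

print(foo('bang'))              # bar: bang
print(foo(23))                  # bar: 23
print(foo(date(2012, 4, 15)))   # bar: 2012-04-15
```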
|
**''.join([a, b])** is a better solution than **+**, because code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and so on).
The form `a += b` or `a = a + b` is fragile even in CPython, and the optimization isn't present at all in implementations **that don't use refcounting** (*reference counting is a technique of storing the number of references, pointers, or handles to a resource such as an object, block of memory, disk space, or other resource*).
<https://www.python.org/dev/peps/pep-0008/#programming-recommendations>
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
The plus operator is a perfectly fine solution for concatenating **two** Python strings. But if you keep adding more than two strings (n > 25), you might want to think about something else.
The `''.join([a, b, c])` trick is a performance optimization.
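A hedged sketch of the two spellings for a fixed, small number of strings (either is fine here; the variable names are illustrative):

```python
a, b, c = 'foo', 'bar', 'baz'

s_plus = a + b + c           # readable, fine for a handful of strings
s_join = ''.join([a, b, c])  # the join spelling of the same thing

assert s_plus == s_join == 'foobarbaz'
```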
|
The assumption that one should never, ever use `+` for string concatenation, but instead always use `''.join`, may be a myth. It is true that using `+` creates unnecessary temporary copies of an immutable string object, but the other, not-oft-quoted fact is that calling `join` in a loop generally adds the overhead of a function call. Let's take your example.
Create two lists: one from the linked SO question and another, bigger, fabricated one.
```
>>> myl1 = ['A','B','C','D','E','F']
>>> myl2=[chr(random.randint(65,90)) for i in range(0,10000)]
```
Let's create two functions, `UseJoin` and `UsePlus`, to exercise the respective `join` and `+` functionality.
```
>>> def UsePlus():
return [myl[i] + myl[i + 1] for i in range(0,len(myl), 2)]
>>> def UseJoin():
        return [''.join((myl[i],myl[i + 1])) for i in range(0,len(myl), 2)]
```
Let's run timeit with the first list:
```
>>> myl=myl1
>>> t1=timeit.Timer("UsePlus()","from __main__ import UsePlus")
>>> t2=timeit.Timer("UseJoin()","from __main__ import UseJoin")
>>> print "%.2f usec/pass" % (1000000 * t1.timeit(number=100000)/100000)
2.48 usec/pass
>>> print "%.2f usec/pass" % (1000000 * t2.timeit(number=100000)/100000)
2.61 usec/pass
>>>
```
They have almost the same runtime.
Let's use cProfile:
```
>>> myl=myl2
>>> cProfile.run("UsePlus()")
5 function calls in 0.001 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 <pyshell#1376>:1(UsePlus)
1 0.000 0.000 0.001 0.001 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
>>> cProfile.run("UseJoin()")
5005 function calls in 0.029 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.015 0.015 0.029 0.029 <pyshell#1388>:1(UseJoin)
1 0.000 0.000 0.029 0.029 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000 0.014 0.000 0.014 0.000 {method 'join' of 'str' objects}
1 0.000 0.000 0.000 0.000 {range}
```
And it looks like using `join` results in unnecessary function calls, which could add to the overhead.
Now coming back to the question: should one discourage the use of `+` over `join` in all cases?
I believe not; these things should be taken into consideration:
1. The length of the strings in question
2. The number of concatenation operations
And of course, in development, premature optimization is evil.
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong with concatenating *two* strings with `+`. Indeed, it's easier to read than `''.join([a, b])`.
You are right, though, that concatenating more than two strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with using a loop: even `a + b + c + ...` is O(n^2), the reason being that each concatenation produces a new string.
CPython 2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than two strings.
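A small sketch of the two-string case, assuming only that both spellings produce the same result; for exactly two strings both are linear, so any timing difference mostly reflects call overhead and varies by machine:

```python
import timeit

a, b = 'hello, ', 'world'

# Both spellings give the same string for two operands.
assert a + b == ''.join((a, b)) == 'hello, world'

# Rough timings; absolute numbers depend on interpreter and machine.
t_plus = timeit.timeit(lambda: a + b, number=100000)
t_join = timeit.timeit(lambda: ''.join((a, b)), number=100000)
print('+   :', t_plus)
print('join:', t_join)
```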
|
I use the following with Python 3.8:
```py
string4 = f'{string1}{string2}{string3}'
```
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong with concatenating *two* strings with `+`. Indeed, it's easier to read than `''.join([a, b])`.
You are right, though, that concatenating more than two strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with using a loop: even `a + b + c + ...` is O(n^2), the reason being that each concatenation produces a new string.
CPython 2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than two strings.
|
The plus operator is a perfectly fine solution for concatenating **two** Python strings. But if you keep adding more than two strings (n > 25), you might want to think about something else.
The `''.join([a, b, c])` trick is a performance optimization.
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
The plus operator is a perfectly fine solution for concatenating **two** Python strings. But if you keep adding more than two strings (n > 25), you might want to think about something else.
The `''.join([a, b, c])` trick is a performance optimization.
|
According to the Python docs, using `str.join()` will give you performance consistency across various implementations of Python. Although CPython optimizes away the quadratic behavior of `s = s + t`, other Python implementations may not.
>
> **CPython implementation detail**: If s and t are both strings, some
> Python implementations such as CPython can usually perform an in-place
> optimization for assignments of the form s = s + t or s += t. When
> applicable, this optimization makes quadratic run-time much less
> likely. This optimization is both version and implementation
> dependent. For performance sensitive code, it is preferable to use the
> str.join() method which assures consistent linear concatenation
> performance across versions and implementations.
>
>
>
[Sequence Types in Python docs](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) (see the foot note [6])
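A minimal sketch of the two loop patterns the docs contrast; `join` is guaranteed linear, while the `+=` loop relies on the CPython-specific optimization:

```python
def concat_plus(parts):
    # May degrade to quadratic time on implementations without
    # the in-place str optimization the docs describe.
    s = ''
    for p in parts:
        s += p
    return s

def concat_join(parts):
    # Consistent linear-time concatenation across implementations.
    return ''.join(parts)

parts = [str(i) for i in range(1000)]
assert concat_plus(parts) == concat_join(parts)
```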
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
The assumption that one should never, ever use `+` for string concatenation, but instead always use `''.join`, may be a myth. It is true that using `+` creates unnecessary temporary copies of an immutable string object, but the other, not-oft-quoted fact is that calling `join` in a loop generally adds the overhead of a function call. Let's take your example.
Create two lists: one from the linked SO question and another, bigger, fabricated one.
```
>>> myl1 = ['A','B','C','D','E','F']
>>> myl2=[chr(random.randint(65,90)) for i in range(0,10000)]
```
Let's create two functions, `UseJoin` and `UsePlus`, to exercise the respective `join` and `+` functionality.
```
>>> def UsePlus():
return [myl[i] + myl[i + 1] for i in range(0,len(myl), 2)]
>>> def UseJoin():
        return [''.join((myl[i],myl[i + 1])) for i in range(0,len(myl), 2)]
```
Let's run timeit with the first list:
```
>>> myl=myl1
>>> t1=timeit.Timer("UsePlus()","from __main__ import UsePlus")
>>> t2=timeit.Timer("UseJoin()","from __main__ import UseJoin")
>>> print "%.2f usec/pass" % (1000000 * t1.timeit(number=100000)/100000)
2.48 usec/pass
>>> print "%.2f usec/pass" % (1000000 * t2.timeit(number=100000)/100000)
2.61 usec/pass
>>>
```
They have almost the same runtime.
Let's use cProfile:
```
>>> myl=myl2
>>> cProfile.run("UsePlus()")
5 function calls in 0.001 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 <pyshell#1376>:1(UsePlus)
1 0.000 0.000 0.001 0.001 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
>>> cProfile.run("UseJoin()")
5005 function calls in 0.029 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.015 0.015 0.029 0.029 <pyshell#1388>:1(UseJoin)
1 0.000 0.000 0.029 0.029 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000 0.014 0.000 0.014 0.000 {method 'join' of 'str' objects}
1 0.000 0.000 0.000 0.000 {range}
```
And it looks like using `join` results in unnecessary function calls, which could add to the overhead.
Now coming back to the question: should one discourage the use of `+` over `join` in all cases?
I believe not; these things should be taken into consideration:
1. The length of the strings in question
2. The number of concatenation operations
And of course, in development, premature optimization is evil.
|
**''.join([a, b])** is a better solution than **+**,
because code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such).
The form `a += b` or `a = a + b` is fragile even in CPython, and the in-place optimization isn't present at all in implementations **that don't use refcounting** *(reference counting is a technique of storing the number of references, pointers, or handles to a resource such as an object, block of memory, disk space, or another resource)*.
<https://www.python.org/dev/peps/pep-0008/#programming-recommendations>
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong with concatenating *two* strings with `+`. Indeed, it's easier to read than `''.join([a, b])`.
You are right, though, that concatenating more than two strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with using a loop: even `a + b + c + ...` is O(n^2), the reason being that each concatenation produces a new string.
CPython 2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than two strings.
|
When working with multiple people, it's sometimes difficult to know exactly what's happening. Using a format string instead of concatenation can avoid one particular annoyance that's happened a whole ton of times to us:
Say, a function requires an argument, and you write it expecting to get a string:
```
In [1]: def foo(zeta):
...: print 'bar: ' + zeta
In [2]: foo('bang')
bar: bang
```
So, this function may be used pretty often throughout the code. Your coworkers may know exactly what it does, but not necessarily be fully up-to-speed on the internals, and may not know that the function expects a string. And so they may end up with this:
```
In [3]: foo(23)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
/home/izkata/<ipython console> in foo(zeta)
TypeError: cannot concatenate 'str' and 'int' objects
```
There would be no problem if you just used a format string:
```
In [1]: def foo(zeta):
...: print 'bar: %s' % zeta
...:
...:
In [2]: foo('bang')
bar: bang
In [3]: foo(23)
bar: 23
```
The same is true for all types of objects that define `__str__`, which may be passed in as well:
```
In [1]: from datetime import date
In [2]: zeta = date(2012, 4, 15)
In [3]: print 'bar: ' + zeta
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
TypeError: cannot concatenate 'str' and 'datetime.date' objects
In [4]: print 'bar: %s' % zeta
bar: 2012-04-15
```
So yes: If you can use a format string *do it* and take advantage of what Python has to offer.
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong with concatenating *two* strings with `+`. Indeed, it's easier to read than `''.join([a, b])`.
You are right, though, that concatenating more than two strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with using a loop: even `a + b + c + ...` is O(n^2), the reason being that each concatenation produces a new string.
CPython 2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than two strings.
|
**''.join([a, b])** is a better solution than **+**,
because code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such).
The form `a += b` or `a = a + b` is fragile even in CPython, and the in-place optimization isn't present at all in implementations **that don't use refcounting** *(reference counting is a technique of storing the number of references, pointers, or handles to a resource such as an object, block of memory, disk space, or another resource)*.
<https://www.python.org/dev/peps/pep-0008/#programming-recommendations>
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
When working with multiple people, it's sometimes difficult to know exactly what's happening. Using a format string instead of concatenation can avoid one particular annoyance that's happened a whole ton of times to us:
Say, a function requires an argument, and you write it expecting to get a string:
```
In [1]: def foo(zeta):
...: print 'bar: ' + zeta
In [2]: foo('bang')
bar: bang
```
So, this function may be used pretty often throughout the code. Your coworkers may know exactly what it does, but not necessarily be fully up-to-speed on the internals, and may not know that the function expects a string. And so they may end up with this:
```
In [3]: foo(23)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
/home/izkata/<ipython console> in foo(zeta)
TypeError: cannot concatenate 'str' and 'int' objects
```
There would be no problem if you just used a format string:
```
In [1]: def foo(zeta):
...: print 'bar: %s' % zeta
...:
...:
In [2]: foo('bang')
bar: bang
In [3]: foo(23)
bar: 23
```
The same is true for all types of objects that define `__str__`, which may be passed in as well:
```
In [1]: from datetime import date
In [2]: zeta = date(2012, 4, 15)
In [3]: print 'bar: ' + zeta
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/izkata/<ipython console> in <module>()
TypeError: cannot concatenate 'str' and 'datetime.date' objects
In [4]: print 'bar: %s' % zeta
bar: 2012-04-15
```
So yes: If you can use a format string *do it* and take advantage of what Python has to offer.
|
According to the Python docs, using `str.join()` will give you performance consistency across various implementations of Python. Although CPython optimizes away the quadratic behavior of `s = s + t`, other Python implementations may not.
>
> **CPython implementation detail**: If s and t are both strings, some
> Python implementations such as CPython can usually perform an in-place
> optimization for assignments of the form s = s + t or s += t. When
> applicable, this optimization makes quadratic run-time much less
> likely. This optimization is both version and implementation
> dependent. For performance sensitive code, it is preferable to use the
> str.join() method which assures consistent linear concatenation
> performance across versions and implementations.
>
>
>
[Sequence Types in Python docs](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) (see the foot note [6])
|
11,566,355
|
Why does it execute the script every 2 minutes?
Shouldn't it execute every 10 minutes?
```
*\10 * * * * /usr/bin/python /var/www/py/LSFchecker.py
```
|
2012/07/19
|
[
"https://Stackoverflow.com/questions/11566355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1131053/"
] |
```
*\10 * * * *
```
Should probably be
```
*/10 * * * *
```
|
You can try:
```
00,10,20,30,40,50 * * * * /usr/bin/python /var/www/py/LSFchecker.py
```
|
39,003,106
|
Comrades,
I'd like to capture images from the laptop camera in Python. Currently all signs point to OpenCV. Problem is OpenCV is a nightmare to install, and it's a nightmare that reoccurs every time you reinstall your code on a new system.
Is there a more lightweight way to capture camera data in Python? I'm looking for something to the effect of:
```
$ pip install something
$ python
>>> import something
>>> im = something.get_camera_image()
```
I'm on Mac, but solutions on any platform are welcome.
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/39003106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/851699/"
] |
I've done this before using [pygame](http://www.pygame.org/download.shtml). But I can't seem to find the script that I used... It seems there is a tutorial [here](http://www.pygame.org/docs/tut/camera/CameraIntro.html) which uses the native camera module and is not dependent on OpenCV.
Try this:
```
import pygame
import pygame.camera
from pygame.locals import *
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/path/to/camera",(640,480))
cam.start()
image = cam.get_image()
```
If you don't know the path to the camera, you can also get a list of all available cameras, which should include your laptop webcam:
```
camlist = pygame.camera.list_cameras()
if camlist:
cam = pygame.camera.Camera(camlist[0],(640,480))
```
|
Videocapture for Windows
========================
I've used [videocapture](http://videocapture.sourceforge.net/) in the past and it was perfect:
```
from VideoCapture import Device
cam = Device()
cam.saveSnapshot('image.jpg')
```
|
39,003,106
|
Comrades,
I'd like to capture images from the laptop camera in Python. Currently all signs point to OpenCV. Problem is OpenCV is a nightmare to install, and it's a nightmare that reoccurs every time you reinstall your code on a new system.
Is there a more lightweight way to capture camera data in Python? I'm looking for something to the effect of:
```
$ pip install something
$ python
>>> import something
>>> im = something.get_camera_image()
```
I'm on Mac, but solutions on any platform are welcome.
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/39003106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/851699/"
] |
As [the documentation states](https://www.pygame.org/docs/ref/camera.html), `pygame.camera` currently only supports Linux and v4l2 cameras, so this would not work on macOS. On Linux, however, it works fine.
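A hedged sketch of dispatching on platform before touching the camera library; the backend names returned here are illustrative labels, not real module APIs:

```python
import sys

def pick_camera_backend():
    # pygame.camera is documented as Linux/v4l2 only, so check the
    # platform before attempting to use it (labels are illustrative).
    if sys.platform.startswith('linux'):
        return 'pygame.camera'
    if sys.platform == 'darwin':
        return 'avfoundation'  # e.g. via a macOS-specific binding
    return 'unsupported'

print(pick_camera_backend())
```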
|
Videocapture for Windows
========================
I've used [videocapture](http://videocapture.sourceforge.net/) in the past and it was perfect:
```
from VideoCapture import Device
cam = Device()
cam.saveSnapshot('image.jpg')
```
|
39,003,106
|
Comrades,
I'd like to capture images from the laptop camera in Python. Currently all signs point to OpenCV. Problem is OpenCV is a nightmare to install, and it's a nightmare that reoccurs every time you reinstall your code on a new system.
Is there a more lightweight way to capture camera data in Python? I'm looking for something to the effect of:
```
$ pip install something
$ python
>>> import something
>>> im = something.get_camera_image()
```
I'm on Mac, but solutions on any platform are welcome.
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/39003106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/851699/"
] |
You can use pyavfcam ([GitHub](https://github.com/dashesy/pyavfcam)). You will need to install Xcode and Cython first. This is the official demo.
```
import pyavfcam
# Open the default video source
cam = pyavfcam.AVFCam(sinks='image')
cam.snap_picture('test.jpg')
print "Saved test.jpg (Size: " + str(cam.shape[0]) + " x " + str(cam.shape[1]) + ")"
```
|
Videocapture for Windows
========================
I've used [videocapture](http://videocapture.sourceforge.net/) in the past and it was perfect:
```
from VideoCapture import Device
cam = Device()
cam.saveSnapshot('image.jpg')
```
|
32,231,589
|
*Question:* I am trying to debug a function in a `class` to make sure a list is implemented and if not, I want an error to be thrown up. I am new to programming and am trying to figure how to effectively add an `assert` for the debugging purposes mentioned above.
In the method body of `class BaseAPI()` I have to iterate over the list and call `retrieve_category` for each item in ConvenienceAPI(BaseAPI), passing the new list on your `super()`. I think that I am doing something wrong because I get the error: `TypeError: create_assessment() got an unexpected keyword argument 'category'`
*Question 2*
I also want `category` to have a list to hold multiple categories (strings). This is not happening right now. The list in the method is not being iterated or created. I am unsure... see the traceback error below.
*CODE*
instantiating a list of `categories`:
=====================================
```
api = BaseAPI()
api.create_assessment('Situation Awareness', 'Developing better skills', 'baseball', 'this is a video','selecting and communicating options', category=['Leadership', 'Decision-Making', 'Situation Awareness', 'Teamwork and Communication'])
```
BaseAPI() method where `assert` and `create` lives
==================================================
```
class BaseAPI(object):
def create_assessment(self, name, text, user, video, element, category):
category = [] # Trying to check if category isn't a list
assert category, 'Category is not a list'
new_assessment = Assessment(name, text, user, video, element, category)
self.session.add(new_assessment)
self.session.commit()
```
NEW ADD TO iterate through list but still gives `Error`!
========================================================
```
class ConvenienceAPI(BaseAPI):
def create_assessment(self, name, text, username, videoname, element_text, category_name):
user = self.retrieve_user(username)
video = self.retrieve_video(videoname)
element = self.retrieve_element(element_text)
categories = self.retrieve_category(category_name)
for items in categories: # < ==== added
return categories # < ==== added
return super(ConvenienceAPI, self).create_assessment(name, text, user, video, element, categories)
```
Table
=====
```
class Assessment(Base):
__tablename__ = 'assessments'
#some code
category_id = Column(Integer, ForeignKey('categories.category_id'))
category = relationship('AssessmentCategoryLink', backref='assessments')
def __init__(self, name, text, user, video, element, category): # OBJECTS !!
#some code
self.category = []
```
TracebackError
==============
```
ERROR: db.test.test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/Users/ack/code/venv/NotssDB/notssdb/test/test.py", line 69, in test1
api.create_assessment('Situation Awareness', 'Developing better skills', 'baseball', 'this is a video','selecting and communicating options', category=['Leadership', 'Decision-Making', 'Situation Awareness', 'Teamwork and Communication'])
TypeError: create_assessment() got an unexpected keyword argument 'category'
```
|
2015/08/26
|
[
"https://Stackoverflow.com/questions/32231589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5140076/"
] |
I am going to go for my Self Learner badge, here, and answer my own question.
I did about 8 hours of research on this and came up with a great solution. Three things had to be done.
1. Set the HTML5 video src to something other than the original URL. This will trigger the player to close the open socket connection.
2. Set the HTML5 video src to a Base64 encoded data object. This will prevent the Error Code 4 issue MEDIA\_ERR\_SRC\_NOT\_SUPPORTED.
3. For issues in older versions of Firefox, also trigger the .load() event.
My Code:
```
// Handling Bootstrap Modal Window Close Event
// Trigger player destroy
$("#xr-interaction-detail-modal").on("hidden.bs.modal", function () {
var player = xr.ui.mediaelement.xrPlayer();
if (player) {
player.pause();
        $("video,audio").attr("src","data:video/mp4;base64,AAAAHG...MTAw");
player.load();
player.remove();
}
});
```
I came up with the idea to load the data object as the src. However, I'd like to thank kud on GitHub for the super small base64 video.
<https://github.com/kud/blank-video>
|
Added a line between pause() and remove():
```
// Handling Bootstrap Modal Window Close Event
// Trigger player destroy
$("#xr-interaction-detail-modal").on("hidden.bs.modal", function () {
var player = xr.ui.mediaelement.xrPlayer();
if (player) {
player.pause();
        $("video,audio").attr("src", "");
player.remove();
}
});
```
|
44,472,252
|
I am a complete beginner in Python, so my question could be quite naive.
I just started to study this language principally for mathematical tools such as Numpy and Matplotlib, which seem to be very useful.
In fact I don't see how Python works in fields other than maths.
I wonder if it is possible (and if yes, how?) to use Python for problems such as text file processing.
And more precisely, is it possible to solve such a problem:
I have two files, A.txt and B.txt. The A.txt file contains three columns of numbers and looks like this:
```
0.22222000 0.11111000 0.00000000
0.22222000 0.44444000 0.00000000
0.22222000 0.77778000 0.00000000
0.55556000 0.11111000 0.00000000
0.55556000 0.44444000 0.00000000
.....
```
The B.txt file contains three columns of the letters F or T and looks like this:
```
F F F
F F F
F F F
F F F
T T F
......
```
The number of lines is the same in files A.txt and B.txt.
I need to create a file which would look like this:
```
0.22222000 0.11111000 0.00000000 F F F
0.22222000 0.44444000 0.00000000 F F F
0.22222000 0.77778000 0.00000000 F F F
0.55556000 0.11111000 0.00000000 F F F
0.55556000 0.44444000 0.00000000 T T F
.......
```
In other words I need to create a file containing the 3 columns of A.txt followed by the 3 columns of B.txt.
Could someone help me write the lines of Python needed for this?
I could easily do it in Fortran, but have heard that the script in Python would be much smaller.
And since I started studying the math tools in Python, I also wish to extend my knowledge to the other opportunities this language offers.
Thanks in advance
|
2017/06/10
|
[
"https://Stackoverflow.com/questions/44472252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141006/"
] |
If you want to concatenate them the good ol' fashioned way, and put them in a new file, you can do this:
```
a = open('A.txt')
b = open('B.txt')
c = open('C.txt', 'w')
for a_line, b_line in zip(a, b):
c.write(a_line.rstrip() + ' ' + b_line)
a.close()
b.close()
c.close()
```
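One caveat with `zip` here: if the two files ever differ in length, `zip` silently stops at the shorter one. A small sketch using `itertools.zip_longest` (with a made-up `'? ? ?'` placeholder for missing lines) keeps every row:

```python
from itertools import zip_longest

# zip() truncates to the shorter iterable; zip_longest pads instead.
a_lines = ['0.222 0.111 0.000', '0.555 0.444 0.000', '0.222 0.777 0.000']
b_lines = ['F F F', 'T T F']  # one line short

merged = [a + ' ' + b for a, b in zip_longest(a_lines, b_lines, fillvalue='? ? ?')]
print(merged[2])  # → 0.222 0.777 0.000 ? ? ?
```

Whether padding or truncating is right depends on the data; for these files the line counts are stated to match, so plain `zip` is fine.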
|
Try this: read the files as numpy arrays
```
import numpy as np

a = np.loadtxt('a.txt')
b = np.genfromtxt('b.txt', dtype='str')
```
In the case of `b` you need `genfromtxt` because of the string content. Then
```
np.concatenate((a, b), axis=1)
```
Finally, you will get
```
np.concatenate((a, b), axis=1)
array([['0.22222', '0.11111', '0.0', 'F', 'F', 'F'],
['0.22222', '0.44444', '0.0', 'F', 'F', 'F'],
['0.22222', '0.77778', '0.0', 'F', 'F', 'F'],
['0.55556', '0.11111', '0.0', 'F', 'F', 'F'],
['0.55556', '0.44444', '0.0', 'T', 'T', 'F']],
dtype='<U32')
```
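To write the concatenated array back out to a plain text file like the one requested, `np.savetxt` with a string format works; a minimal sketch (array contents made up to match the example):

```python
import numpy as np

# Build the combined string array from small inline samples.
a = np.array([['0.22222', '0.11111', '0.0'],
              ['0.55556', '0.44444', '0.0']])
b = np.array([['F', 'F', 'F'],
              ['T', 'T', 'F']])
c = np.concatenate((a, b), axis=1)

# fmt='%s' writes each entry as-is; the default delimiter is a space.
np.savetxt('output.txt', c, fmt='%s')

with open('output.txt') as f:
    lines = f.read().splitlines()
print(lines[0])  # → 0.22222 0.11111 0.0 F F F
```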
|
44,472,252
|
I am a complete beginner in Python, so my question could be quite naive.
I just started to study this language principally for mathematical tools such as Numpy and Matplotlib, which seem to be very useful.
In fact I don't see how Python works in fields other than maths.
I wonder if it is possible (and if yes, how?) to use Python for problems such as text file processing.
And more precisely, is it possible to solve such a problem:
I have two files, A.txt and B.txt. The A.txt file contains three columns of numbers and looks like this:
```
0.22222000 0.11111000 0.00000000
0.22222000 0.44444000 0.00000000
0.22222000 0.77778000 0.00000000
0.55556000 0.11111000 0.00000000
0.55556000 0.44444000 0.00000000
.....
```
The B.txt file contains three columns of the letters F or T and looks like this:
```
F F F
F F F
F F F
F F F
T T F
......
```
The number of lines is the same in files A.txt and B.txt.
I need to create a file which would look like this:
```
0.22222000 0.11111000 0.00000000 F F F
0.22222000 0.44444000 0.00000000 F F F
0.22222000 0.77778000 0.00000000 F F F
0.55556000 0.11111000 0.00000000 F F F
0.55556000 0.44444000 0.00000000 T T F
.......
```
In other words I need to create a file containing the 3 columns of A.txt followed by the 3 columns of B.txt.
Could someone help me write the lines of Python needed for this?
I could easily do it in Fortran, but have heard that the script in Python would be much smaller.
And since I started studying the math tools in Python, I also wish to extend my knowledge to the other opportunities this language offers.
Thanks in advance
|
2017/06/10
|
[
"https://Stackoverflow.com/questions/44472252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141006/"
] |
Of course Python can be used for text processing (possibly it is even better suited for that than for numerical jobs). The task in question, however, can be done with a single Unix command: `paste A.txt B.txt > output.txt`
And here is a Python solution without using `numpy`:
```
with open('A.txt') as a:
with open('B.txt') as b:
with open('output.txt', 'w') as c:
for line_a, line_b in zip(a, b):
c.write(line_a.rstrip() + ' ' + line_b)
```
|
Try this: read the files as numpy arrays
```
import numpy as np

a = np.loadtxt('a.txt')
b = np.genfromtxt('b.txt', dtype='str')
```
In the case of `b` you need `genfromtxt` because of the string content. Then
```
np.concatenate((a, b), axis=1)
```
Finally, you will get
```
np.concatenate((a, b), axis=1)
array([['0.22222', '0.11111', '0.0', 'F', 'F', 'F'],
['0.22222', '0.44444', '0.0', 'F', 'F', 'F'],
['0.22222', '0.77778', '0.0', 'F', 'F', 'F'],
['0.55556', '0.11111', '0.0', 'F', 'F', 'F'],
['0.55556', '0.44444', '0.0', 'T', 'T', 'F']],
dtype='<U32')
```
|
44,472,252
|
I am a complete beginner in Python, so my question could be quite naive.
I just started to study this language principally for mathematical tools such as Numpy and Matplotlib, which seem to be very useful.
In fact I don't see how Python works in fields other than maths.
I wonder if it is possible (and if yes, how?) to use Python for problems such as text file processing.
And more precisely, is it possible to solve such a problem:
I have two files, A.txt and B.txt. The A.txt file contains three columns of numbers and looks like this:
```
0.22222000 0.11111000 0.00000000
0.22222000 0.44444000 0.00000000
0.22222000 0.77778000 0.00000000
0.55556000 0.11111000 0.00000000
0.55556000 0.44444000 0.00000000
.....
```
The B.txt file contains three columns of the letters F or T and looks like this:
```
F F F
F F F
F F F
F F F
T T F
......
```
The number of lines is the same in files A.txt and B.txt.
I need to create a file which would look like this:
```
0.22222000 0.11111000 0.00000000 F F F
0.22222000 0.44444000 0.00000000 F F F
0.22222000 0.77778000 0.00000000 F F F
0.55556000 0.11111000 0.00000000 F F F
0.55556000 0.44444000 0.00000000 T T F
.......
```
In other words I need to create a file containing the 3 columns of A.txt followed by the 3 columns of B.txt.
Could someone help me write the lines of Python needed for this?
I could easily do it in Fortran, but have heard that the script in Python would be much smaller.
And since I started studying the math tools in Python, I also wish to extend my knowledge to the other opportunities this language offers.
Thanks in advance
|
2017/06/10
|
[
"https://Stackoverflow.com/questions/44472252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141006/"
] |
Of course Python can be used for text processing (possibly it is even better suited for that than for numerical jobs). The task in question, however, can be done with a single Unix command: `paste A.txt B.txt > output.txt`
And here is a Python solution without using `numpy`:
```
with open('A.txt') as a:
with open('B.txt') as b:
with open('output.txt', 'w') as c:
for line_a, line_b in zip(a, b):
c.write(line_a.rstrip() + ' ' + line_b)
```
|
If you want to concatenate them the good ol' fashioned way, and put them in a new file, you can do this:
```
a = open('A.txt')
b = open('B.txt')
c = open('C.txt', 'w')
for a_line, b_line in zip(a, b):
c.write(a_line.rstrip() + ' ' + b_line)
a.close()
b.close()
c.close()
```
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
I use Notepad++ for most of my Windows-based Python development, and for debugging I use [Winpdb](http://winpdb.org/about/). It's a cross-platform GUI-based debugger. You can actually set up a keyboard shortcut in Notepad++ to launch the debugger on your current script:
To do this go to "Run" -> "Run ..." in the menu and enter the following, making sure the path points to your winpdb\_.pyw file:
```
C:\python26\Scripts\winpdb_.pyw "$(FULL_CURRENT_PATH)"
```
Then choose "Save..." and pick a shortcut that you wish to use to launch the debugger.
PS: You can also set up a shortcut to execute your Python scripts similarly, using this string instead:
```
C:\python26\python.exe "$(FULL_CURRENT_PATH)"
```
|
[Light Table](http://lighttable.com/) was doing that for me; unfortunately it is discontinued:
>
> INLINE EVALUATION No more printing to the console in order to view your
> results. Simply evaluate your code and the results will be displayed
> inline.
>
>
>
[](https://i.stack.imgur.com/H0VlW.png)
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
I use Notepad++ for most of my Windows-based Python development, and for debugging I use [Winpdb](http://winpdb.org/about/). It's a cross-platform GUI-based debugger. You can actually set up a keyboard shortcut in Notepad++ to launch the debugger on your current script:
To do this go to "Run" -> "Run ..." in the menu and enter the following, making sure the path points to your winpdb\_.pyw file:
```
C:\python26\Scripts\winpdb_.pyw "$(FULL_CURRENT_PATH)"
```
Then choose "Save..." and pick a shortcut that you wish to use to launch the debugger.
PS: You can also set up a shortcut to execute your Python scripts similarly, using this string instead:
```
C:\python26\python.exe "$(FULL_CURRENT_PATH)"
```
|
I like [vim-ipython](https://github.com/ivanov/vim-ipython#id2). With it I can press `<ctrl>+s` to run a specific line, or several lines selected in visual mode. Take a look at this [video demo](http://pirsquared.org/vim-ipython/).
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
I use Notepad++ for most of my Windows-based Python development, and for debugging I use [Winpdb](http://winpdb.org/about/). It's a cross-platform GUI-based debugger. You can actually set up a keyboard shortcut in Notepad++ to launch the debugger on your current script:
To do this go to "Run" -> "Run ..." in the menu and enter the following, making sure the path points to your winpdb\_.pyw file:
```
C:\python26\Scripts\winpdb_.pyw "$(FULL_CURRENT_PATH)"
```
Then choose "Save..." and pick a shortcut that you wish to use to launch the debugger.
PS: You can also set up a shortcut to execute your Python scripts similarly, using this string instead:
```
C:\python26\python.exe "$(FULL_CURRENT_PATH)"
```
|
Visual Studio and PTVS: <http://www.hanselman.com/blog/OneOfMicrosoftsBestKeptSecretsPythonToolsForVisualStudioPTVS.aspx>
(There is also a REPL inside VS)
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
The upcoming RStudio 1.2 is so good that you should try writing some Python in it.
|
[PyCharm](http://www.jetbrains.com/pycharm/) from JetBrains has a very nice debugger that you can step through code with.
Django and console integration built in.
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
If you are on Windows, give [Pyscripter](http://code.google.com/p/pyscripter/) a try -- it offers comprehensive, step-through debugging, which will let you examine the state of your variables at each step of your code.
|
Visual Studio and PTVS: <http://www.hanselman.com/blog/OneOfMicrosoftsBestKeptSecretsPythonToolsForVisualStudioPTVS.aspx>
(There is also a REPL inside VS)
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
If you like R's layout, I highly recommend trying out [Spyder](https://www.spyder-ide.org/). If you are using Windows, try out Python(x,y). It is a package with a few different editors and a lot of common extra modules like scipy and numpy.
|
Visual Studio and PTVS: <http://www.hanselman.com/blog/OneOfMicrosoftsBestKeptSecretsPythonToolsForVisualStudioPTVS.aspx>
(There is also a REPL inside VS)
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
If you are on Windows, give [Pyscripter](http://code.google.com/p/pyscripter/) a try -- it offers comprehensive, step-through debugging, which will let you examine the state of your variables at each step of your code.
|
[PyCharm](http://www.jetbrains.com/pycharm/) from JetBrains has a very nice debugger that you can step through code with.
Django and console integration built in.
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
WingIDE. I've been using it successfully for over a year, and I'm very pleased with it.
|
I like [vim-ipython](https://github.com/ivanov/vim-ipython#id2). With it I can press `<ctrl>+s` to run a specific line, or several lines selected in visual mode. Take a look at this [video demo](http://pirsquared.org/vim-ipython/).
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
If you like R's layout, I highly recommend trying out [Spyder](https://www.spyder-ide.org/). If you are using Windows, try out Python(x,y). It is a package with a few different editors and a lot of common extra modules like scipy and numpy.
|
I use Notepad++ for most of my Windows-based Python development, and for debugging I use [Winpdb](http://winpdb.org/about/). It's a cross-platform GUI-based debugger. You can actually set up a keyboard shortcut in Notepad++ to launch the debugger on your current script:
To do this go to "Run" -> "Run ..." in the menu and enter the following, making sure the path points to your winpdb\_.pyw file:
```
C:\python26\Scripts\winpdb_.pyw "$(FULL_CURRENT_PATH)"
```
Then choose "Save..." and pick a shortcut that you wish to use to launch the debugger.
PS: You can also set up a shortcut to execute your Python scripts similarly, using this string instead:
```
C:\python26\python.exe "$(FULL_CURRENT_PATH)"
```
|
3,999,829
|
I wouldn't call myself a programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far: scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with the excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo), I usually try to work through my scripts line by line (or region by region) in order to understand what each step of the script is doing and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
I would plump for EMACS all round.
If you're looking for a function to run code line by line (or a region if you have one highlighted), try adding this to your .emacs (I'm using python.el and Pymacs):
```
;; send current line to *Python
(defun my-python-send-region (&optional beg end)
(interactive)
(let ((beg (cond (beg beg)
((region-active-p)
(region-beginning))
(t (line-beginning-position))))
(end (cond (end end)
((region-active-p)
(copy-marker (region-end)))
(t (line-end-position)))))
(python-shell-send-region beg end)))
(add-hook 'python-mode-hook
'(lambda()
(local-set-key [(shift return)] 'my-python-send-region)))
```
I've bound it to `[shift-Return]`. This is borrowed from [here](http://lists.gnu.org/archive/html/help-gnu-emacs/2010-08/msg00429.html). There's a similar keybinding for running `.R` files line by line [here](http://www.emacswiki.org/emacs/ESSShiftEnter). I find both handy.
|
[Light Table](http://lighttable.com/) was doing that for me; unfortunately it is discontinued:
>
> INLINE EVALUATION No more printing to the console in order to view your
> results. Simply evaluate your code and the results will be displayed
> inline.
>
>
>
[](https://i.stack.imgur.com/H0VlW.png)
|
18,149,636
|
I have a CentOS 6 server, and I'm trying to configure Apache + WSGI + Django, but I can't.
As my system has Python 2.6 and I am using Python 2.7.5, I can't install it with yum. I downloaded a tar file and tried to compile it using:
```
./configure --with-python=/usr/local/bin/python2.7
```
But it doesn't work. The system responds:
```
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
```
I don't understand where I have to use `-fPIC`. I executed:
```
./configure -fPIC --with-python=/usr/local/bin/python2.7
```
But it doesn't work.
Can anyone help me?
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18149636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623769/"
] |
**I assume** you are on a shared hosting server; **mod\_wsgi** is not supported by most shared hosting providers.
|
Try using the [nginx](http://wiki.nginx.org/Main) server; it seems to be much easier to deploy.
Here is a [good tutorial for deploying to EC2](http://www.yaconiello.com/blog/setting-aws-ec2-instance-nginx-django-uwsgi-and-mysql/#sthash.E2Gg8vGi.dpbs) but you can use parts of it to configure the server.
|
18,149,636
|
I have a CentOS 6 server, and I'm trying to configure Apache + WSGI + Django, but I can't.
As my system has Python 2.6 and I am using Python 2.7.5, I can't install it with yum. I downloaded a tar file and tried to compile it using:
```
./configure --with-python=/usr/local/bin/python2.7
```
But it doesn't work. The system responds:
```
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
```
I don't understand where I have to use `-fPIC`. I executed:
```
./configure -fPIC --with-python=/usr/local/bin/python2.7
```
But it doesn't work.
Can anyone help me?
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18149636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623769/"
] |
This is covered in the mod\_wsgi documentation.
* <http://code.google.com/p/modwsgi/wiki/InstallationIssues>
Your Python installation wasn't configured with the --enable-shared option when it was built. You cannot work around it at the time of building mod\_wsgi. Your Python installation needs to be reinstalled with the correct configure option.
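Before rebuilding, you can check whether an existing interpreter was built with `--enable-shared` via the standard-library `sysconfig` module:

```python
import sysconfig

# Py_ENABLE_SHARED is 1 for a --enable-shared build and 0 for a static one
# (the variable may be absent on some platforms, returning None).
shared = sysconfig.get_config_var('Py_ENABLE_SHARED')
print('shared build' if shared else 'static build; rebuild with --enable-shared')
```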
|
**I assume** you are on a shared hosting server; **mod\_wsgi** is not supported by most shared hosting providers.
|
18,149,636
|
I have a CentOS 6 server, and I'm trying to configure Apache + WSGI + Django, but I can't.
As my system has Python 2.6 and I am using Python 2.7.5, I can't install it with yum. I downloaded a tar file and tried to compile it using:
```
./configure --with-python=/usr/local/bin/python2.7
```
But it doesn't work. The system responds:
```
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
```
I don't understand where I have to use `-fPIC`. I executed:
```
./configure -fPIC --with-python=/usr/local/bin/python2.7
```
But it doesn't work.
Can anyone help me?
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18149636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623769/"
] |
This is covered in the mod\_wsgi documentation.
* <http://code.google.com/p/modwsgi/wiki/InstallationIssues>
Your Python installation wasn't configured with the --enable-shared option when it was built. You cannot work around it at the time of building mod\_wsgi. Your Python installation needs to be reinstalled with the correct configure option.
|
Try using the [nginx](http://wiki.nginx.org/Main) server; it seems to be much easier to deploy.
Here is a [good tutorial for deploying to EC2](http://www.yaconiello.com/blog/setting-aws-ec2-instance-nginx-django-uwsgi-and-mysql/#sthash.E2Gg8vGi.dpbs) but you can use parts of it to configure the server.
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure if I wrote the sort with the lambda correctly, or how to access the last snapshot; or maybe I can do it with the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
print(snapshot_id)
print(snapshot_date)
find_snapshots()
snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update to the question, following @roshan's answer:
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
return(list_of_snaps)
find_snapshots()
print(find_snapshots())
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
You should sort it in `reverse` order.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]
```
sorts the list by date in descending order (latest to oldest), because of `reverse=True`; without it, `sorted` orders ascending (oldest to latest). `[0]` returns the first element of the list, which is the latest snapshot.
If you want the `snap_id` of the latest snapshot, just access that key.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]['snap_id']
```
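Since only the newest element is needed, `max` with the same key avoids sorting at all. A small sketch with made-up snapshot data (not the asker's real IDs):

```python
from datetime import datetime

# Sample data standing in for find_snapshots() output (hypothetical values)
list_of_snaps = [
    {'date': datetime(2016, 12, 25, 5, 0, 17), 'snap_id': 'snap-aaa'},
    {'date': datetime(2016, 12, 24, 3, 0, 17), 'snap_id': 'snap-bbb'},
    {'date': datetime(2016, 12, 25, 3, 0, 17), 'snap_id': 'snap-ccc'},
]

# max() scans the list once (O(n)) instead of sorting it (O(n log n))
latest = max(list_of_snaps, key=lambda k: k['date'])
print(latest['snap_id'])  # prints snap-aaa, the newest snapshot
```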
|
You can append the snapshots to a list, then sort the list by date.
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
print(snapshot_id)
print(snapshot_date)
#sort snapshots by date
newlist = sorted(list_of_snaps, key=lambda k: k['date'])
```
**Hope it helps !!**
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure whether I wrote the sort with the lambda correctly, or how to access the last snapshot; or maybe I can do it in the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
print(snapshot_id)
print(snapshot_date)
find_snapshots()
snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update to the question, following @roshan's answer:
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
return(list_of_snaps)
find_snapshots()
print(find_snapshots())
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
print(snapshot_id)
print(snapshot_date)
#sort snapshots by date, newest first
newlist = sorted(list_of_snaps, key=lambda k: k['date'], reverse=True)
latest_snap_id = newlist[0]['snap_id']
#latest_snap_id now holds the ID of the most recent snapshot
print(latest_snap_id)
```
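The pattern above can be verified with plain dictionaries. Note that sorting by the `snap_id` string (as in the earlier attempt) is not chronological, while sorting by `date` with `reverse=True` is; the sample values below are made up:

```python
from datetime import datetime

# Hypothetical snapshots: the lexically smallest ID belongs to the OLDEST one
snaps = [
    {'date': datetime(2016, 12, 25), 'snap_id': 'snap-zzz'},  # newest
    {'date': datetime(2016, 12, 23), 'snap_id': 'snap-aaa'},  # oldest
]

# Sorting by the ID string picks the wrong element here
first_by_id = sorted(snaps, key=lambda k: k['snap_id'])[0]

# Sorting by date, newest first, picks the latest snapshot
latest_by_date = sorted(snaps, key=lambda k: k['date'], reverse=True)[0]

print(first_by_id['snap_id'], latest_by_date['snap_id'])
```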
|
You can append the snapshots to a list, then sort the list by date.
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
print(snapshot_id)
print(snapshot_date)
#sort snapshots by date
newlist = sorted(list_of_snaps, key=lambda k: k['date'])
```
**Hope it helps !!**
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure whether I wrote the sort with the lambda correctly, or how to access the last snapshot; or maybe I can do it in the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
print(snapshot_id)
print(snapshot_date)
find_snapshots()
snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update to the question, following @roshan's answer:
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
return(list_of_snaps)
find_snapshots()
print(find_snapshots())
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
The last line needs to reference the list the function actually builds (`list_of_snaps`) and sort by `date` rather than by `snap_id`:
```
newlist = sorted(list_of_snaps, key=lambda k: k['date'])
```
|
You can append the snapshots to a list, then sort the list by date.
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
print(snapshot_id)
print(snapshot_date)
#sort snapshots by date
newlist = sorted(list_of_snaps, key=lambda k: k['date'])
```
**Hope it helps !!**
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure whether I wrote the sort with the lambda correctly, or how to access the last snapshot; or maybe I can do it in the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
print(snapshot_id)
print(snapshot_date)
find_snapshots()
snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update to the question, following @roshan's answer:
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
return(list_of_snaps)
find_snapshots()
print(find_snapshots())
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
You should sort it in `reverse` order.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]
```
sorts the list by date in descending order (latest to oldest), because of `reverse=True`; without it, `sorted` orders ascending (oldest to latest). `[0]` returns the first element of the list, which is the latest snapshot.
If you want the `snap_id` of the latest snapshot, just access that key.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]['snap_id']
```
|
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
print(snapshot_id)
print(snapshot_date)
#sort snapshots by date, newest first
newlist = sorted(list_of_snaps, key=lambda k: k['date'], reverse=True)
latest_snap_id = newlist[0]['snap_id']
#latest_snap_id now holds the ID of the most recent snapshot
print(latest_snap_id)
```
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure whether I wrote the sort with the lambda correctly, or how to access the last snapshot; or maybe I can do it in the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
mnt_vol = "vol-xxxxx"
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
print(snapshot_id)
print(snapshot_date)
find_snapshots()
snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update to the question, following @roshan's answer:
```
def find_snapshots():
list_of_snaps = []
for snapshot in ec2client_describe_snapshots['Snapshots']:
snapshot_volume = snapshot['VolumeId']
if mnt_vol == snapshot_volume:
snapshot_date = snapshot['StartTime']
snapshot_id = snapshot['SnapshotId']
list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
return(list_of_snaps)
find_snapshots()
print(find_snapshots())
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
You should sort it in `reverse` order.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]
```
sorts the list by date in descending order (latest to oldest), because of `reverse=True`; without it, `sorted` orders ascending (oldest to latest). `[0]` returns the first element of the list, which is the latest snapshot.
If you want the `snap_id` of the latest snapshot, just access that key.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]['snap_id']
```
|
The last line needs to reference the list the function actually builds (`list_of_snaps`) and sort by `date` rather than by `snap_id`:
```
newlist = sorted(list_of_snaps, key=lambda k: k['date'])
```
|
25,985,491
|
I have a sentiment analysis task and I need to specify how much data (in my case, text) scikit-learn can handle. I have a corpus of 2,500 opinions, already tagged. I know that it's a small corpus, but my thesis advisor is asking me to argue specifically how much data scikit-learn can handle. She has her doubts about Python/scikit-learn and wants facts about how many text samples, features, and related parameters scikit-learn can handle.
|
2014/09/23
|
[
"https://Stackoverflow.com/questions/25985491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3930105/"
] |
Here are some timings for scikit-learn's document classification example on my machine (Python 2.7, NumPy 1.8.2, SciPy 0.13.3, scikit-learn 0.15.2, Intel Core i7-3540M laptop running on battery power). The dataset is twenty newsgroups; I've trimmed the output quite a bit.
```
$ python examples/document_classification_20newsgroups.py --all_categories
data loaded
11314 documents - 22.055MB (training set)
7532 documents - 13.801MB (test set)
20 categories
Extracting features from the training dataset using a sparse vectorizer
done in 2.849053s at 7.741MB/s
n_samples: 11314, n_features: 129792
Extracting features from the test dataset using the same vectorizer
done in 1.526641s at 9.040MB/s
n_samples: 7532, n_features: 129792
________________________________________________________________________________
Training:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.001, verbose=0)
train time: 5.274s
test time: 0.033s
f1-score: 0.860
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0,
fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',
loss='hinge', n_iter=50, n_jobs=1, penalty='l2', power_t=0.5,
random_state=None, shuffle=False, verbose=0, warm_start=False)
train time: 3.521s
test time: 0.038s
f1-score: 0.857
dimensionality: 129792
density: 0.390184
________________________________________________________________________________
Training:
MultinomialNB(alpha=0.01, class_prior=None, fit_prior=True)
train time: 0.161s
test time: 0.036s
f1-score: 0.836
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
BernoulliNB(alpha=0.01, binarize=0.0, class_prior=None, fit_prior=True)
train time: 0.167s
test time: 0.153s
f1-score: 0.761
dimensionality: 129792
density: 1.000000
```
Timings for dataset loading aren't shown, but it didn't take more than half a second; the input is a zipfile containing texts. "Extracting features" includes tokenization and stopword filtering. So in all, I can load up 18.8k documents and train a naive Bayes classifier on 11k of them in five seconds, or an SVM in ten seconds. That means solving a 20×130k dimensional optimization problem.
I advise you to re-run this example on your machine, because the actual time taken depends on a lot of factors including the speed of the disk.
[Disclaimer: I'm one of the scikit-learn developers.]
|
[This Scikit page](http://scikit-learn.org/stable/modules/scaling_strategies.html) may provide some answers if you are experiencing some issues with the amount of data that you are trying to load.
As stated in your other question on data size [concerning Weka](https://stackoverflow.com/questions/25979182/how-much-text-can-weka-handle), I agree with ealdent that your dataset does seem small (unless you have an exceptionally large number of features) and it should not be a problem to load a dataset of this size into memory.
Hope this helps!
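The scaling strategies on that page mostly reduce to streaming the corpus in minibatches to an estimator's `partial_fit`; the batching itself needs nothing beyond plain Python. A generic sketch, independent of scikit-learn:

```python
def minibatches(stream, batch_size):
    """Yield lists of up to batch_size items from any iterable."""
    batch = []
    for item in stream:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly shorter, batch
        yield batch

# e.g. 2500 opinions in batches of 1000 -> batch sizes 1000, 1000, 500
sizes = [len(b) for b in minibatches(range(2500), 1000)]
print(sizes)  # [1000, 1000, 500]
```

Each yielded batch would be vectorized and passed to an out-of-core estimator such as one supporting `partial_fit`, so the full corpus never has to sit in memory at once.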
|
25,985,491
|
I have a sentiment analysis task and I need to specify how much data (in my case, text) scikit-learn can handle. I have a corpus of 2,500 opinions, already tagged. I know that it's a small corpus, but my thesis advisor is asking me to argue specifically how much data scikit-learn can handle. She has her doubts about Python/scikit-learn and wants facts about how many text samples, features, and related parameters scikit-learn can handle.
|
2014/09/23
|
[
"https://Stackoverflow.com/questions/25985491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3930105/"
] |
Here are some timings for scikit-learn's document classification example on my machine (Python 2.7, NumPy 1.8.2, SciPy 0.13.3, scikit-learn 0.15.2, Intel Core i7-3540M laptop running on battery power). The dataset is twenty newsgroups; I've trimmed the output quite a bit.
```
$ python examples/document_classification_20newsgroups.py --all_categories
data loaded
11314 documents - 22.055MB (training set)
7532 documents - 13.801MB (test set)
20 categories
Extracting features from the training dataset using a sparse vectorizer
done in 2.849053s at 7.741MB/s
n_samples: 11314, n_features: 129792
Extracting features from the test dataset using the same vectorizer
done in 1.526641s at 9.040MB/s
n_samples: 7532, n_features: 129792
________________________________________________________________________________
Training:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.001, verbose=0)
train time: 5.274s
test time: 0.033s
f1-score: 0.860
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0,
fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',
loss='hinge', n_iter=50, n_jobs=1, penalty='l2', power_t=0.5,
random_state=None, shuffle=False, verbose=0, warm_start=False)
train time: 3.521s
test time: 0.038s
f1-score: 0.857
dimensionality: 129792
density: 0.390184
________________________________________________________________________________
Training:
MultinomialNB(alpha=0.01, class_prior=None, fit_prior=True)
train time: 0.161s
test time: 0.036s
f1-score: 0.836
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
BernoulliNB(alpha=0.01, binarize=0.0, class_prior=None, fit_prior=True)
train time: 0.167s
test time: 0.153s
f1-score: 0.761
dimensionality: 129792
density: 1.000000
```
Timings for dataset loading aren't shown, but it didn't take more than half a second; the input is a zipfile containing texts. "Extracting features" includes tokenization and stopword filtering. So in all, I can load up 18.8k documents and train a naive Bayes classifier on 11k of them in five seconds, or an SVM in ten seconds. That means solving a 20×130k dimensional optimization problem.
I advise you to re-run this example on your machine, because the actual time taken depends on a lot of factors including the speed of the disk.
[Disclaimer: I'm one of the scikit-learn developers.]
|
The question is not really about `scikit-learn`; it's about which algorithms you want to use. Most of `scikit-learn`'s internals are implemented in `C` or `Fortran`, so it's quite efficient. For example, `scikit-learn`'s random forest is the fastest out there ([see page 116 in this link](http://www.montefiore.ulg.ac.be/%7Eglouppe/pdf/phd-thesis.pdf)). For converting text to vectors, I have had success with a dataset of 50k documents running on my desktop in a couple of seconds and a few GB of memory. You can do even better than that if you are willing to use a hashing vectoriser (in exchange for not being able to interpret the feature vocabulary). In terms of classifiers, Naive Bayes takes `O(NVC)` time, where `N` is the number of documents, `V` is the number of features and `C` is the number of classes. A non-linear SVM may give you better results, but will take **much** longer. If you start storing stuff in dense numpy matrices (as opposed to sparse scipy ones) you are much more likely to run out of memory.
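The closing warning about dense matrices is easy to quantify: for a bag-of-words matrix, dense and sparse storage differ by orders of magnitude. A back-of-the-envelope sketch using the 20-newsgroups shapes quoted earlier; the per-document term count is an assumption:

```python
def dense_bytes(n_docs, n_features, itemsize=8):
    # dense float64 matrix: every cell is stored
    return n_docs * n_features * itemsize

def csr_bytes(n_nonzero, n_docs, itemsize=8, indexsize=4):
    # CSR layout: one value + one column index per nonzero,
    # plus one row pointer per row (and one extra)
    return n_nonzero * (itemsize + indexsize) + (n_docs + 1) * indexsize

n_docs, n_features = 11314, 129792   # training-set shape from the output above
avg_terms_per_doc = 100              # assumed average density per document

dense = dense_bytes(n_docs, n_features)
sparse = csr_bytes(n_docs * avg_terms_per_doc, n_docs)
print(dense // 2**20, "MB dense vs", sparse // 2**20, "MB sparse")
```

Under these assumptions the dense representation needs roughly a thousand times more memory, which is why the sparse scipy matrices matter for text.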
|
9,291,036
|
I'm putting together a basic photo album on App Engine using Python 2.7. I have written the following method to retrieve image details from the datastore matching a particular "adventure". I'm using limits and offsets for pagination; however, it is very inefficient. After browsing 5 pages (of 5 photos per page) I've already used 16% of my Datastore Small Operations quota. Interestingly, I've only used 1% of my Datastore Read Operations. How can I make this more efficient in terms of small datastore operations? I'm not sure what these consist of.
```
def grab_images(adventure, the_offset=0, the_limit = 10):
logging.info("grab_images")
the_photos = None
the_photos = PhotosModel.all().filter("adventure =", adventure)
total_number_of_photos = the_photos.count()
all_photos = the_photos.fetch(limit = the_limit, offset = the_offset)
total_number_of_pages = total_number_of_photos / the_limit
all_photo_keys = []
for photo in all_photos:
all_photo_keys.append(str(photo.blob_key.key()))
return all_photo_keys, total_number_of_photos, total_number_of_pages
```
|
2012/02/15
|
[
"https://Stackoverflow.com/questions/9291036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/714852/"
] |
A few things:
1. You don't need to call count every time; you can cache it.
2. The same goes for the query: why run it every time? Cache it as well.
3. Cache the pages too; you shouldn't recompute the data for each page on every request.
4. You only need the blob\_key, but you are loading the entire photo entity. Try to model it so that you don't have to load all the Photo attributes.
Nitpicking: you don't need `the_photos = None`.
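For point 1, even a simple in-process memo stops the count from being recomputed on every request. A generic sketch (on App Engine you would more likely use memcache; the function names here are hypothetical):

```python
_count_cache = {}

def cached_count(adventure, count_fn):
    """Return a cached count per adventure, computing it only once."""
    if adventure not in _count_cache:
        _count_cache[adventure] = count_fn()
    return _count_cache[adventure]

calls = []
def fake_count():
    calls.append(1)          # track how often the expensive count runs
    return 42                # stand-in for the_photos.count()

print(cached_count('hike', fake_count))  # computes: 42
print(cached_count('hike', fake_count))  # served from cache: 42
print(len(calls))                        # 1 -- counted only once
```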
|
The way you handle paging is inefficient because the datastore still scans every record before the offset in order to deliver the data. You should consider building the paging mechanism using the cursor/bookmark methods described by Google: <http://code.google.com/appengine/articles/paging.html>.
Using this method you only go through the items you need for each page. I also urge you to cache properly, as Shay suggested; it's both faster and cheaper.
|
9,291,036
|
I'm putting together a basic photo album on App Engine using Python 2.7. I have written the following method to retrieve image details from the datastore matching a particular "adventure". I'm using limits and offsets for pagination; however, it is very inefficient. After browsing 5 pages (of 5 photos per page) I've already used 16% of my Datastore Small Operations quota. Interestingly, I've only used 1% of my Datastore Read Operations. How can I make this more efficient in terms of small datastore operations? I'm not sure what these consist of.
```
def grab_images(adventure, the_offset=0, the_limit = 10):
logging.info("grab_images")
the_photos = None
the_photos = PhotosModel.all().filter("adventure =", adventure)
total_number_of_photos = the_photos.count()
all_photos = the_photos.fetch(limit = the_limit, offset = the_offset)
total_number_of_pages = total_number_of_photos / the_limit
all_photo_keys = []
for photo in all_photos:
all_photo_keys.append(str(photo.blob_key.key()))
return all_photo_keys, total_number_of_photos, total_number_of_pages
```
|
2012/02/15
|
[
"https://Stackoverflow.com/questions/9291036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/714852/"
] |
A few things:
1. You don't need to call count every time; you can cache it.
2. The same goes for the query: why run it every time? Cache it as well.
3. Cache the pages too; you shouldn't recompute the data for each page on every request.
4. You only need the blob\_key, but you are loading the entire photo entity. Try to model it so that you don't have to load all the Photo attributes.
Nitpicking: you don't need `the_photos = None`.
|
You may want to consider moving to the new NDB API. Its use of futures, caches and autobatching may help you a lot. Explicit is better than implicit, but NDB's management of the details makes your code simpler and more readable.
BTW, did you try using Appstats to see how your requests are consuming server resources?
|
9,291,036
|
I'm putting together a basic photo album on App Engine using Python 2.7. I have written the following method to retrieve image details from the datastore matching a particular "adventure". I'm using limits and offsets for pagination; however, it is very inefficient. After browsing 5 pages (of 5 photos per page) I've already used 16% of my Datastore Small Operations quota. Interestingly, I've only used 1% of my Datastore Read Operations. How can I make this more efficient in terms of small datastore operations? I'm not sure what these consist of.
```
def grab_images(adventure, the_offset=0, the_limit = 10):
logging.info("grab_images")
the_photos = None
the_photos = PhotosModel.all().filter("adventure =", adventure)
total_number_of_photos = the_photos.count()
all_photos = the_photos.fetch(limit = the_limit, offset = the_offset)
total_number_of_pages = total_number_of_photos / the_limit
all_photo_keys = []
for photo in all_photos:
all_photo_keys.append(str(photo.blob_key.key()))
return all_photo_keys, total_number_of_photos, total_number_of_pages
```
|
2012/02/15
|
[
"https://Stackoverflow.com/questions/9291036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/714852/"
] |
The way you handle paging is inefficient because the datastore still scans every record before the offset in order to deliver the data. You should consider building the paging mechanism using the cursor/bookmark methods described by Google: <http://code.google.com/appengine/articles/paging.html>.
Using this method you only go through the items you need for each page. I also urge you to cache properly, as Shay suggested; it's both faster and cheaper.
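The cost difference is easy to see in a toy model: an offset query walks past every skipped record, while a cursor resumes where the last page ended. A pure-Python sketch, not the actual Datastore API:

```python
def offset_page(records, offset, limit):
    # The datastore charges for every record skipped by the offset
    scanned = records[:offset + limit]
    return records[offset:offset + limit], len(scanned)

def cursor_page(records, cursor, limit):
    # A cursor encodes the resume position, so only `limit` records are read
    page = records[cursor:cursor + limit]
    return page, cursor + len(page), len(page)

records = list(range(100))                       # pretend datastore rows
page_o, cost_offset = offset_page(records, 10, 5)
page_c, cursor, cost_cursor = cursor_page(records, 10, 5)
print(cost_offset, cost_cursor)  # 15 vs 5 reads for the same page
```

Same page of results, but the offset version pays for 15 reads while the cursor version pays for 5; the gap grows linearly with the page number.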
|
You may want to consider moving to the new NDB API. Its use of futures, caches and autobatching may help you a lot. Explicit is better than implicit, but NDB's management of the details makes your code simpler and more readable.
BTW, did you try Appstats to see how your requests are using the server's resources?
|
35,390,152
|
I am new to Selenium with Python. I tried this sample test script.
```
from selenium import webdriver

def browser():
    driver = webdriver.Firefox()
    driver.delete_all_cookies()
    driver.get('http://www.gmail.com/')
    driver.maximize_window()
    driver.save_screenshot(r'D:\Python Programs\Screen shots\TC_01.png')
    driver.find_element_by_xpath("//*[@id='next']").click()
    message = driver.find_element_by_xpath("//*[@id='errormsg_0_Email']")
    driver.save_screenshot(r'D:\Python Programs\Screen shots\TC_03.png')
    name = driver.find_element_by_xpath("//*[@id='Email']").send_keys('gmail')
    driver.save_screenshot(r'D:\Python Programs\Screen shots\TC_02.png')
    print name
    driver.find_element_by_xpath("//*[@id='next']").click()
    password = driver.find_element_by_xpath("//*[@id='Passwd']").send_keys('password')
    driver.save_screenshot(r'D:\Python Programs\Screen shots\TC_03.png')
    print password
    driver.find_element_by_xpath("//*[@id='signIn']").click()
    driver.implicitly_wait(10)
    driver.quit()

i = browser()
```
The script runs up to the password step, after which I get this error:
```
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"//*[@id='Passwd']"}
Stacktrace:.
```
|
2016/02/14
|
[
"https://Stackoverflow.com/questions/35390152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5861605/"
] |
Just use [pandas](http://pandas.pydata.org/):
```
In [1]: import pandas as pd
```
Change the number of decimals:
```
In [2]: pd.options.display.float_format = '{:.3f}'.format
```
Make a data frame:
```
In [3]: df = pd.DataFrame({'gof_test': gof_test, 'gof_train': gof_train})
```
and display:
```
In [4]: df
Out [4]:
```
[](https://i.stack.imgur.com/WnX8M.png)
Another option is the [engineering prefix](http://pandas.pydata.org/pandas-docs/stable/options.html#number-formatting):
```
In [5]: pd.set_eng_float_format(use_eng_prefix=True)
df
Out [5]:
```
[](https://i.stack.imgur.com/p9i0s.png)
```
In [6]: pd.set_eng_float_format()
df
Out [6]:
```
[](https://i.stack.imgur.com/lZQLq.png)
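In case the screenshots don't render, here is the same sequence as one self-contained script with text output (the `gof_*` values here are invented):

```python
import pandas as pd

# Invented sample data standing in for the question's gof_train / gof_test.
gof_train = [1 / 3, 0.123456, 12345.6789]
gof_test = [2 / 3, 0.654321, 98765.4321]
df = pd.DataFrame({'gof_test': gof_test, 'gof_train': gof_train})

# Three decimals, as in In [2] above.
pd.options.display.float_format = '{:.3f}'.format
print(df)

# Engineering notation with SI prefixes (k, M, ...), as in In [5].
pd.set_eng_float_format(use_eng_prefix=True)
print(df)

# Engineering notation without prefixes (E+03, ...), as in In [6].
pd.set_eng_float_format()
print(df)
```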
|
Indeed you cannot affect the display of the Python output with CSS, but you can give your results to a formatting function that will take care of making it "beautiful". In your case, you could use something like this:
```
def compare_dicts(dict1, dict2, col_width):
    print('{' + ' ' * (col_width-1) + '{')
    for k1, v1 in dict1.items():
        col1 = u" %s: %.3f," % (k1, v1)
        padding = u' ' * (col_width - len(col1))
        line = col1 + padding
        if k1 in dict2:
            line = u"%s %s: %.3f," % (line, k1, dict2[k1])
        print(line)
    print('}' + ' ' * (col_width-1) + '}')

dict1 = {
    'foo': 0.43657,
    'foobar': 1e6,
    'bar': 0,
}
dict2 = {
    'bar': 1,
    'foo': 0.43657,
}

compare_dicts(dict1, dict2, 25)
```
This gives:
```
{                        {
 foobar: 1000000.000,
 foo: 0.437,              foo: 0.437,
 bar: 0.000,              bar: 1.000,
}                        }
```
|
42,449,296
|
In OpenAPI, inheritance is achieved with `allOf`. For instance, in [this example](https://github.com/OAI/OpenAPI-Specification/blob/master/examples/v2.0/json/petstore-simple.json):
```js
"definitions": {
"Pet": {
"type": "object",
"allOf": [
{
"$ref": "#/definitions/NewPet" # <--- here
},
[...]
]
},
"NewPet": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
}
},
```
I didn't find anything in the spec about multiple inheritance. For instance, what if Pet inherits from both NewPet and OldPet?
In Python, I would write
```python
class Pet(NewPet, OldPet):
...
```
and the Method Resolution Order is deterministic about which parent class is checked first for methods/attributes, so I can tell whether Pet.age will be NewPet.age or OldPet.age.
So what if I let Pet inherit from both NewPet and OldPet, where the name property is defined in both schemas with a different type in each? What will Pet.name be?
```js
"definitions": {
"Pet": {
"type": "object",
"allOf": [
{
"$ref": "#/definitions/NewPet" # <--- multiple inheritance
"$ref": "#/definitions/OldPet"
},
[...]
]
},
"NewPet": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
}
},
"OldPet": {
"type": "object",
"properties": {
"name": {
"type": "integer" # <-- name redefined here, different type
},
}
},
```
Will OldPet take precedence? NewPet? Is it undefined/invalid? Is it application-defined?
I tried this in [swagger-editor](http://editor.swagger.io/). Apparently the schema with two refs is valid, but swagger-editor does not resolve the properties; it just displays the `allOf` with the two references.
Edit: According to [this tutorial](https://apihandyman.io/writing-openapi-swagger-specification-tutorial-part-4-advanced-data-modeling/#combining-multiple-definitions-to-ensure-consistency), having several refs in the same `allOf` is valid. But nothing is said about the case where both schemas define the same attribute with a different type.
|
2017/02/24
|
[
"https://Stackoverflow.com/questions/42449296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4653485/"
] |
JSON Schema doesn't support inheritance. The behavior of `allOf` can sometimes look like inheritance, but if you think about it that way you will end up getting confused.
The `allOf` keyword means just what it says: all of the schemas must validate. Nothing gets overridden or takes precedence over anything else. Everything must validate.
In your example, a JSON value must validate against both "NewPet" and "OldPet" in full. Because there is no JSON value that can validate as both a string and an integer, validation of the "name" property will always fail to validate against either "NewPet" or "OldPet" (or both). Therefore the "Pet" schema will never validate against any given JSON value.
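A way to convince yourself of this, as a plain-Python sketch rather than a real JSON Schema validator: no value passes both a `string` and an `integer` type check, so the combined schema is unsatisfiable.

```python
# Plain-Python stand-in for JSON Schema type checks (not a real validator).
def validates(value, json_type):
    checks = {
        "string": lambda v: isinstance(v, str),
        # bool is excluded: JSON Schema does not treat true/false as integers.
        "integer": lambda v: isinstance(v, int) and not isinstance(v, bool),
    }
    return checks[json_type](value)

def all_of(value, *json_types):
    # allOf semantics: every listed schema must validate.
    return all(validates(value, t) for t in json_types)

print(all_of("Rex", "string"))             # True: fine against NewPet alone
print(all_of("Rex", "string", "integer"))  # False: can't also be an integer
print(all_of(42, "string", "integer"))     # False: can't also be a string
```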
|
One thing to consider is what inheritance means. In the Eclipse Modeling Framework, trying to create a class that extends 2 classes with the same attribute is an error. Nevertheless, I consider that multiple inheritance.
This is called the Diamond Problem. See <https://en.wikipedia.org/wiki/Multiple_inheritance>
|
60,942,530
|
I'm working on a Django project, building and testing with a database on GCP. It's full of test data and kind of a mess.
Now I want to release the app with a fresh new database.
How do I migrate to the new database, with all those files in the `migrations/` folder?
I don't want to delete the folder because development might continue.
Data does not need to be preserved; it's test data only.
Django version is 2.2;
Python 3.7
Thank you.
========= update
After changing the `settings.py`, `python manage.py makemigrations` says no changes detected.
Then I did `python manage.py migrate`, and now it complains that a relation does not exist.
=============== update2
The problem seems to be that I had a table named `Customer` and changed it to `Client`. Now it's complaining: "psycopg2.errors.UndefinedTable: relation "app_customer" does not exist".
How can I fix it, ideally without deleting all files in `migrations/`?
================ update final
After eliminating all other possibilities, I found out that the "new" database was not new at all; I had run migrations on it some months ago.
Now I created a fresh new one and `migrate` worked like a charm.
Again, thank you all for your suggestions.
|
2020/03/31
|
[
"https://Stackoverflow.com/questions/60942530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4971866/"
] |
>
> or maybe a different data structure to keep both the name and the value of the variable?
>
>
>
Ding ding! You want to use a map for this. A map stores a key and a value as a pair. To give you a visual representation, a map looks like this:
```
Cats: 5
Dogs: 3
Horses: 2
```
Using a map you can access both the key (name of the word you found) and its paired value (number of times you found it in your file)
[How to use maps](http://tutorials.jenkov.com/java-collections/map.html#implementations)
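The link above covers Java's `Map` implementations; the idea itself is language-agnostic. A quick sketch in Python (with an invented word list), where the key is the word and the value is its count:

```python
# Invented word list; in the real program these would come from the file.
words = ["cats", "dogs", "cats", "horses", "dogs",
         "cats", "horses", "dogs", "cats", "cats"]

# The map: key = word, value = how many times it was seen.
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1

print(counts)  # {'cats': 5, 'dogs': 3, 'horses': 2}
```

The Java version is the same loop with `map.getOrDefault(word, 0) + 1` on a `HashMap<String, Integer>`.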
|
You might want to consider a priority queue in Java. But first you need a class which has 2 attributes (the word and its quantity). This class should implement Comparable, and the compareTo method should be overridden. Here's an example: <https://howtodoinjava.com/java/collections/java-priorityqueue/>
|
37,972,029
|
[PEP 440](https://www.python.org/dev/peps/pep-0440) lays out what is the accepted format for version strings of Python packages.
These can be simple, like: `0.0.1`
Or complicated, like: `2016!1.0-alpha1.dev2`
What is a suitable regex which could be used for finding and validating such strings?
|
2016/06/22
|
[
"https://Stackoverflow.com/questions/37972029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1728179/"
] |
I had the same question. This is the most thorough regex pattern I could find. PEP 440 links to the codebase of the packaging library in its references section.
```
pip install packaging
```
To access just the pattern string, you can use the module-level constant:
```
from packaging import version
version.VERSION_PATTERN
```
See: <https://github.com/pypa/packaging/blob/21.3/packaging/version.py#L225-L254>
```py
VERSION_PATTERN = r"""
    v?
    (?:
        (?:(?P<epoch>[0-9]+)!)?                           # epoch
        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
        (?P<pre>                                          # pre-release
            [-_\.]?
            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
            [-_\.]?
            (?P<pre_n>[0-9]+)?
        )?
        (?P<post>                                         # post release
            (?:-(?P<post_n1>[0-9]+))
            |
            (?:
                [-_\.]?
                (?P<post_l>post|rev|r)
                [-_\.]?
                (?P<post_n2>[0-9]+)?
            )
        )?
        (?P<dev>                                          # dev release
            [-_\.]?
            (?P<dev_l>dev)
            [-_\.]?
            (?P<dev_n>[0-9]+)?
        )?
    )
    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
"""
```
Of course this example is specific to Python's flavor of regex.
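Note that the pattern only works when compiled with `re.VERBOSE` (it is full of whitespace and comments) and `re.IGNORECASE`, and it is not anchored by itself; this is how `packaging` compiles it internally. A self-contained check, with the pattern inlined verbatim:

```python
import re

# VERSION_PATTERN copied verbatim from packaging 21.3 (linked above).
VERSION_PATTERN = r"""
    v?
    (?:
        (?:(?P<epoch>[0-9]+)!)?                           # epoch
        (?P<release>[0-9]+(?:\.[0-9]+)*)                  # release segment
        (?P<pre>                                          # pre-release
            [-_\.]?
            (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
            [-_\.]?
            (?P<pre_n>[0-9]+)?
        )?
        (?P<post>                                         # post release
            (?:-(?P<post_n1>[0-9]+))
            |
            (?:
                [-_\.]?
                (?P<post_l>post|rev|r)
                [-_\.]?
                (?P<post_n2>[0-9]+)?
            )
        )?
        (?P<dev>                                          # dev release
            [-_\.]?
            (?P<dev_l>dev)
            [-_\.]?
            (?P<dev_n>[0-9]+)?
        )?
    )
    (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))?       # local version
"""

# Anchor the pattern and compile it the way packaging does.
version_re = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$",
                        re.VERBOSE | re.IGNORECASE)

for candidate in ["0.0.1", "2016!1.0-alpha1.dev2", "not a version"]:
    print(candidate, "->", bool(version_re.match(candidate)))
```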
|
I think this should comply with PEP440:
```
^(\d+!)?(\d+)(\.\d+)+([\.\-\_])?((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?(\.?(post|dev)\d*)?$
```
### Explained
Epoch, e.g. `2016!`:
```
(\d+!)?
```
Version parts (major, minor, patch, etc.):
```
(\d+)(\.\d+)+
```
Acceptable separators (`.`, `-` or `_`):
```
([\.\-\_])?
```
Possible pre-release flags (and their normalisations, as well as the post-release flags `r` or `rev`), optionally followed by digits:
```
((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?
```
Post-release or dev-release flags, optionally followed by digits:
```
(\.?(post|dev)\d*)?
```
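A quick sanity check of this pattern against the question's examples. One caveat: `(\d+)(\.\d+)+` requires at least two release segments, so a bare `1` will not match:

```python
import re

# The candidate pattern from above, split across raw strings for readability.
candidate_re = re.compile(
    r'^(\d+!)?(\d+)(\.\d+)+([\.\-\_])?'
    r'((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?'
    r'(\.?(post|dev)\d*)?$'
)

for candidate in ["0.0.1", "2016!1.0-alpha1.dev2", "1"]:
    print(candidate, "->", bool(candidate_re.match(candidate)))
```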
|