| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, length 3) | response_j (string, 29–22k chars) | response_k (string, 26–13.4k chars) | `__index_level_0__` (int64, 0–17.8k) |
|---|---|---|---|---|---|---|
23,785,259
|
I'm new to Python 3.4 and I'll be using it for my internship next month. My instructor gave me a task to practice on before I start: he gave me a set of data and asked me to figure out how to load it. However, it keeps showing me this:
```
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
raindata = loadtxt('slz_chuva.txt', comments='#', delimiter=',')
File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 848, in <listcomp>
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: could not convert string to float: b'A203'
```
and this is my code:
```
from scipy import loadtxt
raindata = loadtxt('slz_chuva.txt', comments='#', delimiter= ',')
```
and this is my data:
codigo_estacao,data,hora,temp_inst,temp_max,temp_min,umid_inst,umid_max,umid_min,pto_orvalh#o_inst,pto_orvalho_max,pto_orvalho_min,pressao,pressao_max,pressao_min,vento_direcao,vento_vel,vento_rajada,radiacao,precipitacao
===============================================================================================================================================================================================================================================
A203,09/05,2014,00,24.8,24.8,24.5,95,95,94,23.9,24.0,23.7,1006.3,1006.3,1005.7,0.3,24,1.8,-3.08,0.0
A203,09/05/2014,01,24.5,24.8,24.5,95,95,95,23.7,24.0,23.7,1006.9,1006.9,1006.3,0.0,30,1.7,-2.78,0.0
A203,09/05/2014,02,24.6,24.6,24.4,96,96,95,23.8,23.8,23.7,1006.6,1006.9,1006.6,0.3,42,1.7,-2.86,0.0
A203,09/05/2014,03,24.8,25.0,24.5,96,96,95,24.1,24.2,23.8,1006.2,1006.6,1006.2,0.0,51,1.8,-1.70,0.0
Could someone help me out?
thanks
|
2014/05/21
|
[
"https://Stackoverflow.com/questions/23785259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3661013/"
] |
On the SelectionChanged event you can do this:
```
private void dataGridView1_SelectionChanged(object sender, EventArgs e)
{
if (dataGridView1.SelectedCells.Count > 2)
{
dataGridView1.SelectedCells[0].Selected = false;
}
}
```
This will prevent/undo selecting any more cells after selecting two.
For whole rows:
```
private void dataGridView1_SelectionChanged(object sender, EventArgs e)
{
if (dataGridView1.SelectedRows.Count > 2)
{
dataGridView1.SelectedRows[0].Selected = false;
}
}
```
|
You could try overriding `SetSelectedRowCore`, calling the base implementation with your new limitation added to the `selected` condition.
```
protected override void SetSelectedRowCore(int rowIndex, bool selected)
{
    // currentSelection and allowedSelectionCount are counters you maintain yourself
    base.SetSelectedRowCore(rowIndex, selected && currentSelection < allowedSelectionCount);
}
```
[SetSelectedRowCore](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.setselectedrowcore%28v=vs.110%29.aspx)
| 5,926
|
38,994,265
|
When we
input: 10
output: 01 02 03 04 05 06 07 08 09 10
When we
input: 103
output: 001 002 003 ... 010 011 012 013 ... 100 101 102 103
How can I create this sequence in Ruby or Python?
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/38994265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Ruby implementation:
```
# chomp strips the trailing newline so the padding width is correct
n = gets.chomp
p (1..n.to_i).map { |i| i.to_s.rjust(n.length, "0") }.join(" ")
```
Here `rjust` will add leading zeros.
|
A very basic Python implementation. Note that it's a generator so it returns one value at a time.
```
def get_range(n):
    len_n = len(str(n))
    for num in range(1, n + 1):
        output = str(num)
        while len(output) < len_n:
            output = '0' + output
        yield output

for i in get_range(100):
    print(i)
```
Output:
```
001
002
...
009
010
011
...
099
100
```
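The manual padding loop can also be written with `str.zfill`, which pads a string with leading zeros to a given width (a small sketch, not part of the original answer):

```python
def get_range(n):
    # zfill pads the string with leading zeros up to the width of n
    width = len(str(n))
    for num in range(1, n + 1):
        yield str(num).zfill(width)

print(" ".join(get_range(10)))  # 01 02 03 04 05 06 07 08 09 10
```

An f-string with a dynamic width, `f"{num:0{width}d}"`, does the same thing.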
| 5,928
|
51,325,955
|
I am trying to scrape a website using the `Selenium` `Firefox` (headless) driver in `python`.
I read all the anchors on the webpage and go through them one by one. But I want the browser to wait for the Ajax calls on each page to finish before moving to another page.
My code is the following:
```
import time
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
caps = DesiredCapabilities().FIREFOX
caps["pageLoadStrategy"] = "eager" # complete
options = Options()
options.add_argument("--headless")
url = "http://localhost:3000/"
# Using Selenium's webdriver to open the page
driver = webdriver.Firefox(desired_capabilities=caps,firefox_options=options)
driver.get(url)
urls = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "a")))
links = []
for url in urls:
links.append(url.get_attribute("href"))
for link in links:
print 'navigating to: ' + link
driver.get(link)
body = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.TAG_NAME, "p")))
driver.execute_script("window.scrollTo(0,1000);")
print(body)
driver.back()
driver.quit()
```
The line `print(body)` was added for testing purposes, and it returned incomprehensible text instead of the actual HTML of the page. Here's a part of the printed text:
```
[<selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="e7dfa6b2-1ddf-438d-b562-1e2ac8416e07")>, <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="6fe1ffb0-17a8-4b64-9166-691478a0bbd4")>, <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="1f510a00-a587-4ae8-9ecf-dd4c90081a5a")>, <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="c1bfb1cd-5ccf-42b6-ad4c-c1a70486cc98")>, <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="be44db09-3948-48f1-8505-937db509a157")>, <selenium.webdriver.firefox.webelement.FirefoxWebElement (session="fb183e8b-ce36-47e7-a03e-d3aeea376304", element="68f3c9f2-80b0-493e-a47f-ad69caceaa06")>,
```
What is causing this?
Everything (content related) in the pages I'm scraping is static.
|
2018/07/13
|
[
"https://Stackoverflow.com/questions/51325955",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2378622/"
] |
You should add a `0` in your conversion specification to indicate that you want zero-padding:
```
$test = sprintf('%06d', rand(1, 1000000));
// ^-- here
```
The conversion specifications are documented on [the `sprintf` manual page](http://php.net/manual/en/function.sprintf.php).
|
You can just replace the padding spaces with `0`:
```
$test = str_replace(" ", "0", sprintf('%6d', rand(1, 1000000)));
```
| 5,933
|
39,948,588
|
How can I read the contents of a binary or a text file in a non-blocking mode?
For binary files: when I `open(filename, mode='rb')`, I get an instance of `io.BufferedReader`. The documentation for `io.BufferedReader.read` [says](https://docs.python.org/3.5/library/io.html#io.BufferedReader.read):
>
> Read and return size bytes, or if size is not given or negative, until EOF or if the read call would block in non-blocking mode.
>
>
>
Obviously a straightforward `open(filename, 'rb').read()` is in a blocking mode. To my surprise, I could not find an explanation anywhere in the `io` docs of how to choose the non-blocking mode.
For text files: when I `open(filename, mode='rt')`, I get `io.TextIOWrapper`. I assume the relevant docs are those for `read` in its base class, `io.TextIOBase`; and [according to those docs](https://docs.python.org/3.5/library/io.html#io.TextIOBase.read), there seems to be no way to do a non-blocking read at all:
>
> Read and return at most size characters from the stream as a single str. If size is negative or None, reads until EOF.
>
>
>
|
2016/10/09
|
[
"https://Stackoverflow.com/questions/39948588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/336527/"
] |
File operations are blocking. There is no non-blocking mode.
But you can create a thread which reads the file in the background. In Python 3, [`concurrent.futures` module](https://docs.python.org/3.4/library/concurrent.futures.html#module-concurrent.futures) can be useful here.
```
from concurrent.futures import ThreadPoolExecutor

def read_file(filename):
    with open(filename, 'rb') as f:
        return f.read()

executor = ThreadPoolExecutor(1)  # one worker thread
future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')
# continue with other work
# later:
if future_file.done():
    file_contents = future_file.result()
```
Or, if you need a callback to be called when the operation is done:
```
def on_file_reading_finished(future_file):
print(future_file.result())
future_file = executor.submit(read_file, 'C:\\Temp\\mocky.py')
future_file.add_done_callback(on_file_reading_finished)
# continue with other code while the file is loading...
```
|
I suggest using [**aiofiles**](https://github.com/Tinche/aiofiles) - a library for handling local disk files in asyncio applications.
```
import aiofiles

async def read_without_blocking():
    f = await aiofiles.open('filename', mode='r')
    try:
        contents = await f.read()
    finally:
        await f.close()
    return contents
```
| 5,935
|
48,146,921
|
I have a script that builds llvm/clang 3.4.2 from source
(with configure+make).
**It runs smoothly on Ubuntu 14.04.5 LTS.**
When I upgraded to **Ubuntu 17.04**, the build fails.
Here is the building script:
```
svn co https://llvm.org/svn/llvm-project/llvm/tags/RELEASE_342/final llvm
svn co https://llvm.org/svn/llvm-project/cfe/tags/RELEASE_342/final llvm/tools/clang
svn co https://llvm.org/svn/llvm-project/compiler-rt/tags/RELEASE_342/final llvm/projects/compiler-rt
svn co https://llvm.org/svn/llvm-project/libcxx/tags/RELEASE_342/final llvm/projects/libcxx
rm -rf llvm/.svn
rm -rf llvm/tools/clang/.svn
rm -rf llvm/projects/compiler-rt/.svn
rm -rf llvm/projects/libcxx/.svn
cd llvm
./configure \
--enable-optimized \
--disable-assertions \
--enable-targets=host \
--with-python="/usr/bin/python2"
make -j `nproc`
```
Here are the errors I get (TLDR: problems with definitions of **malloc**, **calloc**, **realloc** and **free**)
```
/usr/include/malloc.h:38:14: error: declaration conflicts with target of using declaration already in scope
extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
^
/usr/include/stdlib.h:427:14: note: target of using declaration
extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:65:12: note: using declaration
using std::malloc;
^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:41:14: error: declaration conflicts with target of using declaration already in
scope
extern void *calloc (size_t __nmemb, size_t __size)
^
/usr/include/stdlib.h:429:14: note: target of using declaration
extern void *calloc (size_t __nmemb, size_t __size)
^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:59:12: note: using declaration
using std::calloc;
^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:49:14: error: declaration conflicts with target of using declaration already in
scope
extern void *realloc (void *__ptr, size_t __size)
^
/usr/include/stdlib.h:441:14: note: target of using declaration
extern void *realloc (void *__ptr, size_t __size)
^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:73:12: note: using declaration
using std::realloc;
^
In file included from /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_platform_linux.cc:47:
/usr/include/malloc.h:53:13: error: declaration conflicts with target of using declaration already in
scope
extern void free (void *__ptr) __THROW;
^
/usr/include/stdlib.h:444:13: note: target of using declaration
extern void free (void *__ptr) __THROW;
^
/usr/lib/gcc/x86_64-linux-gnu/6.3.0/../../../../include/c++/6.3.0/stdlib.h:61:12: note: using declaration
using std::free;
^
COMPILE: clang_linux/tsan-x86_64/x86_64: /home/oren/GIT/LatestKlee/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cc
4 errors generated.
Makefile:267: recipe for target '/home/oren/GIT/LatestKlee/llvm/tools/clang/runtime/compiler-rt/clang_linux/tsan-x86_64/x86_64/SubDir.lib__tsan__rtl/tsan_platform_linux.o' failed
make[5]: *** [/home/oren/GIT/LatestKlee/llvm/tools/clang/runtime/compiler-rt/clang_linux/tsan-x86_64/x86_64/SubDir.lib__tsan__rtl/tsan_platform_linux.o] Error 1
```
The default gcc version shipped with ubuntu 17.04 is 6.3.
Maybe this is an issue of default C++ dialect used by gcc 6.3?
Any help is very much appreciated, thanks!
|
2018/01/08
|
[
"https://Stackoverflow.com/questions/48146921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3357352/"
] |
That seems to be an issue with LLVM 3.4.2's `tsan` (ThreadSanitizer) failing to build with GCC 6.x, as previously reported here:
<https://aur.archlinux.org/packages/clang34-analyzer-split>
It seems the inclusion of `stdlib.h` and `malloc.h` is conflicting, since both define `malloc` and friends.
It's possible that this issue only manifests in `tsan`, so if `tsan` is not instrumental to your LLVM build (which is very likely), and you wish to stick with the system gcc for building LLVM, you may consider disabling `tsan` completely.
If you're running a CMake build (as in [here](https://stackoverflow.com/questions/48188796/clang-4-0-fails-to-build-clang-3-42-on-ubuntu-17-04/48226872#48226872)), you can do so by commenting out line 29 of `llvm/projects/compiler-rt/lib/CMakeLists.txt`:
```
if (CMAKE_SYSTEM_NAME MATCHES "Linux" AND NOT ANDROID)
add_subdirectory(tsan) # comment out this line
```
If you're forced to stick to the `configure` build, my best guess would be removing the `tsan-x86_64` target in `llvm/projects/compiler-rt/make/clang_linux.mk`, line 63:
```
Configs += full-x86_64 profile-x86_64 san-x86_64 asan-x86_64 --> tsan-x86_64 <--
```
|
I faced the same problem on my Ubuntu 16.10.
It has gcc 6.2 by default. You need to instruct the LLVM build system to use gcc 4.9. I also suggest you remove GCC 6 completely.
```
$ sudo apt-get remove g++-6 gcc-6 cpp
$ sudo apt-get install gcc-4.9 g++-4.9
$ export CC=/usr/bin/gcc-4.9
$ export CXX=/usr/bin/g++-4.9
$ export CPP=/usr/bin/cpp-4.9
$ ./configure
$ make
```
And maybe you will need:
```
$ sudo ln -s /usr/bin/cpp-4.9 /usr/bin/cpp
```
| 5,937
|
35,245,401
|
I work with conda environments and need some pip packages as well, e.g. pre-compiled wheels from [~gohlke](http://www.lfd.uci.edu/~gohlke/pythonlibs/).
At the moment I have two files: `environment.yml` for conda with:
```
# run: conda env create --file environment.yml
name: test-env
dependencies:
- python>=3.5
- anaconda
```
and `requirements.txt` for pip which can be used after activating above conda environment:
```
# run: pip install -r requirements.txt
docx
gooey
http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```
Is there a possibility to combine them in one file (for conda)?
|
2016/02/06
|
[
"https://Stackoverflow.com/questions/35245401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5276734/"
] |
Pip dependencies can be included in the `environment.yml` file like this ([docs](https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually)):
```
# run: conda env create --file environment.yml
name: test-env
dependencies:
- python>=3.5
- anaconda
- pip
- numpy=1.13.3 # pin version for conda
- pip:
# works for regular pip packages
- docx
- gooey
- matplotlib==2.0.0 # pin version for pip
# and for wheels
- http://www.lfd.uci.edu/~gohlke/pythonlibs/bofhrmxk/opencv_python-3.1.0-cp35-none-win_amd64.whl
```
It also works for `.whl` files in the same directory (see [Dengar's answer](https://stackoverflow.com/a/41454032/5276734)) as well as with common pip packages.
|
Just want to add that placing the wheel in the same directory also works. I was getting this error when using the full URL:
```
HTTP error 404 while getting http://www.lfd.uci.edu/~gohlke/pythonlibs/f9r7rmd8/opencv_python-3.1.0-cp35-none-win_amd64.whl
```
I ended up downloading the wheel and saving it into the same directory as the yml file:
```
name: test-env
dependencies:
- python>=3.5
- anaconda
- pip
- pip:
- opencv_python-3.1.0-cp35-none-win_amd64.whl
```
| 5,938
|
55,746,170
|
I am trying to implement a neural network for an NLP task with a convolutional layer followed by an LSTM layer. I am currently experimenting with the new Tensorflow 2.0 to do this. However, when building the model, I've encountered an error that I could not understand.
```
# Input shape of training and validation set
(1000, 1, 512), (500, 1, 512)
```
**The model**
```
model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(None, 512)))
model.add(keras.layers.Conv1D(128, 1, activation="relu"))
model.add(keras.layers.MaxPooling1D((2)))
model.add(keras.layers.LSTM(64, activation="tanh"))
model.add(keras.layers.Dense(6))
model.add(keras.layers.Activation("softmax"))
```
**The error**
```
InvalidArgumentError: Tried to stack elements of an empty list with non-fully-defined element_shape: [?,64]
[[{{node unified_lstm_16/TensorArrayV2Stack/TensorListStack}}]] [Op:__inference_keras_scratch_graph_26641]
```
At first, I checked whether there are any known issues with combining a `Conv1D` layer and an `LSTM` layer. I found [this post](https://github.com/keras-team/keras/issues/129), which suggested reshaping the output between the convolutional layer and the LSTM layer, but that still did not work and I got a different error instead. [This post](https://stackoverflow.com/questions/55431081/how-to-connect-convlolutional-layer-with-lstm-in-tensorflow-keras) seems similar, but it does not use Tensorflow 2.0 and has no answer so far. I also found a post with the same intention of stacking convolutional and LSTM layers, but it uses `Conv2D` instead of `Conv1D`. [This post](https://stackoverflow.com/questions/35254138/python-keras-how-to-change-the-size-of-input-after-convolution-layer-into-lstm-l) also suggests reshaping the output of the convolutional layer with a built-in layer called `Reshape`. Yet I still got the same error.
I also tried to specify the `input_shape` in the LSTM layer.
```
model = keras.Sequential()
model.add(keras.layers.InputLayer(input_shape=(None, 512)))
model.add(keras.layers.Conv1D(128, 1, activation="relu"))
model.add(keras.layers.MaxPooling1D((2)))
model.add(keras.layers.LSTM(64, activation="tanh", input_shape=(None, 64)))
model.add(keras.layers.Dense(6))
model.add(keras.layers.Activation("softmax"))
```
And I still got the same error in the end.
I am not sure if I understand how to stack a 1-dimensional convolutional layer and an LSTM layer. I know that TF 2.0 is still an alpha, but can someone point out what I am missing? Thanks in advance.
|
2019/04/18
|
[
"https://Stackoverflow.com/questions/55746170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7274157/"
] |
The issue is one of dimensionality. Your feature is of shape `[..., 1, 512]`; therefore, the `MaxPooling1D` `pool_size` of 2 is bigger than the time dimension of 1, which causes the issue.
Adding `padding="same"` will solve the issue.
```
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(None, 512)))
model.add(tf.keras.layers.Conv1D(128, 1, activation="relu"))
model.add(tf.keras.layers.MaxPooling1D(2, padding="same"))
model.add(tf.keras.layers.LSTM(64, activation="tanh"))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(6))
model.add(tf.keras.layers.Activation("softmax"))
```
|
**padding="same"** should solve your issue.
Change the line below:
`model.add(tf.keras.layers.MaxPooling1D(2, padding="same"))`
| 5,944
|
23,793,774
|
```
omnia@ubuntu:~$ psql --version
psql (PostgreSQL) 9.3.4
omnia@ubuntu:~$ pg_dump --version
pg_dump (PostgreSQL) 9.2.8
omnia@ubuntu:~$ dpkg -l | grep pg
ii gnupg 1.4.11-3ubuntu2.5 GNU privacy guard - a free PGP replacement
ii gpgv 1.4.11-3ubuntu2.5 GNU privacy guard - signature verification tool
ii libgpg-error0 1.10-2ubuntu1 library for common error values and messages in GnuPG components
ii libpq5 9.3.4-1.pgdg60+1 PostgreSQL C client library
ii pgdg-keyring 2013.2 keyring for apt.postgresql.org
ii postgresql-9.2 9.2.8-1.pgdg60+1 object-relational SQL database, version 9.2 server
ii postgresql-9.3 9.3.4-1.pgdg60+1 object-relational SQL database, version 9.3 server
ii postgresql-client-9.2 9.2.8-1.pgdg60+1 front-end programs for PostgreSQL 9.2
ii postgresql-client-9.3 9.3.4-1.pgdg60+1 front-end programs for PostgreSQL 9.3
ii postgresql-client-common 154.pgdg60+1 manager for multiple PostgreSQL client versions
ii postgresql-common 154.pgdg60+1 PostgreSQL database-cluster manager
ii python-gnupginterface 0.3.2-9.1ubuntu3 Python interface to GnuPG (GPG)
ii unattended-upgrades 0.76ubuntu1 automatic installation of security upgrades
ii update-manager-core 1:0.156.14.13 manage release upgrades
omnia@ubuntu:~$
```
Seems I have both installed, but `pg_dump` is stuck at an older version? Weird, since both are linked to the same "wrapper":
```
omnia@ubuntu:~$ readlink /usr/bin/psql
../share/postgresql-common/pg_wrapper
omnia@ubuntu:~$ readlink /usr/bin/pg_dump
../share/postgresql-common/pg_wrapper
```
What am I doing wrong?
|
2014/05/21
|
[
"https://Stackoverflow.com/questions/23793774",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7595/"
] |
```
sudo rm /usr/bin/pg_dump
sudo ln -s /usr/lib/postgresql/9.3/bin/pg_dump /usr/bin/pg_dump
```
|
The `pgdg60` package suffix leads me to believe these packages are not from the official Ubuntu repository. Try looking into `/etc/apt/sources.list` or `/etc/apt/sources.list.d` and see if you have any third-party PPAs or repositories specified.
Try getting the PostgreSQL packages either from your Ubuntu repo (although these may be a bit out-of-date depending on your Ubuntu version), or from the official Postgres repo (they provide an apt server for Ubuntu/Debian): <https://wiki.postgresql.org/wiki/Apt>
| 5,945
|
64,808,992
|
I am stuck with the code below. Either I cannot find a simple answer to my problem because my search is not narrow enough, or I am just too blind to see it. Anyway, I am looking to put the "+" and "-" buttons to use; they are supposed to do literally what their symbols suggest.
With my level of Python knowledge I can only achieve that by creating a single function for each button, which is a lot of code. I wonder if it is possible to create a loop which could save tons of code and still update the label called "stock" in the same row as the pressed button. At the moment I have assigned random numbers to that label, but in a bigger scope that label will be populated by integers taken from a db.
I will be very grateful if anyone could point me into right direction.
```
import tkinter as tk
from tkinter import Tk
import random
root = tk.Tk()
my_list=dict(AAA=["aa1", "aa2", "aa3"],
BBB=["ab1", "ab2", "ab3", "ab4", "ab5"],
CCC=["ac1", "ac2", "ac3", "ac4", "ac5", "ac6"],
DDD=["ad1", "ad2", "ad3", "ad4", "ad5", "ad6"],
EEE=["ae1", "ae2", "ae3", "ae4", "ae5", "ae6"],
FFF=["af1", "af2", "af3", "af4", "af5", "af6"],
GGG=["ag1", "ag2", "ag3", "ag4", "ag5", "ag6"],
HHH=["ah1", "ah2", "ah3", "ah4", "ah5", "ah6"])
for x, y in enumerate(my_list):
xyz=x*4
tk.Label(root, text=y, width=25, bd=3, relief=tk.GROOVE).grid(row=0, column=xyz,columnspan=4,padx=(0,10))
for xing, ying in enumerate(my_list[y]):
tk.Label(root, text=ying, width=10,relief=tk.SUNKEN).grid(row=xing+1, column=xyz)
stock=tk.Label(root,text=random.randint(0,9), width=5,relief=tk.SUNKEN)
stock.grid(row=xing+1, column=xyz+1)
tk.Button(root, text="+", width=3).grid(row=xing+1, column=xyz+2)
tk.Button(root, text="-", width=3).grid(row=xing+1, column=xyz+3,padx=(0,10))
root.mainloop()
```
|
2020/11/12
|
[
"https://Stackoverflow.com/questions/64808992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11618118/"
] |
Currently, there is one solution: **Real-World Super-Resolution via Kernel Estimation and Noise Injection**. The author proposes a degradation framework, RealSR, which provides realistic images for super-resolution learning. It is a promising method for super-resolving images with shakiness or motion effects.
The method is divided into two stages. The first stage *Realistic Degradation for Super-Resolution*
>
> is to estimate the degradation from real data and generate realistically
> LR images.
>
>
>
The second stage *Super-Resolution Model*
>
> is to train the SR model based on the constructed data.
>
>
>
You can look at this Github article: <https://github.com/jixiaozhong/RealSR>
|
I've also been working in this super-resolution field and found some promising results, though I haven't tried them yet.
The [first paper](https://doi.org/10.1016/j.heliyon.2021.e08341) (license-plate text) implements image enhancement first and then does super-resolution in a later stage.
The [second paper](https://arxiv.org/abs/2106.15368), with its [github](https://github.com/mjq11302010044/TPGSR), uses a text prior to guide the super-resolution network.
| 5,948
|
69,280,273
|
So, I have a list of dicts in python that looks like this:
```
lis =
[
{'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'},
{'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'}
...
]
```
Now I want each individual dict in `lis` to be ordered using the `mapping` of keys that I will provide. For example, if
```
mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'}
```
Then, I want to make my original list of dicts look like this:
```
lis =
[
{'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'},
{'date': '2021-05-07 01:59:37', 'Genre': 20, 'action': 'Notify', 'type': 'Something Else'}
...
]
```
How do I implement this?
|
2021/09/22
|
[
"https://Stackoverflow.com/questions/69280273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11903403/"
] |
You might harness [`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict) for this task as follows
```
import collections
order = ['date', 'Genre', 'action', 'type']
dct1 = {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'}
dct2 = {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'}
odct1 = collections.OrderedDict.fromkeys(order)
odct1.update(dct1)
odct2 = collections.OrderedDict.fromkeys(order)
odct2.update(dct2)
print(odct1)
print(odct2)
```
output:
```
OrderedDict([('date', '2021-05-07 01:59:37'), ('Genre', 10), ('action', 'Notify'), ('type', 'Something')])
OrderedDict([('date', '2021-05-07 01:59:37'), ('Genre', 20), ('action', 'Notify'), ('type', 'Something Else')])
```
Disclaimer: this assumes every dict you want to process has exactly the keys from `order`. This solution works with any Python version that has `collections.OrderedDict`; if you will be using solely `python3.7` or newer, you can use a plain `dict` as follows:
```
order = ['date', 'Genre', 'action', 'type']
dct1 = dict.fromkeys(order)
dct1.update({'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'})
print(dct1)
```
output
```
{'date': '2021-05-07 01:59:37', 'Genre': 10, 'action': 'Notify', 'type': 'Something'}
```
The disclaimer above still holds.
|
Try this:
```
def sort_dct(li, mapping):
    # the mapping's values are the keys in the desired order
    return {v: li[v] for v in mapping.values()}
out = []
mapping = {1:'date', 2:'Genre', 3:'action', 4:'type'}
for li in lis:
out.append(sort_dct(li,mapping))
print(out)
```
Output:
```
[{'date': '2021-05-07 01:59:37',
'Genre': 10,
'action': 'Notify',
'type': 'Something'},
{'date': '2021-05-07 01:59:37',
'Genre': 20,
'action': 'Notify',
'type': 'Something Else'}]
```
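Equivalently, since the mapping's numeric keys encode the desired order, sorting them drives a single comprehension over the whole list (a sketch assuming every dict contains all mapped keys):

```python
mapping = {1: 'date', 2: 'Genre', 3: 'action', 4: 'type'}
lis = [
    {'action': 'Notify', 'type': 'Something', 'Genre': 10, 'date': '2021-05-07 01:59:37'},
    {'action': 'Notify', 'type': 'Something Else', 'Genre': 20, 'date': '2021-05-07 01:59:37'},
]

# sorted(mapping) yields the numeric keys in ascending order; dicts built
# this way (Python 3.7+) keep that insertion order.
ordered = [{mapping[k]: d[mapping[k]] for k in sorted(mapping)} for d in lis]
```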
| 5,949
|
36,534,186
|
I'm having a problem with a Python script I'm writing that calls an exe file (`subprocess.Popen`). I'm redirecting stdout and stderr to PIPE, but I can't read (`subprocess.Popen.stdout.readline()`) any output.
I did try to run the exe file in the Windows CLI and redirect both stdout and stderr... and nothing happens. So I reckon there is no stdout and stderr in this Qt app.
Is there any way I can get at the data that this exe prints on screen (by the way, the application is photivo.exe)?
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36534186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6185152/"
] |
Does this work? Set the alpha of the extra lines to 0 (so they become transparent), using `geom_line`, since `geom_density` uses alpha for fill only. (System problems prevent testing.)
```
ggplotly(
ggplot(diamonds, aes(depth, colour = cut)) +
geom_density() +
geom_line(aes(text = paste("Clarity: ", clarity)), stat="density", alpha=0) +
xlim(55, 70)
)
```
|
I realize that this is an old question, but the main problem here is that you're trying to do something that's logically impossible.
`clarity` and `cut` are two separate dimensions, so you can't simply put `clarity` in a tooltip on the line that's grouped by `cut`, because that line represents diamonds of all different clarities grouped together.
Once you add `clarity` into the mix (via the `text` aesthetic), ggplot rightly separates the various clarities out, so that it has a `clarity` to refer to. You could force it back to grouping just by `cut` by adding `group=cut` to the `aes`, but you'll lose the `clarity` tooltip, because there's no meaningful value of `clarity` when you're grouping just by `cut`; again, each point is all clarities at once.
Richard's solution simply displays both graphs at once, but makes the `clarity`-grouped ones invisible. I'm not sure what the original goal was here, but that doesn't accomplish anything useful, because it just lets you mouse-over invisible peaks in addition to the properly-grouped `cut` bands.
I'm not sure what your original data was, but you simply can't display two dimensions and group by only one of them. You'd either have to use the multiple curves, which accurately represent the second dimension, or flatten the second dimension by doing some sort of summarization of it - in the case of `clarity`, there's not really any sensible summarization you can do, but if it were, say, price, you could display an average.
| 5,951
|
54,900,964
|
Hi, I have a question regarding Python programming for my assignment.
The task is to replace every occurrence of a digit in a given value in a recursive manner, and the final output must be an integer,
i.e. digit_swap(521, 1, 3) --> 523, where 1 is swapped out for 3.
Below is my code, and it works well for s = 0-9 if the final answer is output as a string:
```
def digit_swap(n, d, s):
result = ""
if len(str(n)) == 1:
if str(n) == str(d):
return str(s)
else:
return str(n)
elif str(n)[0] == str(d):
result = result + str(s) + str(digit_swap(str(n)[1:], d, s))
return result
else:
result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s))
return result
```
However, I have trouble making the final output an integer.
The code breaks down when s = 0,
i.e. digit_swap(65132, 1, 0) --> 6532 instead of 65032.
Is there any fix for my code?
```
def digit_swap(n, d, s):
result = ""
if len(str(n)) == 1:
if str(n) == str(d):
return str(s)
else:
return str(n)
elif str(n)[0] == str(d):
result = result + str(s) + str(digit_swap(str(n)[1:], d, s))
return int(result) # Changes
else:
result = result + str(n)[0] + str(digit_swap(str(n)[1:], d, s))
return int(result) # Changes
```
|
2019/02/27
|
[
"https://Stackoverflow.com/questions/54900964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9369481/"
] |
Conversion to string is unnecessary; this can be implemented much more simply:
```
def digit_swap(n, d, s):
if n == 0:
return 0
lower_n = (s if (n % 10) == d else (n % 10))
higher_n = digit_swap(n // 10, d, s) * 10
return higher_n + lower_n
assert digit_swap(521, 1, 3) == 523
assert digit_swap(65132, 1, 0) == 65032
```
|
The cast is the problem: for example `int('032')` evaluates to `32`, so a leading zero is discarded. I suggest not casting; instead, leave the result as a string. If you have to give back an `int`, you should not cast until you return the number; however, you would still discard 0s at the beginning. So, all in all, I would suggest just returning strings instead of ints:
```
return str(result) # instead of return int(result)
```
And call it:
```
int(digit_swap(n,d,s))
```
| 5,952
|
13,083,026
|
Imagine I have a script, let's say `my_tools.py`, that I import as a module. But `my_tools.py` is saved twice: at `C:\Python27\Lib`
and in the directory from which the importing script is run.
Can I change the order in which Python looks for `my_tools.py`? That is, can I make it check first whether the file exists at `C:\Python27\Lib` and, if so, import it from there?
|
2012/10/26
|
[
"https://Stackoverflow.com/questions/13083026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1105929/"
] |
You can manipulate `sys.path` as much as you want... If you wanted to move the current directory so that it is scanned last, then just do `sys.path = sys.path[1:] + sys.path[:1]`. Otherwise, if you want to get into the nitty gritty, the [imp module](http://docs.python.org/library/imp.html) can be used to customise to your heart's content - there's an example on that page, and one at <http://blog.dowski.com/2008/07/31/customizing-the-python-import-system/>
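As a quick sketch of the reordering idea (the `C:\Python27\Lib` path is simply the one from the question):

```python
import sys

lib_dir = r"C:\Python27\Lib"  # directory that should win, per the question

# Rotate the script's own directory (sys.path[0]) to the end, so every
# other entry is scanned before it:
sys.path = sys.path[1:] + sys.path[:1]

# Or, more directly, push the preferred directory to the front:
if lib_dir in sys.path:
    sys.path.remove(lib_dir)
sys.path.insert(0, lib_dir)

print(sys.path[0])  # C:\Python27\Lib
```

Note that the slicing expression alone does nothing; it must be assigned back to `sys.path`.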
|
You can modify [`sys.path`](http://docs.python.org/library/sys.html#sys.path), which will determine the order and locations that Python searches for imports. (Note that you must do this *before* the import statement.)
| 5,954
|
46,092,292
|
I would like to split strings like the following:
```
x <- "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]"
```
by dash (`-`) on the condition that those dashes must not be contained inside a pair of `[]`. The expected result would be
```
c("abc", "1230", "xyz", "[def-ghu-jkl---]", "[adsasa7asda12]", "s",
"[klas-bst-asdas foo]")
```
Notes:
* There is no nesting of square brackets inside each other.
* The square brackets can contain any characters / numbers / symbols except square brackets.
* The other parts of the string are also variable so that we can only assume that we split by `-` whenever it's not inside `[]`.
There's a similar question for python ([How to split a string by commas positioned outside of parenthesis?](https://stackoverflow.com/questions/1648537)) but I haven't yet been able to accurately adjust that to my scenario.
|
2017/09/07
|
[
"https://Stackoverflow.com/questions/46092292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3521006/"
] |
You could use look ahead to verify that there is no `]` following sooner than a `[`:
[`-(?![^[]*\])`](https://regex101.com/r/x0WbVt/2)
So in R:
```
strsplit(x, "-(?![^[]*\\])", perl=TRUE)
```
### Explanation:
* `-`: match the hyphen
* `(?! )`: negative look ahead: if that part is found after the previously matched hyphen, it invalidates the match of the hyphen.
+ `[^[]`: match any character that is not a `[`
+ `*`: match any number of the previous
+ `\]`: match a literal `]`. If this matches, it means we found a `]` before finding a `[`. As all this happens in a negative look ahead, a match here means the hyphen is *not* a match. Note that a `]` is a special character in regular expressions, so it must be escaped with a backslash (although it *does* work without escape, as the engine knows there is no matching `[` preceding it -- but I prefer to be clear about it being a literal). And as backslashes have a special meaning in string literals (they also denote an escape), that backslash itself must be escaped again in this string, so it appears as `\\]`.
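For illustration only (not part of the original answer), the same look-ahead works in Python's `re` module:

```python
import re

x = "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]"

# Split on '-' only when no ']' appears before the next '[', i.e. when the
# hyphen is not inside a bracket pair.
parts = re.split(r"-(?![^\[]*\])", x)
print(parts)
# -> ['abc', '1230', 'xyz', '[def-ghu-jkl---]', '[adsasa7asda12]', 's',
#     '[klas-bst-asdas foo]']
```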
|
I am not familiar with the R language, but I believe it can do regex-based search and replace. Instead of struggling with one single regex split, I would go in 3 steps:
* replace `-` in all `[....]` parts with an unlikely placeholder char, like `\x99`
* split by `-`
* for each element in the above split result (array/list), replace `\x99` back to `-`
For the first step, you can find the bracketed parts with `\[[^]]*\]`
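A rough Python sketch of those three steps, assuming the `\x99` placeholder never occurs in the data:

```python
import re

x = "abc-1230-xyz-[def-ghu-jkl---]-[adsasa7asda12]-s-[klas-bst-asdas foo]"
PLACEHOLDER = "\x99"

# 1. Inside every [...] group, swap '-' for the placeholder.
masked = re.sub(r"\[[^\]]*\]",
                lambda m: m.group(0).replace("-", PLACEHOLDER), x)

# 2. Split on the remaining hyphens (all outside brackets now).
# 3. Swap the placeholder back to '-' in each piece.
parts = [p.replace(PLACEHOLDER, "-") for p in masked.split("-")]
print(parts)
```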
| 5,956
|
57,398,668
|
I don't have a picture, but I am asking this question because I am a beginner using Python.
|
2019/08/07
|
[
"https://Stackoverflow.com/questions/57398668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11897155/"
] |
`input()` takes user input as a string. It's very safe.
```
>>> usr = input('Enter some input: ')
Enter some input: hello, world
>>> usr
'hello, world'
```
`eval()` will execute a string as if it were python code. It's very dangerous.
```
>>> eval(input('Make it happen!'))
Make it happen! print('hello')
hello
>>> eval(input('Make it happen!'))
Make it happen! os.system('echo malicious things')
```
And now you've really messed up your computer.
|
`eval()` is used to evaluate an expression and `input()` is used to take user input. Here are the examples:
```
# evaluates an expression
>>> eval('5+2')
7
# takes user input (returned as a string)
>>> input()
10 (user enters)
'10'
# evaluates user input
>>> eval(input())
15 (user enters)
15
```
| 5,959
|
37,776,724
|
I've just completed [Tatiana Tylosky's tutorial for Python](https://www.thinkful.com/learn/intro-to-python-tutorial/#Creating-Your-Pypet) and created my own Python pypet.
In her tutorial, she shows how to do a "for" loop consisting of:
```
cat = {
'name': 'Fluffy',
'hungry': True,
'weight': 9.5,
'age': 5,
'photo': '(=^o.o^=)__',
}
mouse = {
'name': 'Mouse',
'age': 6,
'weight': 1.5,
'hungry': False,
'photo': '<:3 )~~~~',
}
pets = [cat, mouse]
def feed(pet):
if pet['hungry'] == True:
pet['hungry'] = False
pet['weight'] = pet['weight'] + 1
else:
print 'The Pypet is not hungry!'
for pet in pets:
feed(pet)
print pet
```
**I'd like to know how to repeat this "for" loop so that I feed both the cat and the mouse three times.** Most of the Python guides I've read say that you have to do something like:
```
for i in range(0, 6):
```
In this case, however, the "for" loop uses the list "pets." So the above code can't be used? What should I do? I've tried some wacky-looking things like:
```
for pet in pets(1,4):
feed(pet)
print pet
```
Or:
```
for pet in range(1,4):
feed(pet)
print pet
```
Naturally it doesn't work. What should I do to get the "for" loop to repeat?
|
2016/06/12
|
[
"https://Stackoverflow.com/questions/37776724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6456667/"
] |
I would enclose your feeding `for` loop in an outer `for` loop that iterates three times, something like:
```
for _ in range(3):
for pet in pets:
feed(pet)
print pet
```
`for _ in range(3)` iterates three times. Note that I used `_` because you are not using the iteration variable, see e.g. [What is the purpose of the single underscore "\_" variable in Python?](https://stackoverflow.com/q/5893163/3001761)
|
Programming languages let you embed one structure in another. Put your current loop under a for loop that runs three times, as @intboolstring's answer already showed. Here are two more things you should do now:
1. Don't compare against `True`. `if pet['hungry'] == True:` is better written as
```
if pet['hungry']:
    ...
```
2. Switch to Python 3. Why are you learning an outdated version of the language?
| 5,960
|
59,391,988
|
I am trying to set up a dockerized production environment for a Flask application with gunicorn. I am following [these DigitalOcean instructions](https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-18-04) together with [this testdriven.io guide](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) for dockerizing.
The project structure is as follows:
```
tree -L 2
.
├── Docker
│ ├── Dockerfile
│ ├── Dockerfile-nginx
│ └── nginx.conf
├── dev-requirements.txt
├── docker-compose.prod.yml
├── docker-compose.yml
├── gunicorn_conf.py
├── requirements.txt
├── setup.cfg
├── src
│ ├── __pycache__
│ ├── config.py
│ ├── main.py
│ ├── models.py
│ ├── tests.py
│ ├── views.py
│ └── wsgi.py
└── venv
├── bin
├── include
├── lib
└── pip-selfcheck.json
7 directories, 16 files
```
The config resides in `docker-compose.prod.yml`:
```
version: "3.7"
services:
web:
build:
context: .
dockerfile: Docker/Dockerfile
env_file:
- .web.env
ports:
- "5000:5000"
depends_on:
- db
command: gunicorn wsgi:app -c ../gunicorn_conf.py
working_dir: /app/src
db:
image: "postgres:11"
volumes:
- simple_app_data:/var/lib/postgresql/data
env_file:
- .db.env
volumes:
simple_app_data:
```
Contents of `gunicorn_conf.py`:
```
bind = "0.0.0.0:5000"
workers = 2
```
And `wsgi.py`:
```
from main import app
print('*'*10)
print(__name__)
print('*'*10+'\n')
if __name__ == '__main__':
app.run()
```
When I try to run this configuration with `docker-compose -f docker-compose.prod.yml build --force-rm --no-cache web && docker-compose -f docker-compose.prod.yml run web`, I get the following logs:
```
Starting simple_app_db_1 ... done
[2019-12-18 12:15:45 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2019-12-18 12:15:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2019-12-18 12:15:45 +0000] [1] [INFO] Using worker: sync
[2019-12-18 12:15:45 +0000] [9] [INFO] Booting worker with pid: 9
[2019-12-18 12:15:45 +0000] [10] [INFO] Booting worker with pid: 10
/usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
**********
wsgi
**********
/usr/local/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
**********
wsgi
**********
```
So the `wsgi.py` file is not the `__main__`. However, when I try to get rid of this `if`:
```
from main import app
print('*'*10)
print(__name__)
print('*'*10+'\n')
app.run()
```
I get:
```
OSError: [Errno 98] Address already in use
```
How can I correct this config to use gunicorn?
|
2019/12/18
|
[
"https://Stackoverflow.com/questions/59391988",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4765864/"
] |
1. Close `Visual Studio`.
2. Delete the `*.testlog` files in:
*solutionfolder*\.vs\*solution name*\v16\TestStore\*number*.
|
I faced the same issue just now. A cleanup helped. As I have had cleanup issues with VS recently (a DB lock prevented a real cleanup from happening), my working cleanup was this:
1. Close VS.
2. Git Bash in the solution folder: `git clean -xfd`
Hopefully it helps.
| 5,962
|
52,305,075
|
Per [Google's Cloud Datastore Emulator installation instructions](https://cloud.google.com/datastore/docs/tools/datastore-emulator), I was able to install and run the emulator in a *bash* terminal window without problem with `gcloud beta emulators datastore start --project gramm-id`.
I also setup the environment variables, [per the instructions](https://cloud.google.com/datastore/docs/tools/datastore-emulator#automatically_setting_the_variables), in another terminal with `$(gcloud beta emulators datastore env-init)` and verified they were defined.
However, when I run my python script to add an entity to the local datastore with this code:
```py
from google.cloud import datastore
print(os.environ['DATASTORE_HOST']) # output: http://localhost:8081
print(os.environ['DATASTORE_EMULATOR_HOST']) # output: localhost:8081
client = datastore.Client('gramm-id')
kind = 'Task'
name = 'simpleTask'
task_key = client.key(kind, name)
task = datastore.Entity(key=task_key)
task['description'] = 'Buy milk'
client.put(task)
```
I get the error:
```
Traceback (most recent call last):
File "tools.py", line 237, in <module>
client = datastore.Client('gramm-id')
File "/home/.../lib/python3.6/site-packages/google/cloud/datastore/client.py", line 205, in __init__
project=project, credentials=credentials, _http=_http)
... long stack trace ....
File "/home/.../lib/python3.6/site-packages/google/auth/_default.py", line 306, in default
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://developers.google.com/accounts/docs/application-default-credentials.
```
I don't think I need to [create a GCP service account and provide access credentials](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) to use the datastore emulator on my machine.
My system:
* Ubuntu 18.04
* Anaconda python 3.6.6
* Google Cloud SDK 215.0.0
* cloud-datastore-emulator 2.0.2.
What am I missing?
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52305075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1181911/"
] |
> gcloud auth application-default login
This will prompt you to login through a browser window and will set your GOOGLE\_APPLICATION\_CREDENTIALS correctly for you. [[1]](https://cloud.google.com/docs/authentication/production#calling)
|
In theory you should be able to use mock credentials, e.g.:
```
import google.auth.credentials
import requests
from google.cloud import datastore

class EmulatorCreds(google.auth.credentials.Credentials):
    def __init__(self):
        self.token = b'secret'
        self.expiry = None

    @property
    def valid(self):
        return True

    def refresh(self, _):
        raise RuntimeError('Should never be refreshed.')

client = datastore.Client(
    project='gramm-id',
    credentials=EmulatorCreds(),
    _http=requests.Session()  # un-authorized session
)
```
However [it seems like this doesn't currently work](https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3920), so for now you'll need to set `GOOGLE_APPLICATION_CREDENTIALS`
| 5,971
|
42,068,203
|
I am learning to use scrapinghub.com, which runs Python 2.x.
I have written a script which uses Scrapy, and I have crawled a string like the one below:
```
%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23ff0000%3Bfont-size%3A20pt%3Btext-align%3Acenter%3Bfont-weight%3Abold%22%3E%0D%0A%09%E6%84%9B%E8%BF%AA%E9%81%94%20adidas%20Energy%20Boost%20%E8%B7%AF%E8%B7%91%20%E4%BD%8E%E7%AD%92%20%E9%81%8B%E5%8B%95%20%E4%BC%91%E9%96%92%20%E8%B7%91%E9%9E%8B%20%E8%B7%91%E6%AD%A5%20%E6%85%A2%E8%B7%91%20%E9%A6%AC%E6%8B%89%E6%9D%BE%20%E5%81%A5%E8%BA%AB%E6%88%BF%20%E6%B5%81%E8%A1%8C%20%E7%90%83%E9%9E%8B%20%E5%A5%B3%E8%A3%9D%20%E5%A5%B3%E6%AC%BE%20%E5%A5%B3%20%E5%A5%B3%E9%9E%8B%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A14pt%3Btext-align%3Acenter%22%3E%0D%0A%09%EF%BC%8A%E9%9D%88%E6%B4%BB%E3%80%81%E8%BC%95%E9%87%8F%E3%80%81%E8%88%92%E9%81%A9%E5%85%BC%E5%85%B7%E7%9A%84%E9%81%B8%E6%93%87%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E7%B4%84%E7%8F%BE%E4%BB%A3%E7%9A%84%E7%94%A2%E5%93%81%E8%A8%AD%E8%A8%88%2C%E5%B9%B4%E8%BC%95%E5%A4%9A%E6%A8%A3%E5%8C%96%E7%9A%84%E9%85%8D%E8%89%B2%E6%96%B9%E6%A1%88%2C%E6%9B%B4%E7%82%BA%E7%AC%A6%E5%90%88%E5%B9%B4%E8%BC%95%E6%B6%88%E8%B2%BB%E8%80%85%E7%9A%84%E5%AF%A9%E7%BE%8E%E5%81%8F%E5%A5%BD%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E5%96%AE%E7%9A%84%E7%B7%9A%E6%A2%9D%E5%92%8C%E4%B9%BE%E6%B7%A8%E7%9A%84%E8%A8%AD%E8%A8%88%2C%E6%8F%90%E4%BE%9B%E4%BA%86%E7%8D%A8%E7%89%B9%E7%9A%84%E7%A9%BF%E6%90%AD%E7%B5%84%E5%90%88%3Cbr%20%2F%3E%EF%BC%8A%E9%80%8F%E6%B0%A3%E8%88%87%E4%BF%9D%E8%AD%B7%E6%80%A7%2C%E7%B5%90%E5%90%88%E4%BA%86ADIDAS%E7%9A%84%E5%89%B5%E6%96%B0%E7%A7%91%E6%8A%80%2C%E5%89%B5%E9%80%A0%E4%BA%86%E5%AE%8C%E7%BE%8E%E7%9A%84%E7%94%A2%E5%93%81%3Cbr%20%2F%3E%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%
3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F2B558E585E39649599A9A266349EABD17A4ABC18%22%20%2F%3E%3C%2Fdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F0F1A6CBFE6F6631189D491A17A2A2E7C388F194E%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2FA0C9B09CAC784E2CA81A572E8F9F2E5721812607%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E
```
Which always gives me the following:
```html
<table width="100%"> <tr><td><p style="color:#fa6b81;font-size:18pt;text-align:center;font-weight:bold">(女) æè¿ªé ADIDAD ENERGY CLOUD W éæ°£ç¶²å¸ ç¾æ é» èè·ç¶ ä¼éé æ¢è·é</p></td></tr> <tr><td><p style="color:#000000;font-size:12pt;text-align:center"><font color="BLUE">â»æ¬è³£å ´åççºYAHOOè³¼ç©ä¸å¿å°ç¨ï¼å¶å®å¹³å°è¥ä½¿ç¨æ¬ç«ç¸éåç~ç屬侵æ¬!!</font><BR><BR></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/739F6D54CD0AA4440D67A8BF0E569B0229AB1B37" /></div></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/91D28279378AF5E3C26740855775ECAD3A7F4A6B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/B2237D69C0886CCF330AFA459E3C03BB4454D01B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/B60D486A89EDBAFBFE824F00309D069517654050" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/57EAC1C8B09A019AC734F50FB51DB87D0B319002" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div align="center"><img src="https://s.yimg.com/wb/images/CEC5C31984853968755AE7465BCB251C82676B0B" /><div></td></tr> <tr><td></td></tr> </table><table width="100%"> <tr><td><p style="color:#000000;font-size:12pt;text-align:left;font-weight:100"></p></td></tr> <tr><td><div 
align="center"><img src="https://s.yimg.com/wb/images/B065DFBACAEC5ABED898492265DEB710EA052358" /><div></td></tr> <tr><td></td></tr> </table>
```
I always get garbage text like `(女) æè¿ªé ADIDAD ENERGY CLOUD W éæ°£ç¶²å¸`
The conversion code from url encoded text to unicode is like below
```
import re
from urllib import unquote  # Python 2

special_text = re.sub("<.*?>", "", special_text)
special_text = re.sub("<!--", "", special_text)
special_text = re.sub("-->", "", special_text)
special_text = re.sub("\n", "", special_text)
special_text = special_text.strip()
special_text = unquote(special_text)
special_text = re.sub("\n", "", special_text)
special_text = re.sub("\r", "", special_text)
special_text = re.sub("\t", "", special_text)
special_text = u' '.join((special_text, '')).encode('utf-8').strip()
```
I have tried a lot of different things, like
```
special_text = special_text.encode('utf-8')
special_text = special_text.decode('utf-8')
```
which either give me an error or still the garbage text.
What is the proper way to convert this to Unicode?
|
2017/02/06
|
[
"https://Stackoverflow.com/questions/42068203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/339229/"
] |
Your data is perfectly valid UTF-8, URL-encoded. Your output indicates you are looking at a [Mojibake](https://en.wikipedia.org/wiki/Mojibake), where your own software (console, terminal, text editor) is using a *different* codec to interpret the UTF-8 data. I suspect your setup is using CP-1254:
```
>>> print text.encode('utf8').decode('sloppy-cp1254') # codec from the ftfy project
æ„›è¿ªé” adidas Energy Boost 路跑 ä½ç’ é‹å‹• 休閒 è·‘é‹ è·‘æ¥ æ…¢è·‘ é¦¬æ‹‰æ¾ å¥èº«æˆ¿ æµè¡Œ çƒé‹ å¥³è£ å¥³æ¬¾ 女 女é‹
ï¼Šéˆæ´»ã€è¼•é‡ã€èˆ’é©å…¼å…·çš„鏿“‡
*簡約ç¾ä»£çš„產å“è¨è¨ˆ,年輕多樣化的é…色方案,更為符åˆå¹´è¼•消費者的審ç¾å好
*簡單的線æ¢å’Œä¹¾æ·¨çš„è¨è¨ˆ,æä¾›äº†ç¨ç‰¹çš„ç©¿æçµ„åˆ
ï¼Šé€æ°£èˆ‡ä¿è·æ€§,çµåˆäº†ADIDAS的創新科技,å‰µé€ äº†å®Œç¾çš„產å“
```
If you don't know how to fix your terminal, I suggest you write the data to a file instead and open it with an editor that you can tell which codec to use when reading the data:
```
import io

# Note the 'w' mode; without it the file would be opened for reading.
with io.open('somefilename.txt', 'w', encoding='utf8') as f:
    f.write(unicode_value)
```
I also strongly recommend you use an actual HTML parser to handle the data, and not rely on regular expressions. The following code for Python 2 and 3 produces a Unicode value with the textual information from your URL:
```
from bs4 import BeautifulSoup
try:
from urllib import unquote
except ImportError:
from urllib.parse import unquote
soup = BeautifulSoup(unquote(special_text), 'html.parser') # consider installing lxml instead
text = soup.get_text('\n', strip=True) # put newlines between sections
print(text)
```
For your input, on my Mac OSX terminal configured for handling Unicode text as UTF-8, I see:
```none
愛迪達 adidas Energy Boost 路跑 低筒 運動 休閒 跑鞋 跑步 慢跑 馬拉松 健身房 流行 球鞋 女裝 女款 女 女鞋
*靈活、輕量、舒適兼具的選擇
*簡約現代的產品設計,年輕多樣化的配色方案,更為符合年輕消費者的審美偏好
*簡單的線條和乾淨的設計,提供了獨特的穿搭組合
*透氣與保護性,結合了ADIDAS的創新科技,創造了完美的產品
```
|
I don't know why, but for some reason I got it to work on scrapinghub.com as shown below.
Let's say I have HTML text like:
```
<html>
<div class="a">
Some chinese text
</div>
<div class="b">
QUOTED text got chinese in it
%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23ff0000%3Bfont-size%3A20pt%3Btext-align%3Acenter%3Bfont-weight%3Abold%22%3E%0D%0A%09%E6%84%9B%E8%BF%AA%E9%81%94%20adidas%20Energy%20Boost%20%E8%B7%AF%E8%B7%91%20%E4%BD%8E%E7%AD%92%20%E9%81%8B%E5%8B%95%20%E4%BC%91%E9%96%92%20%E8%B7%91%E9%9E%8B%20%E8%B7%91%E6%AD%A5%20%E6%85%A2%E8%B7%91%20%E9%A6%AC%E6%8B%89%E6%9D%BE%20%E5%81%A5%E8%BA%AB%E6%88%BF%20%E6%B5%81%E8%A1%8C%20%E7%90%83%E9%9E%8B%20%E5%A5%B3%E8%A3%9D%20%E5%A5%B3%E6%AC%BE%20%E5%A5%B3%20%E5%A5%B3%E9%9E%8B%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A14pt%3Btext-align%3Acenter%22%3E%0D%0A%09%EF%BC%8A%E9%9D%88%E6%B4%BB%E3%80%81%E8%BC%95%E9%87%8F%E3%80%81%E8%88%92%E9%81%A9%E5%85%BC%E5%85%B7%E7%9A%84%E9%81%B8%E6%93%87%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E7%B4%84%E7%8F%BE%E4%BB%A3%E7%9A%84%E7%94%A2%E5%93%81%E8%A8%AD%E8%A8%88%2C%E5%B9%B4%E8%BC%95%E5%A4%9A%E6%A8%A3%E5%8C%96%E7%9A%84%E9%85%8D%E8%89%B2%E6%96%B9%E6%A1%88%2C%E6%9B%B4%E7%82%BA%E7%AC%A6%E5%90%88%E5%B9%B4%E8%BC%95%E6%B6%88%E8%B2%BB%E8%80%85%E7%9A%84%E5%AF%A9%E7%BE%8E%E5%81%8F%E5%A5%BD%3Cbr%20%2F%3E%EF%BC%8A%E7%B0%A1%E5%96%AE%E7%9A%84%E7%B7%9A%E6%A2%9D%E5%92%8C%E4%B9%BE%E6%B7%A8%E7%9A%84%E8%A8%AD%E8%A8%88%2C%E6%8F%90%E4%BE%9B%E4%BA%86%E7%8D%A8%E7%89%B9%E7%9A%84%E7%A9%BF%E6%90%AD%E7%B5%84%E5%90%88%3Cbr%20%2F%3E%EF%BC%8A%E9%80%8F%E6%B0%A3%E8%88%87%E4%BF%9D%E8%AD%B7%E6%80%A7%2C%E7%B5%90%E5%90%88%E4%BA%86ADIDAS%E7%9A%84%E5%89%B5%E6%96%B0%E7%A7%91%E6%8A%80%2C%E5%89%B5%E9%80%A0%E4%BA%86%E5%AE%8C%E7%BE%8E%E7%9A%84%E7%94%A2%E5%93%81%3Cbr%20%2F%3E%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%
3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F2B558E585E39649599A9A266349EABD17A4ABC18%22%20%2F%3E%3C%2Fdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2F0F1A6CBFE6F6631189D491A17A2A2E7C388F194E%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E%3Ctable%20width%3D%22100%25%22%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cp%20style%3D%22color%3A%23000000%3Bfont-size%3A12pt%3Btext-align%3Aleft%3Bfont-weight%3A100%22%3E%0D%0A%09%0D%0A%3C%2Fp%3E%0D%0A%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3Cdiv%20align%3D%22center%22%3E%3Cimg%20src%3D%22https%3A%2F%2Fs.yimg.com%2Fwb%2Fimages%2FA0C9B09CAC784E2CA81A572E8F9F2E5721812607%22%20%2F%3E%3Cdiv%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3Ctr%3E%3Ctd%3E%3C%2Ftd%3E%3C%2Ftr%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2Ftable%3E
</div>
</html>
```
So I parse it, assigning the contents of class="a" to variable AAA and class="b" to variable BBB.
If I want to unquote BBB and have the Chinese characters display correctly, I do the following:
```
BBB = u' '.join((BBB, ''))
BBB = BBB.encode('ascii')
BBB = unquote(BBB)
```
So when I output both AAA and BBB on scrapinghub, they both display the Chinese text correctly.
I just want to point out that Martijn Pieters' answer is also correct when I do this locally on my Mac; I'm just not sure what is going on in scrapinghub that makes the above necessary.
| 5,972
|
9,851,156
|
I am managing a quite large Python code base (>2000 lines) that I nevertheless want to be available as a single runnable Python script. So I am searching for a method or a tool to merge a development folder, made of different Python files, into a single running script.
The thing/method I am searching for should take code split into different files, maybe with a starting `__init__.py` file that contains the imports, and merge it into a single, big script.
Much like a preprocessor. A near-native way would be best, and ideally I could still run the code from the dev folder.
I have already checked out pypp and pypreprocessor, but they don't seem to address this.
Something like a strange use of `__import__()`, or maybe a bunch of `from foo import *` statements replaced by the preprocessor with the actual code? Obviously I only want to merge my own directory, not common libraries.
**Update**
What I want is exactly this: to maintain the code as a package, and then be able to "compile" it into a single script that is easy to copy-paste, distribute, and reuse.
|
2012/03/24
|
[
"https://Stackoverflow.com/questions/9851156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/749014/"
] |
It sounds like you're asking how to merge your codebase into a single 2000-plus-line source file. Are you really, really sure you want to do this? It will make your code harder to maintain. Python files correspond to modules, so unless your main script does `from modname import *` for all its parts, you'll lose the module structure by converting it into one file.
What I would recommend is leaving the source structured as they are, and solving the problem of how to *distribute* the program:
1. You could use [PyInstaller](https://stackoverflow.com/a/112713/699305), py2exe or something similar to generate a single executable that doesn't even need a python installation. (If you can count on python being present, see @Sebastian's comment below.)
2. If you want to distribute your code base for use by other python programs, you should definitely start by structuring it as a package, so it can be loaded with a single `import`.
3. To distribute a lot of python source files easily, you can package everything into a zip archive or an "egg" (which is actually a zip archive with special housekeeping info). Python can import modules directly from a zip or egg archive.
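As a small illustration of the last point (the file and module names here are made up), Python can import straight from a zip archive placed on `sys.path`:

```python
import sys
import zipfile

# Bundle one module into a zip archive...
with zipfile.ZipFile("bundle.zip", "w") as zf:
    zf.writestr("mytools.py", "def greet():\n    return 'hello'\n")

# ...then import it directly from the archive.
sys.path.insert(0, "bundle.zip")
import mytools

print(mytools.greet())  # hello
```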
|
[waffles](https://bitbucket.org/ArneBab/waffles) seems to do exactly what you're after, although I've not tried it.
You could probably do this manually, something like:
```
# file1.py
from .file2 import func1, func2
def something():
func1() + func2()
# file2.py
def func1(): pass
def func2(): pass
# __init__.py
from .file1 import something
if __name__ == "__main__":
something()
```
Then you can concatenate all the files together, removing any line starting with `from .`, and... it might work.
That said, an executable egg or regular PyPI distribution would be much simpler and more reliable!
| 5,973
|
61,452,787
|
I cannot install Django 3 on my Debian 9 system.
I followed this guide, <https://www.rosehosting.com/blog/how-to-install-python-3-6-4-on-debian-9/>, to install Python 3, because the Debian repositories don't carry a recent enough Python 3:
```sh
:~# python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
```
```sh
~# pip3 -V
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.5)
```
```sh
~# pip3 install Django==3.0.5
Collecting Django==3.0.5
Could not find a version that satisfies the requirement Django==3.0.5 (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29, 2.0a1, 2.0b1, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10, 2.0.12, 2.0.13, 2.1a1, 2.1b1, 2.1rc1, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.7, 2.1.8, 2.1.9, 2.1.10, 2.1.11, 2.1.12, 2.1.13, 2.1.14, 2.1.15, 2.2a1, 2.2b1, 2.2rc1, 2.2, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.2.11, 2.2.12)
No matching distribution found for Django==3.0.5
```
|
2020/04/27
|
[
"https://Stackoverflow.com/questions/61452787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4003516/"
] |
For the latest versions of Django you must be using Python 3.6, 3.7, or 3.8; you're currently using 3.5.
<https://docs.djangoproject.com/en/3.0/faq/install/#faq-python-version-support>
|
Install python3-venv with:
```
sudo apt install python3-venv
```
and then create a project directory with a virtual environment:
```
mkdir my_django_app
cd my_django_app; python3 -m venv venv
```
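If it helps, a hypothetical follow-up session: once the virtual environment exists, activating it makes `pip` install into it rather than into the system Python (the Django line is commented out because it needs a 3.6+ interpreter and network access):

```shell
mkdir -p my_django_app && cd my_django_app
python3 -m venv venv          # use the newly installed python3.6 here if available
. venv/bin/activate           # python/pip now resolve inside venv/
python --version              # reports the interpreter the venv was built from
# pip install Django==3.0.5   # succeeds once the interpreter is 3.6+
```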
ref: <https://linuxize.com/post/how-to-install-django-on-debian-9>
| 5,974
|
39,849,641
|
I am using `flask migrate` for database creation and migration in Flask with flask-sqlalchemy.
Everything was working fine until I changed my database user password to one containing '@'; then it stopped working, so I updated my code based on
[Writing a connection string when password contains special characters](https://stackoverflow.com/questions/1423804/writing-a-connection-string-when-password-contains-special-characters)
It works for the application, but not for flask-migrate; it shows an error while migrating,
i.e. on `python manage.py db migrate`:
```
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```
Here password is `p@ssword` and its escaped by `urlquote` (see above question link).
Full error stack:
```
Traceback (most recent call last):
File "manage.py", line 20, in <module>
manager.run()
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/usr/local/lib/python2.7/dist-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_migrate/__init__.py", line 177, in migrate
version_path=version_path, rev_id=rev_id)
File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 117, in revision
script_directory.run_env()
File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 407, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "migrations/env.py", line 22, in <module>
current_app.config.get('SQLALCHEMY_DATABASE_URI'))
File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 218, in set_main_option
self.set_section_option(self.config_ini_section, name, value)
File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 245, in set_section_option
self.file_config.set(section, name, value)
File "/usr/lib/python2.7/ConfigParser.py", line 752, in set
"position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in u'mysql://user:p%40ssword@localhost/testdb' at position 15
```
Please help
|
2016/10/04
|
[
"https://Stackoverflow.com/questions/39849641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/873416/"
] |
I have a solution for this issue after experiencing it as well.
There's an issue with '%' (percent signs) in the DB connection URI after you urlencode the string.
I tried substituting the percent sign with double percent signs ('%%'), which gets past the interpolation error. However, that resulted in not being able to connect to the database because of an incorrect password.
The solution I'm going with for now is to avoid using '%' in my DB password. Not a satisfactory solution, but it will do for now. I'll note the issue on alembic's GitHub; it seems using `RawConfigParser` in their package could avoid this problem.
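The interpolation behaviour can be reproduced with `configparser` alone, which shows why doubling the percent signs gets the URI past the parser (a sketch; the URI and section name are placeholders):

```python
from configparser import ConfigParser  # the ConfigParser module on Python 2

uri = "mysql://user:p%40ssword@localhost/testdb"

cp = ConfigParser()
cp.add_section("alembic")
# storing the raw '%40' would raise the "invalid interpolation syntax" error;
# '%%' is ConfigParser's escape for a literal percent sign
cp.set("alembic", "sqlalchemy.url", uri.replace("%", "%%"))
# reading the value back undoes the escaping
print(cp.get("alembic", "sqlalchemy.url"))  # mysql://user:p%40ssword@localhost/testdb
```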
|
You may want to look at <http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#mysql-unicode>
I was having the same issue with my password and the mysql connector. Using the `mysql+pymysql` connector allowed me to connect both in the application and in the migration scripts.
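Building the URI that way could be sketched like this (the credentials and database name are placeholders):

```python
from urllib.parse import quote_plus  # urllib.quote on Python 2

password = "p@ssword"
# percent-encode the password, then use the pymysql driver
uri = "mysql+pymysql://user:%s@localhost/testdb" % quote_plus(password)
print(uri)  # mysql+pymysql://user:p%40ssword@localhost/testdb
```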
| 5,975
|
36,329,606
|
This example was picked from the bokeh documentation,
but it raises an AttributeError.
I am using IPython in an Anaconda environment.
```
import pandas as pd
from bokeh.charts import TimeSeries, output_file, show
AAPL = pd.read_csv(
"http://ichart.yahoo.com/table.csv?s=AAPL&a=0&b=1&c=2000&d=0&e=1&f=2010",
parse_dates=['Date'])
output_file("timeseries.html")
data = dict(AAPL=AAPL['Adj Close'], Date=AAPL['Date'])
p = TimeSeries(data, index='Date', title="APPL", ylabel='Stock Prices')
show(p)
AttributeError Traceback (most recent call last)
<ipython-input-3-fe34a9860ab7> in <module>()
10 data = dict(AAPL=AAPL['Adj Close'], Date=AAPL['Date'])
11
---> 12 p = TimeSeries(data, index='Date', title="APPL", ylabel='Stock Prices')
13
14 show(p)
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\builders\timeseries_builder.py in TimeSeries(data, x, y, builder_type, **kws)
100 kws['x'] = x
101 kws['y'] = y
--> 102 return create_and_build(builder_type, data, **kws)
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\builder.py in create_and_build(builder_class, *data, **kws)
64 # create a chart to return, since there isn't one already
65 chart_kws = { k:v for k,v in kws.items() if k not in builder_props}
---> 66 chart = Chart(**chart_kws)
67 chart.add_builder(builder)
68 chart.start_plot()
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\charts\chart.py in __init__(self, *args, **kwargs)
123 # supported types
124 tools = kwargs.pop('tools', None)
--> 125 super(Chart, self).__init__(*args, **kwargs)
126 defaults.apply(self)
127 if tools is not None:
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\models\plots.py in __init__(self, **kwargs)
76 raise ValueError("Conflicting properties set on plot: background_fill, background_fill_color.")
77
---> 78 super(Plot, self).__init__(**kwargs)
79
80 def select(self, *args, **kwargs):
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\model.py in __init__(self, **kwargs)
75 self._id = kwargs.pop("id", make_id())
76 self._document = None
---> 77 super(Model, self).__init__(**kwargs)
78 default_theme.apply_to_model(self)
79
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\core\properties.py in __init__(self, **properties)
699
700 for name, value in properties.items():
--> 701 setattr(self, name, value)
702
703 def __setattr__(self, name, value):
C:\Users\Bhaskara\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\core\properties.py in __setattr__(self, name, value)
720
721 raise AttributeError("unexpected attribute '%s' to %s, %s attributes are %s" %
--> 722 (name, self.__class__.__name__, text, nice_join(matches)))
723
724 def set_from_json(self, name, json, models=None):
AttributeError: unexpected attribute 'index' to Chart, possible attributes are above, background_fill_alpha, background_fill_color, below, border_fill_alpha, border_fill_color, disabled, extra_x_ranges, extra_y_ranges, h_symmetry, height, hidpi, left, legend, lod_factor, lod_interval, lod_threshold, lod_timeout, logo, min_border, min_border_bottom, min_border_left, min_border_right, min_border_top, name, outline_line_alpha, outline_line_cap, outline_line_color, outline_line_dash, outline_line_dash_offset, outline_line_join, outline_line_width, plot_height, plot_width, renderers, responsive, right, tags, title, title_standoff, title_text_align, title_text_alpha, title_text_baseline, title_text_color, title_text_font, title_text_font_size, title_text_font_style, tool_events, toolbar_location, tools, v_symmetry, webgl, width, x_mapper_type, x_range, xgrid, xlabel, xscale, y_mapper_type, y_range, ygrid, ylabel or yscale
```
|
2016/03/31
|
[
"https://Stackoverflow.com/questions/36329606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139075/"
] |
Check which version of bokeh you are using. If you are using 0.11.1, then you can use
<http://docs.bokeh.org/en/0.11.1/docs/user_guide/plotting.html> to do the same.
|
Instead of using the attribute `index`, set `x='Date'`:
```
p = TimeSeries(data, x ='Date', title="APPL", ylabel='Stock Prices')
```
| 5,978
|
55,648,776
|
In an Apache Beam pipeline I am taking input from Cloud Storage and trying to write it to a BigQuery table, but during execution of the pipeline I get this error: "AttributeError: 'module' object has no attribute 'storage'".
```
def run(argv=None):
with open('gl_ledgers.json') as json_file:
schema = json.load(json_file)
schema = json.dumps(schema)
parser = argparse.ArgumentParser()
parser.add_argument('--input',
dest='input',
default='gs://bucket_name/poc/table_name/2019-04-12/2019-04-12 13:47:03.219000_file_name.csv',
help='Input file to process.')
parser.add_argument('--output',
dest='output',
required=False,
default="path to bigquery table",
help='Output file to write results to.')
known_args, pipeline_args = parser.parse_known_args(argv)
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = True
p = beam.Pipeline(options=pipeline_options)
(p
| 'read' >> ReadFromText(known_args.input)
# | 'Format to json' >> (beam.ParDo(self.format_output_json))
| 'Write to BigQuery' >> beam.io.WriteToBigQuery(known_args.output, schema=schema)
)
result = p.run()
result.wait_until_finish()
if __name__ == '__main__':
run()
```
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 269, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 828, in _import_module
return getattr(__import__(module, None, None, [obj]), obj)
AttributeError: 'module' object has no attribute 'storage'
```
|
2019/04/12
|
[
"https://Stackoverflow.com/questions/55648776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11303943/"
] |
This is probably related to `pipeline_options.view_as(SetupOptions).save_main_session = True`. Do you need that line?
Try removing it and see if that fixes the problem. It is likely that one of your imports cannot be pickled. Without seeing your imports I can't help you debug further. You could also try moving your imports into the `run` function.
|
Possibly a [duplicate](https://stackoverflow.com/questions/53860066/gitlab-ci-runner-cant-import-google-cloud-in-python), in which case the problem would be that `google-cloud-storage` needs to be installed, not `google-cloud`.
| 5,979
|
14,909,365
|
I have planned to build an application with a server and multiple clients. When a client connects to the server for the first time it must be given an id. Each time the client sends a request, the server sends the client a set of strings. The client then processes these strings and, once it is done, again sends a request to the server for another set of strings. The strings are stored in a database on the server.
I have implemented the part of the client program which processes the strings, but I don't know how to achieve communication between the server and the clients.
I am developing this application in Python. I do not know network programming, and hence I don't know how to get this working.
I came upon socket programming, message-oriented middleware, message queues and message brokers, and I am not sure if that is what I need. Could anyone please tell me what I need to use and which topics I need to learn to get this working? I hope that I don't sound vague.
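A minimal sketch of the described flow using only the standard `socket` and `threading` modules (the id, the strings, and the tiny "GET" protocol here are invented for illustration):

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _addr = server.accept()
    with conn:
        conn.sendall(b"ID:1\n")    # assign the client an id on first connect
        if conn.recv(1024).strip() == b"GET":
            conn.sendall(b"alpha,beta,gamma\n")   # a set of strings

t = threading.Thread(target=serve_once)
t.start()

# client side (would normally be a separate program)
client = socket.create_connection(("127.0.0.1", port))
print(client.recv(1024).decode().strip())   # the assigned id
client.sendall(b"GET\n")
print(client.recv(1024).decode().strip())   # the batch of strings
client.close()
t.join()
server.close()
```

For many concurrent clients, `socketserver.ThreadingTCPServer` or a message broker would scale this pattern further.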
|
2013/02/16
|
[
"https://Stackoverflow.com/questions/14909365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2078134/"
] |
The reason your app is crashing is that you are dealing with GUI elements, i.e. `UIAlertView`, on a background thread; you need to run them on the main thread, for instance via dispatch queues
Using Dispatch Queues
```
dispatch_async(dispatch_get_main_queue(), ^{
//show your GUI stuff here...
});
```
OR you can show the GUI elements on the main thread like this
`[alertView performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:YES];`
You can have more detail about using GUI elements on Threads on this [link](http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/Multithreading/AboutThreads/AboutThreads.html#//apple_ref/doc/uid/10000057i-CH6-SW21)
|
Try to
```
- (IBAction)sendForm:(id)sender {
[self performSelectorInBackground:@selector(loadData) withObject:activityIndicator];
[activityIndicator startAnimating];
UIAlertView* ahtung = [[UIAlertView alloc] initWithTitle:@"Спасибо" message:@"Ваша заявка принята!\nВ течение часа, Вам поступит звонок для подтверждения заказа" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
[ahtung show];
}
```
| 5,980
|
36,076,012
|
Say I have some class that manages a database connection. The user is supposed to call `close()` on instances of this class so that the DB connection is terminated cleanly.
Is there any way in Python to get this object to call `close()` when the interpreter exits or the object is otherwise picked up by the garbage collector?
Edit: This question assumes the user of the object failed to instantiate it within a `with` block, either because he forgot or isn't concerned about closing connections.
|
2016/03/18
|
[
"https://Stackoverflow.com/questions/36076012",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391717/"
] |
The only way to ensure such a method is called if you don't trust users is using `__del__` ([docs](https://docs.python.org/2/reference/datamodel.html#object.__del__)). From the docs:
>
> Called when the instance is about to be destroyed.
>
>
>
Note that there are lots of issues that make using `__del__` tricky. For example, at the moment it is called, the interpreter may already be shutting down, meaning other objects and *modules* may have been destroyed already. See the notes and warnings in the docs for details.
---
If you really cannot rely on users to be consenting adults, I would prevent them from implicitly avoiding `close` - don't give them a public `open` in the first place. Only supply the methods to support `with`. If anybody explicitly digs into your code to do otherwise, they probably have a good reason for it.
|
Define [`__enter__`](https://docs.python.org/2/reference/datamodel.html#object.__enter__) and [`__exit__`](https://docs.python.org/2/reference/datamodel.html#object.__exit__) methods on your class and then use it with the [`with` statement](https://docs.python.org/2/reference/compound_stmts.html#with):
```
with MyClass() as c:
# Do stuff
```
When the `with` block ends your `__exit__()` method will be called automatically.
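For completeness, a minimal sketch of such a class (the `DBManager` name and the `closed` flag are stand-ins for a real connection):

```python
class DBManager:
    def __init__(self):
        self.closed = False          # stand-in for opening a real connection

    def close(self):
        self.closed = True           # stand-in for real teardown

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()                 # runs even if the block raised
        return False                 # do not suppress exceptions

with DBManager() as db:
    pass                             # do stuff
print(db.closed)  # True
```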
| 5,982
|
64,160,370
|
I am writing a Python script to communicate with my Tello drone via Wi-Fi.
Once connected to the drone I can send UDP packets to issue commands (this works perfectly fine).
I want to receive the video stream from the drone via UDP packets arriving at my UDP server on port 11111, as described in the SDK documentation, "https://dl-cdn.ryzerobotics.com/downloads/tello/20180910/Tello%20SDK%20Documentation%20EN\_1.3.pdf".
```
print ('\r\n\r\nTello drone communication tool\r\n')
print("...importing modules...")
import threading
import socket
import sys
import time
import platform
import cv2
print("Modules imported")
print("...Initialiasing UDP server to get video stream....")
drone_videostream = cv2.VideoCapture('udp://@0.0.0.0:11111')
print("Server initialised")
# my local adress to receive UDP packets from tello DRONE
host = ''
port = 9000
locaddr = (host,port)
print("...creation of UDP socket...")
# Create a UDP socket (UDP Portocol to receive and send UDP packets from/to drone)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Got drone port and ip adress from network (explained in official SDK documentation)
tello_address = ('192.168.10.1', 8889)
print("UDP socket created")
sock.bind(locaddr)
width = 320
height = 240
def receiveStream() :
print("...receiving stream...")
while True :
ret, frame = drone_videostream.read()
img = cv2.resize(frame, (width, height))
cv2.imshow("LiveStream", frame)
cv2.waitKey(1)
drone_videostream.release()
cv2.destroyAllWindows()
def receiving():
while True:
try:
data, server = sock.recvfrom(1518)
print(data.decode(encoding="utf-8"))
except Exception:
print ('\nExit . . .\n')
break
print ("...initialiazing connection with tello drone...")
message = "command"
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
print("Connection established")
#create a thread that will excute the receiving() function
receiveThread = threading.Thread(target=receiving)
receiveThread.start()
receiveStreamThread = threading.Thread(target=receiveStream)
while True :
message = input(str("Enter a command :\r\n"))
if message == "streamon" :
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
receiveStreamThread.start()
else :
message = message.encode(encoding="utf-8")
sent = sock.sendto(message, tello_address)
```
When I send the "streamon" command to the drone, I am unable to read the UDP packets it sends. I get the following error:
```
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
```
This means that the frames are empty, i.e. no image is received.
Do you know why I don't receive them?
Thank you very much for your help in advance,
Best :)
|
2020/10/01
|
[
"https://Stackoverflow.com/questions/64160370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13410369/"
] |
Hi, those who are following Murtaza's workshop
and are unable to get the video stream:
use OpenCV library version 4.4.0.46 and Python interpreter 3.9.0.
Make sure you use the versions specified above.
|
I have played with the Tello a lot recently.
What I see in your code: you have entered "command", so the light should turn green. Then once you send "streamon" there should be a return message; check this message to see if there is any error.
The only apparent issue is the video source address. You did what the manual said:
[](https://i.stack.imgur.com/A88cY.png)
But from my experience, the only address I'm able to get the UDP stream from is udp://192.168.10.1:11111.
You can check whether you can see it with ffplay udp://192.168.10.1:11111:
[](https://i.stack.imgur.com/KGXjI.jpg)
| 5,983
|
15,167,615
|
So basically my question relates to `zip` (or `izip`), and this question which was asked before:
[Is there a better way to iterate over two lists, getting one element from each list for each iteration?](https://stackoverflow.com/questions/1919044/is-there-a-better-way-to-iterate-over-two-lists-getting-one-element-from-each-l)
If I have two variables, where each is either a 1-D array of length n or a single value, how do I loop through them so that I get n values returned?
`zip` kind of does what I want, except that it complains when I pass in a single value together with an array.
I have an example of what I'm aiming for below. Basically I have a C function that does a calculation more efficiently than Python, and I want it to act like some of the numpy functions, which deal fine with mixtures of arrays and scalars, so I wrote a Python wrapper for it. However, as I say, `zip` fails. In principle I could test the inputs and write a different statement for each combination of scalars and arrays, but it seems like Python should have something cleverer... ;) Any advice?
```
"""
Example of zip problems.
"""
import numpy as np
import time
def cfun(a, b) :
"""
Pretending to be c function which doesn't deal with arrays
"""
if not np.isscalar(a) or not np.isscalar(b) :
raise Exception('c is freaking out')
else :
return a+b
def pyfun(a, b) :
    """
    Python wrapper - to deal with array input
    """
    if not np.isscalar(a) or not np.isscalar(b) :
        return np.array([cfun(a_i,b_i) for a_i, b_i in zip(a,b)])
    else :
        return cfun(a, b)
a = np.array([1,2])
b= np.array([1,2])
print pyfun(a, b)
a = [1,2]
b = 1
print pyfun(a, b)
```
**edit:**
Many thanks everyone for the suggestions. I think I have to go with `np.broadcast` for the solution, since it seems the simplest from my perspective...
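For reference, a minimal sketch of the `np.broadcast` approach mentioned in the edit (a dummy add stands in for the C function):

```python
import numpy as np

def cfun(a, b):
    # stand-in for the scalar-only C function
    return a + b

def pyfun(a, b):
    if np.isscalar(a) and np.isscalar(b):
        return cfun(a, b)
    # np.broadcast pairs scalars with array elements for us
    return np.array([cfun(x, y) for x, y in np.broadcast(a, b)])

print(pyfun(1, np.array([1, 2, 3])))                # [2 3 4]
print(pyfun(np.array([1, 2]), np.array([10, 20])))  # [11 22]
```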
|
2013/03/01
|
[
"https://Stackoverflow.com/questions/15167615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448052/"
] |
If you want to force broadcasting, you can use `numpy.lib.stride_tricks.broadcast_arrays`. Reusing your `cfun`:
```
def pyfun(a, b) :
if not (np.isscalar(a) and np.isscalar(b)) :
a_bcast, b_bcast = np.lib.stride_tricks.broadcast_arrays(a, b)
return np.array([cfun(j, k) for j, k in zip(a_bcast, b_bcast)])
return cfun(a, b)
```
And now:
```
>>> pyfun(5, 6)
11
>>> pyfun(5, [6, 7, 8])
array([11, 12, 13])
>>> pyfun([3, 4, 5], [6, 7, 8])
array([ 9, 11, 13])
```
For your particular application there is probably no advantage over Rob's pure-Python approach, since your function still runs in a Python loop.
|
A decorator that optionally converts each of the arguments to a sequence might help. Here is the plain-Python (non-numpy) version:
```
# TESTED
def listify(f):
def dolistify(*args):
from collections import Iterable
return f(*(a if isinstance(a, Iterable) else (a,) for a in args))
return dolistify
@listify
def foo(a,b):
print a, b
foo( (1,2), (3,4) )
foo( 1, [3,4] )
foo( 1, 2 )
```
So, in your example we need to use `not np.isscalar` as the predicate and `np.array` as the modifier. Because of the decorator, `pyfun` always receives an array.
```
#UNTESTED
def listify(f):
def dolistify(*args):
from collections import Iterable
return f(*(np.array([a]) if np.isscalar(a) else a for a in args))
return dolistify
@listify
def pyfun(a, b) :
"""
Python Wrappper - to deal with arrays input
"""
return np.array([cfun(a_i,b_i) for a_i, b_i in zip(a,b)])
```
Or maybe you could apply the same idea to `zip`:
```
#UNTESTED
def MyZip(*args):
return zip(np.array([a]) if np.isscalar(a) else a for a in args)
def pyfun(a, b) :
"""
Python Wrappper - to deal with arrays input
"""
return np.array([cfun(a_i,b_i) for a_i, b_i in MyZip(a,b)])
```
| 5,986
|
68,077,240
|
I have a Python file that runs a machine learning algorithm that identifies circles in an image. From this Python file I can get the coordinates (x and y) of every bounding box placed around the circles. I am appending all the coordinates to local variables `xlist`/`ylist` (lists of the integer coordinate values).
What is the best way to save `xlist` and `ylist` to an external file (either a `.txt` or a `.py`)?
|
2021/06/22
|
[
"https://Stackoverflow.com/questions/68077240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can use the `pickle` library; it saves the data with its original data types intact.
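For instance (a minimal sketch; the lists are dummy data):

```python
import pickle

xlist = [10, 20, 30]   # placeholder coordinates
ylist = [15, 25, 35]

# dump both lists into one binary file
with open("coords.pkl", "wb") as f:
    pickle.dump({"x": xlist, "y": ylist}, f)

# load them back; ints come back as ints
with open("coords.pkl", "rb") as f:
    coords = pickle.load(f)

print(coords["x"])  # [10, 20, 30]
```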
|
You can store them in a `.txt` file.
**Try this**
```
with open('xlistFile.txt', 'w') as f:
    for item in xlist:
        f.write(str(item) + '\n')  # newline keeps one value per line
```
You can do the same for ylist
| 5,989
|
74,060,609
|
Although I'm used to programming, I'm new to Python, so I decided to learn by myself.
So, I installed VS Code and Python. The moment I tried to use packages like *tensorflow*, an error appeared saying that **my imports are missing**.
I've already tried installing everything again and searching for a solution online, and nothing worked.
If someone knows anything about how to fix this I'd be grateful.
|
2022/10/13
|
[
"https://Stackoverflow.com/questions/74060609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20234417/"
] |
Check whether there are **multiple versions of Python** in your environment; pip may have installed the package into a different version of Python than the one you are using.
Use the shortcut **"Ctrl+Shift+P"** and type **"Python: Select Interpreter"** to choose the correct Python. Then use `pip install packagename` to reinstall the package you need.
Generally, we recommend that people new to Python use a [conda virtual environment](https://code.visualstudio.com/docs/python/environments#_conda-environments).
[](https://i.stack.imgur.com/2NzZr.png)
|
Confirm you have downloaded python correctly:
* Open terminal
* Run `python --version`
+ (if that doesn't work, try `python3 --version`)
| 5,991
|
73,269,344
|
I am new to Python and I have a file that I am trying to read. This file contains many lines, and to determine when to stop reading it I wrote this:
```
while True:
    s=file.readline().strip() # strip() cuts the '\n' character present at the end
# if we reach at the end of the file we'll break the loop
if s=='':
break
```
This is because the file ends with an empty line, so I used the code above to stop reading. The problem is that the file also starts with an empty line, so this code stops before reading the remaining lines. How can I solve that?
I know it may sound silly, but as I said I am new to Python and trying to learn.
|
2022/08/07
|
[
"https://Stackoverflow.com/questions/73269344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19711709/"
] |
The thing you are looking for is called the "end of file" (EOF) condition:
[how to find out whether a file is at its EOF](https://stackoverflow.com/questions/10140281/how-to-find-out-whether-a-file-is-at-its-eof)
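Concretely, `readline()` returns an empty string only at EOF, while a blank line comes back as `'\n'`, so testing before stripping distinguishes the two (a sketch with a throwaway file):

```python
# build a sample file that starts with a blank line and has one in the middle
with open("some-file.txt", "w") as f:
    f.write("\nfirst\nsecond\n\nthird\n")

lines = []
with open("some-file.txt") as f:
    while True:
        line = f.readline()
        if line == "":       # true end of file: readline returns ''
            break
        s = line.strip()
        if s == "":          # just a blank line: keep reading
            continue
        lines.append(s)

print(lines)  # ['first', 'second', 'third']
```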
|
You can iterate on the opened file
```
lines = []
with open("some-file.txt") as some_file:
for line in some_file:
lines.append(line)
```
| 5,992
|
35,360,863
|
I'm trying to code a Python script that finds an unknown number with as few tries as possible.
All I know is that the number is < 10000.
Every time I make a wrong guess I get an "error" response.
When I find the right number I get a "success" response.
Let's assume in this case the number is 124.
How would you solve that in Python?
Thanks for helping. I'm really stuck on this one :(
|
2016/02/12
|
[
"https://Stackoverflow.com/questions/35360863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5647184/"
] |
If the number being `< 10000` is *all* you know, you have to try all numbers between `1` and `9999` (inclusive). The binary search algorithm as suggested in the comments does not help since a miss does not tell you if you are too high or too low.
```
for i in range(1, 10000):
if i == number_you_are_looking_for:
print("found it")
break
```
|
I believe the fastest way is to use binary search, which gives the answer in O(log n) tries. (Note this only helps if a wrong guess tells you whether you are too high or too low.)
```
def binary_search(n, min_value, max_value):
    tries = 0
    found = False
    if max_value < min_value:
        print("Maximum value must be bigger than the minimum value")
    elif n < min_value or n > max_value:
        print("The number must be between min_value and max_value")
    else:
        while min_value <= max_value and not found:
            tries += 1
            mid_value = (min_value + max_value)//2
            if mid_value == n:
                found = True
            elif n < mid_value:
                max_value = mid_value - 1
            else:
                min_value = mid_value + 1
        print("The number is:", str(n))
        print("Tries:", str(tries))
```
Examples:
```
binary_search(7, 0, 10)
>> The number is: 7
>> Tries: 4
binary_search(667, 0, 1000)
>> The number is: 667
>> Tries: 8
binary_search(2**19, 2**18, 2**20)
>> The number is: 524288
>> Tries: 20
```
| 5,997
|
34,032,681
|
Hi I'm seriously stuck when trying to filter out my xml document. Here is some example of the contents:
```
<sentence id="1" document_id="Perseus:text:1999.02.0029" >
<primary>millermo</primary>
<word id="1" />
<word id="2" />
<word id="3" />
<word id="4" />
</sentence>
<sentence id="2" document_id="Perseus:text:1999.02.0029" >
<primary>millermo</primary>
<word id="1" />
<word id="2" />
<word id="3" />
<word id="4" />
<word id="5" />
<word id="6" />
<word id="7" />
<word id="8" />
</sentence>
```
There are many sentences (over 3000), but all I want to do is write some code (preferably in Java or Python) that will go through my XML file and remove all the sentences which have more than 5 word ids,
so in other words I will be left with just sentence tags containing 5 or fewer word ids. Thanks. (Just to note, my XML knowledge isn't great; I get mixed up with nodes/tags/elements/ids.)
I'm trying this at the moment but am not sure:
```
import xml.etree.ElementTree as ET
tree = ET.parse('treebank.xml')
root = tree.getroot()
parent_map = dict((c, p) for p in tree.getiterator() for c in p)
iterator = list(root.getiterator('word id'))
for item in iterator:
old = item.find('word id')
text = old.text
if 'id=16' in text:
parent_map[item].remove(item)
continue
tree.write('out.xml')
```
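For reference, a working direction with ElementTree could look like this (a sketch; it assumes the `<sentence>` elements are direct children of a single root element, so adjust the paths to the real file):

```python
import xml.etree.ElementTree as ET

xml = """<treebank>
  <sentence id="1"><primary>millermo</primary>
    <word id="1"/><word id="2"/><word id="3"/><word id="4"/>
  </sentence>
  <sentence id="2"><primary>millermo</primary>
    <word id="1"/><word id="2"/><word id="3"/>
    <word id="4"/><word id="5"/><word id="6"/>
  </sentence>
</treebank>"""

root = ET.fromstring(xml)
# iterate over a copy, since removing elements while iterating skips entries
for sentence in list(root.findall("sentence")):
    if len(sentence.findall("word")) > 5:
        root.remove(sentence)

print([s.get("id") for s in root.findall("sentence")])  # ['1']
```

With a real file you would use `ET.parse('treebank.xml')` and `tree.write('out.xml')` instead of `fromstring`.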
|
2015/12/02
|
[
"https://Stackoverflow.com/questions/34032681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5628041/"
] |
A loop and `String.format` should give you what you need:
```
for (int i = 1; i <= 10; i++) {
String bob = String.format("C:\\bob\\Myfile%02d.txt", Integer.valueOf(i));
// ...
}
```
The format pattern `%02d` pads an integer with a zero given that it is less than two digits in length, as defined in the [syntax for string formatting](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax).
|
If you want to walk through subdirectories you may also try:
```
try {
Files.walk(Paths.get(directory)).filter(f -> Pattern.matches("myFile\\d{2}\\.txt", f.toFile().getName())).forEach(f -> {
System.out.println("WHAT YOU WANT TO DO WITH f");
});
} catch (IOException e) {
e.printStackTrace();
}
```
| 5,998
|
49,627,914
|
I'm trying to execute a shell command through python. The command is like the following one:
```
su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf' " someuser
```
So, when I try to do it in python:
```
command = "su -c \"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf' \" someuser"
os.system(command)
```
Or:
```
command = subprocess.Popen(["su", "-c", "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf'", "someuser"])
```
I get the following error:
```
bash: -c: line 0: unexpected EOF while looking for matching `''
bash: -c: line 1: syntax error: unexpected end of file
```
The error refers to the single quote in "ivan's".
I know there are a lot of single/double quotes in that but how can I escape this?
Thanks in advance!
**EDIT: THIS WORKED FOR ME:**
```
subprocess.call(["su","-c",r"""lftp -c "open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf" """, "someuser"])
```
Thank you all very much!
|
2018/04/03
|
[
"https://Stackoverflow.com/questions/49627914",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8369505/"
] |
If you printed your test string you would notice that it results in the following:
```
su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's\ filename.pdf' " someuser
```
The problem is that you need to escape the slash that you use to escape the single quote in order to keep Python from eating it.
```
command = "su -c \"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf' \" someuser"
```
will get the backslash across, you will then get an error from lftp instead...
This works:
```
command = "su -c \"lftp -c \\\"open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf\\\" \" someuser"
```
(It uses (escaped) double quotes instead, to ensure that the shell started by su still interprets the escape sequences)
(`os.system(a)` effectively does `subprocess.call(["sh","-c",a])`, which means that `sh` sees `su -c "lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's\ filename.pdf' " someuser` (for the original one). It does escape sequence processing on this and sees an unclosed single quote (it is initially closed by `ivan'`), resulting in your error). Once that is fixed, `sh` calls su, which in turn starts up another instance of `sh` doing more escape processing, resulting in the error from lftp (since `sh` doesn't handle escape sequences in the single quotes)
`subprocess.call()` or `curl` are better ways to implement this. `curl` needs much less escaping: on the command line you can use `curl "ftp://user:password@127.0.0.1/ivan's filename.pdf"`, though some more escaping is still needed for going via `su -c` and for Python. Using `sudo` instead of `su` also reduces the escaping needed....
If you want to use `subprocess.call()` (which removes one layer of shell), you can use
```
subprocess.call(["su","-c","lftp -c \\\"open -u user,password ftp://127.0.0.1; get ivan\\'s\\ filename.pdf\\\"", "someuser"])
```
(The problem is that python deals with one level of escaping, and the `sh -c` invoked from `su` with the next layer... This results in quite an ugly command...) (different quotes might slightly reduce that...)
Using `r""` can get rid of the python level escape processing: (needing only the shell level escapes) (Using triple quotes to allow quotes in the string)
```
subprocess.call(["su","-c",r"""lftp -c \"open -u user,password ftp://127.0.0.1; get ivan\'s\ filename.pdf\"""", "someuser"])
```
Adding a space allows for stripping the shell escapes, since `lftp` doesn't seem to need the filename escaped for the spaces and single quote.
```
subprocess.call(["su","-c",r"""lftp -c "open -u user,password ftp://127.0.0.1; get ivan's filename.pdf" """, "someuser"])
```
This results in the eventual `lftp` ARGV being
```
["lftp","-c","open -u user,password ftp://127.0.0.1; get ivan's filename.pdf"]
```
For curl instead (it still ends up bad due to the `su` being involved):
```
subprocess.call(["su","-c",r"""curl "ftp://user:password@127.0.0.1/ivan's filename.pdf" """, "someuser"])
```
|
Using `subprocess.call()` is the best and most secure way to perform this task.
Here's an example from the [documentation page](https://docs.python.org/2/library/subprocess.html#subprocess.call):
```
subprocess.call(["ls", "-l"]) # As you can see we have here the command and a parameter
```
About the error, I think it is related to the spaces and the `'` character.
Try using raw [string literals](https://docs.python.org/2.0/ref/strings.html) (note the `r` before the string; also be sure that the command matches exactly the one you use in bash):
```
r"My ' complex & string"
```
So, in your case:
```
command = subprocess.Popen(["su", "-c", r"lftp -c 'open -u user,password ftp://127.0.0.1; get ivan's filename.pdf'", "someuser"])
```
| 5,999
|
44,869,938
|
I have created an image processing Python function.
My system has 4 cores + 4 threads.
I want to use multiprocessing to speed up my function, but whenever I use the multiprocessing package my function is not faster; it is about 1 minute slower.
Any idea why? This is my first time using the multiprocessing package.
Main function:
```
if __name__ == '__main__':
in_path="C:/Users/username/Desktop/in.tif"
out_path="C:/Users/username/Desktop/out.tif"
myfun(in_path, out_path)
```
time=3.4 minutes
with multiprocessing map :
```
if __name__ == '__main__':
p = Pool(processes=4)
in_path="C:/Users/username/Desktop/in.tif"
out_path="C:/Users/username/Desktop/out.tif"
result = p.map(myfun(in_path,out_path))
```
time=4.4 minutes
```
if __name__ == '__main__':
pool = multiprocessing.Pool(4)
in_path="C:/Users/username/Desktop/in.tif"
out_path="C:/Users/username/Desktop/out.tif"
pool.apply_async(myfun, args=(in_path,out_path,))
pool.close()
pool.join()
```
time=4.5 minutes
|
2017/07/02
|
[
"https://Stackoverflow.com/questions/44869938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5738116/"
] |
`multiprocessing.Pool.map()` does not automatically make a function run in parallel. So doing `Pool.map(my_function(single_input))` will not make it run any faster. In fact, it may make it slower.
The purpose of `map()` is to allow you to run the same function *multiple times* in parallel if you have *multiple inputs*.
If you had:
```
in_paths = ['C:/Users/in1.tif', 'C:/Users/in2.tif', ... ]
out_paths = ['C:/Users/out1.tif', 'C:/Users/out2.tif', ... ]
```
Then you could use the pool to speed up your program, for example (note `starmap`, which unpacks each `(in_path, out_path)` pair into two arguments; plain `map` would pass a single tuple):
```
p = Pool(4)
results = p.starmap(my_function, zip(in_paths, out_paths))
```
But since you have a single input, you only need to run your function once, and so there is no difference. Hope this helps.
|
You're executing the same function with the same parameters in a sub-process; this is bound to be slower as, at the very least, there is the system overhead of creating a new process, and then comes Python's own overhead. It creates a whole new interpreter, stack, GIL... and that takes time.
On POSIX systems this overhead is a bit faster as it can use forking and copy-on-write memory, but on Windows this is essentially as if you called `python -c "from your_script import myfun; myfun('C:/Users/username/Desktop/in.tif', 'C:/Users/username/Desktop/out.tif')"` from the command line and that takes a considerable amount of time.
You'll notice benefits of multiprocessing only if you have considerable computation requirements which can be parallelized from your function. Check a simple benchmark in [this answer](https://stackoverflow.com/a/44525554/7553525).
| 6,000
|
45,179,302
|
Tornado has an open socket, and I can't seem to get it closed.
I was really surprised as I've turned my computer on and off since the last time I ran this server a week ago, and terminal is not running. All in all, I thought this server was off for the past week.
The things I've tried so far are the solution to this similar question: [python websocket with tornado. Socket aren't closed](https://stackoverflow.com/questions/25082336/python-websocket-with-tornado-socket-arent-closed), which did nothing.
And I've tried using `IOLoop.close(all_fds=True)`
[PyDoc for this function](http://www.tornadoweb.org/en/stable/ioloop.html?highlight=IOLoop#tornado.ioloop.IOLoop.close), which returned the error below.
>
> `>>>` tornado.ioloop.IOLoop.close(all\_fds=True)
>
>
> Traceback (most recent call last):
>
>
> File "", line 1, in
>
>
> TypeError: unbound method close() must be called with IOLoop instance as first argument (got nothing instead)
>
>
>
**How do I close all sockets so I can start up again from a clean slate?**
|
2017/07/19
|
[
"https://Stackoverflow.com/questions/45179302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4808079/"
] |
Interesting.
Firstly, you should call `close()` method for `tornado.ioloop.IOLoop` **object**, not for **class object**. You can get current `tornado.ioloop.IOLoop` object using the method `tornado.ioloop.IOLoop.current()`.
Example:
```
my_ioloop = tornado.ioloop.IOLoop.current()
my_ioloop.close(all_fds=True)
```
Further reading:
* [IOLoop.current()](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop.current) documentation
* [Explanation](https://realpython.com/blog/python/instance-class-and-static-methods-demystified/) why you've got `TypeError`
|
In my case, the issue was not with Tornado specifically, but with a process it started which continued even after it lost track of it.
When I restarted my computer, OSX kept track of the process, but Tornado did not. The solution was to find open ports and close the one Tornado was using.
The answer comes from here originally:
<https://stackoverflow.com/a/17703016/4808079>
```
# first, check the port which your code opens
$ sudo lsof -i :8528
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Python 29748 root 4u IPv6 0xe782a7ce5603265 0t0 TCP *:8528 (LISTEN)
Python 29748 root 5u IPv4 0xe782a7ce4aec61d 0t0 TCP *:8528 (LISTEN)
# then kill the process, using the PID
$ sudo kill 29748
```
| 6,001
|
47,585,705
|
How do I make a file from a dictionary in python?
For example this is my dictionary:
dict = {'a':1,'b':2,'c':3}
How do I make it into the first sentence of a file that shows this?
a,1.b,2.c,3.
Thank you to anyone who answers my question.
|
2017/12/01
|
[
"https://Stackoverflow.com/questions/47585705",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8922904/"
] |
You can try this:
```
f = open('file.txt', 'w')
dict = {'a':1,'b':2,'c':3}
f.write('.'.join('{},{}'.format(a, b) for a, b in dict.items())+'.\n')
f.close()
```
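If you later need to read that `a,1.b,2.c,3.` line back into a dictionary, a hypothetical round-trip could look like this:

```python
# Parse the "a,1.b,2.c,3." format produced above back into a dict
line = "a,1.b,2.c,3."
pairs = [p for p in line.split('.') if p]            # ['a,1', 'b,2', 'c,3']
parsed = {k: int(v) for k, v in (p.split(',') for p in pairs)}
print(parsed)  # {'a': 1, 'b': 2, 'c': 3}
```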
|
```
import json
mydict = {'a':1,'b':2,'c':3}
with open('dict_file.txt', 'w') as file:
file.write(json.dumps(mydict))
```
Hope this helps.
| 6,002
|
40,784,720
|
I don't know if it is possible or not. I am trying to find a way of sorting a nested list on the following condition
1. I want to sort from one point to another (NOT the whole list, only part of it)
2. the sorting should be done on the basis of the 3rd element of the sublists
An idea of what I want:
```
PAE=[['a',0,8],
['b',2,1],
['c',4,3],
['d',7,2],
['e',8,4]]
#PAE[1:4].sort(key=itemgetter(2)) (something like this)
or
#sorted(PAE[1:4],key=itemgetter(2)) (something like this)
# ^ I know both are wrong, but just for an idea
#output should be like this
['a', 0, 8]
['b', 2, 1]
['d', 7, 2]
['c', 4, 3]
['e', 8, 4]
```
I am new to Python, but I tried my best to find a solution and failed.
|
2016/11/24
|
[
"https://Stackoverflow.com/questions/40784720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5650215/"
] |
Sort the slice **and write it back**:
```
>>> PAE[1:4] = sorted(PAE[1:4], key=itemgetter(2))
>>> PAE
[['a', 0, 8], ['b', 2, 1], ['d', 7, 2], ['c', 4, 3], ['e', 8, 4]]
```
|
This should do:
```
from operator import itemgetter
PAE=[['a',0,8],
['b',2,1],
['c',4,3],
['d',7,2],
['e',8,4]]
split_index = 1
print PAE[:split_index]+sorted(PAE[split_index:],key=itemgetter(2))
#=> [['a', 0, 8], ['b', 2, 1], ['d', 7, 2], ['c', 4, 3], ['e', 8, 4]]
```
| 6,003
|
13,787,566
|
I haven't used my python/virtual environments in a while, but I do have virtualenvironment wrapper installed also.
My question is, in the doc page it says to do this:
```
export WORKON_HOME=~/Envs
$ mkdir -p $WORKON_HOME
$ source /usr/local/bin/virtualenvwrapper.sh
$ mkvirtualenv env1
```
I simply did this at my prompt:
```
source /usr/local/bin/virtualenvwrapper.sh
```
And now I can list and select an environment by doing:
```
>workon
>workon envtest1
```
My question is: since this works for me, I'm confused about why I should create an environment variable WORKON\_HOME pointing to the ~/Envs folder. What does it do, and how come mine works fine without it? I don't have that ~/Envs folder either (I know the script creates it).
Reference: <http://virtualenvwrapper.readthedocs.org/en/latest/>
|
2012/12/09
|
[
"https://Stackoverflow.com/questions/13787566",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39677/"
] |
If `WORKON_HOME` is not set, your default virtualenv folder will be set to `~/.virtualenvs`
(see [virtualenvwrapper.sh l.118](https://bitbucket.org/dhellmann/virtualenvwrapper/src/a766226010beb5df341bcb4bceb2befaba8603d4/virtualenvwrapper.sh?at=default#cl-118))
You will also use `WORKON_HOME` to specify to `pip` which folder to use (`export PIP_VIRTUALENV_BASE=$WORKON_HOME`)
source : [virtualenvwrapper.readthedocs.org : Tying to pip’s virtualenv support](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html#tying-to-pip-s-virtualenv-support)
|
>
> I'm confused why I should be creating an environmental variable
> WORKON\_HOME and point it to the ~/Envs folder?
>
>
>
It's optional. You're confused (like I was) because the documentation is confusing.
>
> What does that do and how come mine works fine w/o it?
>
>
>
It tells `virtualenvwrapper` which folder to search for Python environments. The command `workon` searches the path `WORKON_HOME` if it's defined, or `~/.virtualenvs` if it's not, which is why it works by default.
A use case for defining a different `WORKON_HOME` directory would be if you have different environments you want to available to `virtualenvwrapper`. For example, if you save virtual env backups to a different folder or have multiple users who want to maintain their own environments.
| 6,006
|
6,760,536
|
I often find myself writing class constructors like this:
```
class foo:
def __init__(self, arg1, arg2, arg3):
self.arg1 = arg1
self.arg2 = arg2
self.arg3 = arg3
```
This can obviously become a pain if the number of arguments (and class attributes) gets high. I'm looking for the most pythonic way to loop through the constructor's arguments list and assign attributes accordingly. I'm working with Python 2.7, so ideally I'm looking for help with that version.
|
2011/07/20
|
[
"https://Stackoverflow.com/questions/6760536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/403401/"
] |
How about this?
```
class foo:
def __init__(self, arg1, arg2, arg3):
for _prop in dir():
setattr(self, _prop, locals()[_prop])
```
This uses the builtin python dir function to iterate over *all* local variables.
It has a minor side effect of creating an extraneous self reference but you could filter that if you really wanted.
Also if you were to declare any other locals before the dir() call, they would get added as constructed object's attributes as well.
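For instance, filtering out the `self` reference could look like this (a sketch, not from the original answer):

```python
class foo:
    def __init__(self, arg1, arg2, arg3):
        # copy every argument except `self` onto the instance
        for name, value in locals().items():
            if name != 'self':
                setattr(self, name, value)

f = foo(1, 'two', 3.0)
print(vars(f))  # {'arg1': 1, 'arg2': 'two', 'arg3': 3.0}
```

Calling `locals()` at the very top of `__init__` also avoids picking up any other locals declared later in the method.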
|
What about iterating over the explicit variable names?
I.e.
```
class foo:
def __init__(self, arg1, arg2, arg3):
for arg_name in 'arg1,arg2,arg3'.split(','):
setattr(self, arg_name, locals()[arg_name])
f = foo(5,'six', 7)
```
Resulting with
```
print vars(f)
{'arg1': 5, 'arg2': 'six', 'arg3': 7}
```
My suggestion is similar to @peepsalot, except it's more explicit and doesn't use `dir()`, which according to the [documentation](https://docs.python.org/2/library/functions.html#dir)
>
> its detailed behavior may change across releases
>
>
>
| 6,007
|
10,813,575
|
I am working on an HTML page with Selenium. After clicking the last link, a popup appears asking to save a file.
Using Selenium, I am recording all the events and then generating the Selenium RC script.
I want to know how to handle the save-file popup from code using Python.
|
2012/05/30
|
[
"https://Stackoverflow.com/questions/10813575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1297123/"
] |
In the case of saving a file, you can get around the popup box by configuring the options of your browser profile. See [this](https://stackoverflow.com/questions/12099250/python-webcrawler-downloading-files/12099438) answer for an explanation using Firefox. General idea is that you need to tell Firefox itself to not prompt when saving files of certain types. Note that this will result in the file being saved somewhere, but you can also control where it goes in case you want to delete the file (or handle it separately in Python).
|
Webdriver cannot communicate with the browser modal popup.
But this can be done, check out the below link for your answer
<http://blog.codecentric.de/en/2010/07/file-downloads-with-selenium-mission-impossible/>
| 6,017
|
24,374,400
|
I am trying to open an https URL using the [`urlopen`](https://docs.python.org/3.2/library/urllib.request.html#urllib.request.urlopen) method in Python 3's [`urllib.request`](https://docs.python.org/3.2/library/urllib.request.html) module. It seems to work fine, but the documentation warns that "[i]f neither `cafile` nor `capath` is specified, an HTTPS request will not do any verification of the server’s certificate".
I am guessing I need to specify one of those parameters if I don't want my program to be vulnerable to man-in-the-middle attacks, problems with revoked certificates, and other vulnerabilities.
`cafile` and `capath` are supposed to point to a list of certificates. Where am I supposed to get this list from? Is there any simple and cross-platform way to use the same list of certificates that my OS or browser uses?
|
2014/06/23
|
[
"https://Stackoverflow.com/questions/24374400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28324/"
] |
This works in Python 2.7.9 and above (it needs the `certifi` package):
```
import ssl
import urllib2
import certifi

context = ssl.create_default_context(cafile=certifi.where())
req = urllib2.urlopen(urllib2.Request(url, body, headers), context=context)
```
|
Different Linux distributions have different package names. I tested on CentOS and Ubuntu. These certificate bundles are updated with system updates, so you may just detect which bundle is available and use it with `urlopen`.
```
import os

cafile = None
for i in [
'/etc/ssl/certs/ca-bundle.crt',
'/etc/ssl/certs/ca-certificates.crt',
]:
if os.path.exists(i):
cafile = i
break
if cafile is None:
raise RuntimeError('System CA-certificates bundle not found')
```
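The same detection can be wrapped in a small helper function (the paths are the common RHEL/CentOS and Debian/Ubuntu locations; other distributions may use different ones):

```python
import os

def find_ca_bundle():
    """Return the first system CA bundle path that exists, or None."""
    candidates = [
        '/etc/ssl/certs/ca-bundle.crt',        # RHEL / CentOS
        '/etc/ssl/certs/ca-certificates.crt',  # Debian / Ubuntu
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    return None
```

The returned path can then be passed as `cafile=` to `urlopen`.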
| 6,018
|
9,066,774
|
I downloaded the OpenERP server and web client, having decided against the thicker GTK client. I added the two as projects in Eclipse (PyDev, running on Ubuntu 11.10) and started them up. I went through the web client setup and thought the installation was done. At some point, though, I had executed a script that tried to copy all the bits and pieces out of my home folder into the file system, some going to /etc or /usr/local. I didn't want this, so I stopped the process, because then I'd have to run Eclipse as root, and I'd not be able to trace processes through the source since it would all be scattered through the file system.
Problems came when I tried to install a new module. I couldn't get it into the module list & even zipping it up and trying to import it through the client failed without errors.
While trying to get the module I added to show up I discovered this on the forums "You'll have to run setup.py install after putting the module in addons if you didn't specify an addons path when running openerp-server."
So it looked like I had to run:
`python setup.py build
sudo python setup.py install`
Firstly, I'm confused about why you need to build; I thought it was only the C libs that needed building, and I'd done that when installing dependencies.
Secondly, `setup.py install` is obviously vital if you need to run it to get a new module recognised. How can I trace stuff through the source if it's running from all over the file system?
Everything has now been copied out of home into the file system, which is what I had tried to avoid. Now the startup scripts are in /usr/local/bin, so I assume I can't run them using 'Debug As' in Eclipse or see the logs in the Eclipse console. I also found documentation that suggests starting the server with:
`./openerp-server.py --addons-path=~/home/workspace/stable/addons`
Which apparently overrides the addons in the file system created by the install, suggesting that you'd have just the modules in addon in eclipse where one could debug etc, but the other resources would be elsewhere?
I suppose that's OK, but I still have trouble visualizing how this is going to work. If this is the way it's done, how would one get standard out to go to the Eclipse console?
I suppose I could have the complete project in eclipse but all the resources besides the addons would just be for reference purposes, while only the addons would actually be running since they are over-ridden by the –addons-path argument.
Then if I could get output to go to the console it would be like what I would expect.
I've seen some references to using links in the eclipse workspace or running eclipse as root like an eclipse php setup.
Can anyone tell me how to start the server and web apps from eclipse and have the log output appear in the console?
Maybe an experienced Python developer can spot my blind spots and suggest what else I might be missing here?
|
2012/01/30
|
[
"https://Stackoverflow.com/questions/9066774",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/536187/"
] |
I feel your pain. I went through the same process a couple of years ago when I started working with OpenERP. The good news is that it's not too hard to set up, and OpenERP runs smoothly in Eclipse with PyDev.
Start by looking at the [developer book for OpenERP](http://doc.openerp.com/v6.0/developer/1_1_Introduction/index.html). They lay out most of the requirements for getting it running.
To try and answer your specific questions, you shouldn't need to run the `setup.py` script at all in your development environment. It's only necessary when you deploy to a server. To get the server to recognize a new module, go to the administration menu, and choose Modules Management: Update Modules List. I'm still running OpenERP 5.0, so the names and locations might be slightly different in version 6.1.
For the project configuration in Eclipse, I just checked out each branch from launchpad, and then imported each one as a project into my Eclipse workspace. The launch details are a bit different between 6.0 and 6.1. Here are my command line arguments for each:
6.0:
>
> --addons-path ${workspace\_loc:openerp-addons-6.0} --config ${workspace\_loc:openerp-config/src/server.config} --xmlrpc-port=9069 --netrpc-port=9070 --xmlrpcs-port=9071
>
>
>
6.1 needs the web client to launch with the server:
>
> --addons-path ${workspace\_loc:openerp-addons-trunk},${workspace\_loc:openerp-web-trunk}/addons,${workspace\_loc:openerp-migration} --config ${workspace\_loc:openerp-config/src/server.config} --xmlrpc-port=9069 --netrpc-port=9070 --xmlrpcs-port=9071
>
>
>
|
using eclipse kepler sr 1, pydev 3.1.0, openerp 7.0 from launchpad using bzr, ubuntu 13.10. This is how I got the whole thing loaded. I have skipped the part where I got the thing to work. This only covers retrieving the sources and being able to open/modify the openerp source in eclipse/pydev.
There are three bzr repositories you need to get, the server, the web client addons and the bundled addons.
So I created a top level directory called `openerp-bzr`. In this directory, I checked out the sources with the following commands. Note the use of `checkout` and `--lightweight`; these options prevent fetching all the logs and history (making the checkout much smaller and faster). You may want to omit `--lightweight` if you want to get everything, and change `checkout` to `branch` if that is what you want to do. Back to business: you will have to create an account on Launchpad, register your SSH keys and configure your bzr.
```
bzr checkout --lightweight lp:openobject-server/7.0 openobject-server-7.0
bzr checkout --lightweight lp:openerp-web/7.0 openerp-web-7.0
bzr checkout --lightweight lp:openobject-addons/7.0 openobject-addons-7.0
```
(these folders that just got created, I will call them `source folders`).
(Insert here the instructions to get this to work, which include configuring the configuration file, setting the PYTHONPATH and downloading all the dependencies. I will add these over the weekend.)
Then, still in the `openerp-bzr` folder, I create links. The first folder `openerp-7.0` that is created, I will call it `link folder`.
```
ln -s openobject-server-7.0 openerp-7.0
cd openerp-7.0/openerp/addons
ln -s ../../../openobject-addons-7.0/* .
ln -s ../../../openerp-web-7.0/addons/* .
```
Now, if your Eclipse is properly set up, you create a new PyDev project, check the `create links to existing sources` option (select them on the next page), go next and add `openerp-7.0` (the link folder).
You can do bzr update in the source folders.
When you develop addons, create the actual folders somewhere else and then link them into the the addons folders in the link folder. This will make it look like you are working in the same tree, you will get all the references and code completion as well as (hopefully, because I have not tested this part!) debugging.
| 6,028
|
46,893,460
|
When I try to let my bot join my voice channel, I get this error:
`await client.join_voice_channel(voice_channel)` (line that generates the error)
```
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/discord/ext/commands/core.py", line 50, in wrapped
ret = yield from coro(*args, **kwargs)
File "bot.py", line 215, in sfx
vc = await client.join_voice_channel(voice_channel)
File "/usr/local/lib/python3.5/site-packages/discord/client.py", line 3176, in join_voice_channel
session_id_future = self.ws.wait_for('VOICE_STATE_UPDATE', session_id_found)
AttributeError: 'NoneType' object has no attribute 'wait_for'
```
The above exception was the direct cause of the following exception:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/discord/ext/commands/bot.py", line 848, in process_commands
yield from command.invoke(ctx)
File "/usr/local/lib/python3.5/site-packages/discord/ext/commands/core.py", line 369, in invoke
yield from injected(*ctx.args, **ctx.kwargs)
File "/usr/local/lib/python3.5/site-packages/discord/ext/commands/core.py", line 54, in wrapped
raise CommandInvokeError(e) from e
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: AttributeError: 'NoneType' object has no attribute 'wait_for'
```
I get this error with both the channel name and the channel ID.
Function:
```
description = "Bot"
bot_prefix = "!"
client = discord.Client()
bot = commands.Bot(description=description, command_prefix=bot_prefix)
@bot.command(pass_context=True)
async def join(ctx):
author = ctx.message.author
voice_channel = author.voice_channel
vc = await client.join_voice_channel(voice_channel)
```
|
2017/10/23
|
[
"https://Stackoverflow.com/questions/46893460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8715621/"
] |
This is the code I use to make it work.
```
#Bot.py
import discord
from discord.ext import commands
from discord.ext.commands import Bot
from discord.voice_client import VoiceClient
import asyncio
bot = commands.Bot(command_prefix="|")
@bot.event
async def on_ready():
print ("Ready")
@bot.command(pass_context=True)
async def join(ctx):
author = ctx.message.author
channel = author.voice_channel
await bot.join_voice_channel(channel)
bot.run("token")
```
|
Get rid of the
>
> from discord.voice\_client import VoiceClient
>
line, and it should be OK.
| 6,029
|
61,249,502
|
Today I was testing an old Python script of mine; it fetches some details from an API and writes them to a file. Up to my last test it was working perfectly fine, but today when I executed the script it ran with no errors at all, yet it neither wrote to nor created any file. The API is returning complete data (I tested it in the terminal), so I created another test.py file to check whether file write statements were working, and they were not.
I don't know what is causing the issue; it also isn't giving any error.
This is my sample TEST.PY file
```
filename = "domain.log"
with open(filename, 'a') as domain_file:
domain_file.write("HELLO\n")
domain_file.write("ANOTHER HELLO\n")
```
Thank you
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61249502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8332158/"
] |
Using `'a'` on the `open` call to open the file in append mode (as shown in your code) should work just fine.
I don't think your issue is on the Python side. The next thing to check are your directory permissions:
```
$ ls -al domain.log
-rw-r--r-- 1 taylor staff 60 Apr 16 07:57 domain.log
```
Here's my output after running your code a few times:
```
$ cat domain.log
HELLO
ANOTHER HELLO
HELLO
ANOTHER HELLO
HELLO
ANOTHER HELLO
```
|
It may be related to file permission or its directory. Use `ls -la` to see file and folder permissions.
| 6,032
|
59,440,445
|
I'm trying to scrape farefetch.com (<https://www.farfetch.com/ch/shopping/men/sale/all/items.aspx?page=1&view=180&scale=282>) with Beautifulsoup4 and I am not able to find the same components (tags or text in general) of the *parsed* text (dumped to soup.html) as in the browser in the dev tools view (when searching for matching strings with CTRL + F).
There is nothing wrong with my code, but regardless, here it is:
```
#!/usr/bin/python
# imports
import bs4
import requests
from bs4 import BeautifulSoup as soup
# parse website
url = 'https://www.farfetch.com/ch/shopping/men/sale/all/items.aspx?page=1&view=180&scale=282'
response = requests.get(url)
page_html = response.text
page_soup = soup(page_html, "html.parser")
# write parsed soup to file
with open("soup.html", "a") as dumpfile:
dumpfile.write(str(page_soup))
```
When I drag the soup.html file into the browser, all the content loads as it should (like at the real URL). I assume there is some kind of protection against parsing? I tried to send a request header which tells the web server on the other side that I am requesting this from a real browser, but it didn't work either.
1. Has anyone encountered something similar before?
2. Is there a way to get the REAL html as shown in the browser?
When I search the wanted content in the browser it (obviously) shows up...
[](https://i.stack.imgur.com/sFrRw.png)
Here is the parsed HTML saved as "soup.html". The content I am looking for cannot be found, regardless of *how* I search (CTRL+F) or which bs4 function I use (`find_all()`, `find()`, or whatever).
[](https://i.stack.imgur.com/twqcz.png)
|
2019/12/21
|
[
"https://Stackoverflow.com/questions/59440445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8868950/"
] |
Based on your comment, here is an example how you could extract some information from products that are on discount:
```
import requests
from bs4 import BeautifulSoup
url = "https://www.farfetch.com/ch/shopping/men/sale/all/items.aspx?page=1&view=180&scale=282"
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for product in soup.select('[data-test="productCard"]:has([data-test="discountPercentage"])'):
link = 'https://www.farfetch.com' + product.select_one('a[itemprop="itemListElement"][href]')['href']
brand = product.select_one('[data-test="productDesignerName"]').get_text(strip=True)
desc = product.select_one('[data-test="productDescription"]').get_text(strip=True)
init_price = product.select_one('[data-test="initialPrice"]').get_text(strip=True)
price = product.select_one('[data-test="price"]').get_text(strip=True)
images = [i['content'] for i in product.select('meta[itemprop="image"]')]
print('Link :', link)
print('Brand :', brand)
print('Description :', desc)
print('Initial price :', init_price)
print('Price :', price)
print('Images :', images)
print('-' * 80)
```
Prints:
```
Link : https://www.farfetch.com/ch/shopping/men/dashiel-brahmann-printed-button-up-shirt-item-14100332.aspx?storeid=9359
Brand : Dashiel Brahmann
Description : printed button up shirt
Initial price : CHF 438
Price : CHF 219
Images : ['https://cdn-images.farfetch-contents.com/14/10/03/32/14100332_22273147_300.jpg', 'https://cdn-images.farfetch-contents.com/14/10/03/32/14100332_22273157_300.jpg']
--------------------------------------------------------------------------------
Link : https://www.farfetch.com/ch/shopping/men/dashiel-brahmann-corduroy-t-shirt-item-14100309.aspx?storeid=9359
Brand : Dashiel Brahmann
Description : corduroy T-Shirt
Initial price : CHF 259
Price : CHF 156
Images : ['https://cdn-images.farfetch-contents.com/14/10/03/09/14100309_21985600_300.jpg', 'https://cdn-images.farfetch-contents.com/14/10/03/09/14100309_21985606_300.jpg']
--------------------------------------------------------------------------------
... and so on.
```
|
The following helped me:
instead of the following code
```
page_soup = soup(page_html, "html.parser")
```
use
```
page_soup = soup(page_html, "html")
```
| 6,033
|
50,967,265
|
Please advise how to convert the following using Python
from:
```
2010-01-04 00:00:00
```
to:
```
2010-04-01 00:00:00
```
I have tried
```
df.Month = pd.to_datetime(df.Month, format('%Y/%m/%d'))
```
but it didn't work.
Thanks in advance
|
2018/06/21
|
[
"https://Stackoverflow.com/questions/50967265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/405818/"
] |
Use `.dt.strftime("%Y-%d-%m")`
**Ex:**
```
import pandas as pd
df = pd.DataFrame({"Date": ["2010-01-04 00:00:00"]})
print( pd.to_datetime(df["Date"]).dt.strftime("%Y-%d-%m") )
```
**Output:**
```
0 2010-04-01
Name: Date, dtype: object
```
|
try the following using datetime parser and returning it in a defined format:
```
from datetime import datetime
old_date_string='2010-01-04 00:00:00'
dt=datetime.strptime(old_date_string, '%Y-%m-%d %H:%M:%S')
new_date_string=dt.strftime('%Y-%d-%m %H:%M:%S')
```
However, when you want to work with the date I would suggest using the datetime object
| 6,034
|
25,326,649
|
I would like to know if there is a faster and more "pythonic" way of doing the following, e.g. using some built in methods.
Given a pandas DataFrame or numpy array of floats, if the value is equal or smaller than 0.5 I need to calculate the reciprocal value and multiply with -1 and replace the old value with the newly calculated one.
"Transform" is probably a bad choice of words, please tell me if you have a better/more accurate description.
Thank you for your help and support!!
**Data:**
```
import numpy as np
import pandas as pd
dicti = {"A" : np.arange(0.0, 3, 0.1),
"B" : np.arange(0, 30, 1),
"C" : list("ELVISLIVES")*3}
df = pd.DataFrame(dicti)
```
**my function:**
```
def transform_colname(df, colname):
series = df[colname]
newval_list = []
for val in series:
if val <= 0.5:
newval = (1/val)*-1
newval_list.append(newval)
else:
newval_list.append(val)
df[colname] = newval_list
return df
```
**function call:**
```
transform_colname(df, colname="A")
```
**--> I'm summing up the results here, since comments wouldn't allow me to post code.**
**Thank you all for your fast and great answers!!**
using ipython "%timeit" with "real" data:
**my function:**
10 loops, best of 3: 24.1 ms per loop
**from jojo:**
```
def transform_colname_v2(df, colname):
series = df[colname]
df[colname] = np.where(series <= 0.5, 1/series*-1, series)
return df
```
100 loops, best of 3: 2.76 ms per loop
**from FooBar:**
```
def transform_colname_v3(df, colname):
df.loc[df[colname] <= 0.5, colname] = - 1 / df[colname][df[colname] <= 0.5]
return df
```
100 loops, best of 3: 3.32 ms per loop
**from dmvianna:**
```
def transform_colname_v4(df, colname):
df[colname] = df[colname].where(df[colname] <= 0.5, (1/df[colname])*-1)
return df
```
100 loops, best of 3: 3.7 ms per loop
Please tell/show me if you would implement your code in a different way!
One final QUESTION: (answered)
How could "FooBar" and "dmvianna" 's versions be made "generic"? I mean, I had to write the name of the column into the function (since using it as a variable didn't work). Please explain this last point!
--> thanks jojo, ".loc" isn't the right way, but very simple df[colname] is sufficient. changed the functions above to be more "generic". (also changed ">" to be "<=", and updated timing)
Thank you very much!!
|
2014/08/15
|
[
"https://Stackoverflow.com/questions/25326649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2539824/"
] |
If we are talking about **arrays**:
```
import numpy as np
a = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], dtype=np.float)
print 1 / a[a <= 0.5] * (-1)
```
This will, however, only return the values smaller than or equal to `0.5`.
Alternatively use `np.where`:
```
import numpy as np
a = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], dtype=np.float)
print np.where(a <= 0.5, 1 / a * (-1), a)
```
Talking about `pandas` **DataFrame**:
As in **@dmvianna**'s answer (so give some credit to him ;) ), adapting it to `pd.DataFrame`:
```
df.a = df.a.where(df.a > 0.5, (1 / df.a) * (-1))
```
|
The typical trick is to write a general mathematical operation to apply to the whole column, but then use indicators to select rows for which we actually apply it:
```
df.loc[df.A < 0.5, 'A'] = - 1 / df.A[df.A < 0.5]
In[13]: df
Out[13]:
A B C
0 -inf 0 E
1 -10.000000 1 L
2 -5.000000 2 V
3 -3.333333 3 I
4 -2.500000 4 S
5 0.500000 5 L
6 0.600000 6 I
7 0.700000 7 V
8 0.800000 8 E
9 0.900000 9 S
10 1.000000 10 E
11 1.100000 11 L
12 1.200000 12 V
13 1.300000 13 I
14 1.400000 14 S
15 1.500000 15 L
16 1.600000 16 I
17 1.700000 17 V
18 1.800000 18 E
19 1.900000 19 S
20 2.000000 20 E
21 2.100000 21 L
22 2.200000 22 V
23 2.300000 23 I
24 2.400000 24 S
25 2.500000 25 L
26 2.600000 26 I
27 2.700000 27 V
28 2.800000 28 E
29 2.900000 29 S
```
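To the follow-up question about making these versions generic: the column name can simply be passed as a parameter and used with boolean indexing; a sketch using the question's `<= 0.5` condition (function name is illustrative):

```python
import pandas as pd

def transform_column(df, colname):
    # boolean mask selects the rows to transform in the named column
    mask = df[colname] <= 0.5
    df.loc[mask, colname] = -1.0 / df.loc[mask, colname]
    return df

df = pd.DataFrame({"A": [0.5, 2.0], "B": [0, 1]})
print(transform_column(df, "A")["A"].tolist())  # [-2.0, 2.0]
```

Since `colname` is only ever used as a key, this works for any numeric column without hard-coding `'A'`.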
| 6,035
|
21,444,951
|
I had an app that was working properly with old verions of wxpython
Now with wxpython 3.0, when trying to run the app, I get the following error
```
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\_controls.py", line 6523, in __init__
_controls_.DatePickerCtrl_swiginit(self,_controls_.new_DatePickerCtrl(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "strcmp(setlocale(LC_ALL, NULL), "C") == 0" failed at ..\..\src\common\intl.cpp(1449) in wxLocale::GetInfo(): You probably called setlocale() directly instead of using wxLocale and now there is a mismatch between C/C++ and Windows locale.
Things are going to break, please only change locale by creating wxLocale objects to avoid this!
```
the error comes from this line
```
File "C:\Users\hadi\Dropbox\Projects\Python\dialysis\profile.py", line 159, in __init__
style=wx.DP_DROPDOWN)
```
Help is much appreciated
|
2014/01/29
|
[
"https://Stackoverflow.com/questions/21444951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/433261/"
] |
I know it's been a while since this question was asked, but I just had the same issue and thought I'd add my solution in case someone else finds this thread. Basically what's happening is that the locale of your script is somehow conflicting with the locale of the machine, although I'm not sure how or why. Maybe someone else with more specific knowledge on this can fill that in. Try manually setting the locale using the wxPython object wx.Locale:
`locale = wx.Locale(wx.LANGUAGE_ENGLISH)`
However, make sure that you assign the output to a non-local variable. As soon as the variable goes out of scope, the Locale object is destructed. So if it's in a class:
`class MyApp(wx.App):
...
def OnInit(self):
self.locale = wx.Locale(wx.LANGUAGE_ENGLISH)
...`
|
I've just faced the same kind of issue.
It seems we need to set the locale before using the wx.App :
```
import locale
locale.setlocale(locale.LC_ALL, 'C')
```
Two links helped me to solve this issue :
* Solution found in PHP : <https://github.com/wxphp/wxphp/issues/108>
* How to do the same in Python : [How to set Python default locale on Windows?](https://stackoverflow.com/questions/11234843/how-to-set-python-default-locale-on-windows)
My original error message :
```
..\..\src\common\intl.cpp(1449): assert "strcmp( setlocale(LC_ALL, NULL), "C")==0" failed in wxLocale::GetInfo() ...
```
| 6,038
|
11,908,725
|
```
#!/bin/python
import os
pipe=os.popen("ls /etc -alR| grep \"^[-l]\"|wc -l") #Expr1
a=int(pipe.read())
pipe.close()
b=sum([len(files) for root,dirs,files in os.walk("/etc")]) #Expr2
print a
print b
print "a equals to b ?", str(a==b) #False
print "Why?"
```
What is the **difference** between **Expr1**'s function and **Expr2**'s?
I think **Expr1** gives the right answer, but not sure.
|
2012/08/10
|
[
"https://Stackoverflow.com/questions/11908725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1545784/"
] |
If you use `os.walk`, errors are silently ignored (see [this](http://docs.python.org/library/os.htm)), while `ls` prints a message for each unreadable entry; those messages add extra lines to the output that `wc -l` counts.
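To see the discrepancy without guessing, `os.walk` accepts an `onerror` callback instead of silently skipping unreadable directories; a small self-contained sketch (Python 3, using a temporary tree rather than `/etc`):

```python
import os
import tempfile

def count_entries(root):
    """Count non-directory entries under root, collecting walk errors."""
    errors = []
    total = 0
    for _dirpath, _dirnames, filenames in os.walk(root, onerror=errors.append):
        total += len(filenames)
    return total, errors

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "sub"))
    for name in ("a", "b", os.path.join("sub", "c")):
        open(os.path.join(root, name), "w").close()
    print(count_entries(root))  # (3, [])
```

On a real `/etc`, a non-empty `errors` list would explain why the `ls`-based count and the `os.walk` count diverge.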
|
On my machine, /etc is a symlink to /private/etc, so `ls /etc` has only one line of output. `ls /etc/` give the expected equivalence between `ls` and `os.walk`.
| 6,039
|
56,083,285
|
I'm trying to write a regex in Python that will either match a URL (for example <https://www.foo.com/>) or a domain that starts with "sc-domain:" but has no https prefix or path.
For example, the below entries should pass
```
https://www.foo.com/
https://www.foo.com/bar/
sc-domain:www.foo.com
```
However the below entries should fail
```
htps://www.foo.com/
https:/www.foo.com/bar/
sc-domain:www.foo.com/
sc-domain:www.foo.com/bar
scdomain:www.foo.com
```
Right now I'm working with the below:
```
^(https://*/|sc-domain:^[^/]*$)
```
This almost works, but still allows submissions like sc-domain:www.foo.com/ to go through. Specifically, the `^[^/]*$` part doesn't capture that a '/' should not pass.
|
2019/05/10
|
[
"https://Stackoverflow.com/questions/56083285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3117494/"
] |
```
^((?:https://\S+)|(?:sc-domain:[^/\s]+))$
```
You can try this.
See demo.
<https://regex101.com/r/xXSayK/2>
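The pattern can also be checked against the question's pass/fail examples directly; a quick verification sketch:

```python
import re

# the pattern from above, run against the question's examples
pattern = re.compile(r"^(?:https://\S+|sc-domain:[^/\s]+)$")

should_pass = ["https://www.foo.com/", "https://www.foo.com/bar/",
               "sc-domain:www.foo.com"]
should_fail = ["htps://www.foo.com/", "https:/www.foo.com/bar/",
               "sc-domain:www.foo.com/", "sc-domain:www.foo.com/bar",
               "scdomain:www.foo.com"]

print(all(pattern.match(s) for s in should_pass))      # True
print(not any(pattern.match(s) for s in should_fail))  # True
```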
|
You can use this regex,
```
^(?:https?://www\.foo\.com(?:/\S*)*|sc-domain:www\.foo\.com)$
```
**Explanation:**
* `^` - Start of line
* `(?:` - Start of non-group for alternation
* `https?://www\.foo\.com(?:/\S*)*` - This matches a URL starting with http:// or https:// followed by www.foo.com and further optionally followed by path using
* `|` - alternation for strings starting with sc-domain:
* `sc-domain:www\.foo\.com` - This part starts matching with sc-domain: followed by www.foo.com and further does not allow any file path
* `)$` - Close of non-grouping pattern and end of string.
**[Regex Demo](https://regex101.com/r/PxmPum/1)**
Also, a little not sure whether you wanted to allow any random domain, but in case you want to allow, you can use this regex,
```
^(?:https?://(?:\w+\.)+\w+(?:/\S*)*|sc-domain:(?:\w+\.)+\w+)$
```
**[Regex Demo allowing any domain](https://regex101.com/r/PxmPum/2/)**
| 6,042
|
65,391,704
|
I am working with Jupyter Notebook, writing some python code using numpy library.
For some reason, The output of arrays (as well as lists and strings) are displyed from right to left.
[](https://i.stack.imgur.com/eGhQ2.png)
|
2020/12/21
|
[
"https://Stackoverflow.com/questions/65391704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14864624/"
] |
Is your system set up for Hebrew? Note that the `:[4] In` is on the right as well. That may trigger array output to be right-to-left.
From [this comment on github](https://github.com/ipython/ipython/issues/10980):
>
> Press Ctrl-Shift-F to bring up the command palette. Search for 'rtl'
> and select 'toggle rtl layout'. It should switch around.
>
>
> If the first language selected in your browser is Arabic or Hebrew, it
> currently selects RTL by default. CCing @samarsultan in case that
> needs refining.
>
>
>
|
Thanks.
Now it works fine.
My browser was set to hebrew and by changing to english it fixed the problem.
| 6,044
|
28,079,035
|
OS: CentOS 6.6
Python 2.7
So, I've (re)installed Canopy after it suddenly stopped working after an abrupt shutdown. It worked fine immediately after the install (I installed as my default Python). But after one reboot, when I try to open it with /root/Canopy/canopy (the icon under applications no longer works, either), I get the following error:
```
(Canopy 64bit) [xxuser@xxlinux ~]$ /root/Canopy/canopy
Traceback (most recent call last):
  File "/home/xxuser/qiime_software/sphinx-1.0.4-release/lib/python2.7/site-packages/site.py", line 73, in <module>
    __boot()
  File "/home/xxuser/qiime_software/sphinx-1.0.4-release/lib/python2.7/site-packages/site.py", line 2, in __boot
    import sys, imp, os, os.path
ImportError: No module named path
```
I found this link: [Python - os.path doesn't exist: AttributeError: 'module' object has no attribute 'path'](https://stackoverflow.com/questions/21640794/python-os-path-doesnt-exist-attributeerror-module-object-has-no-attribute), but both of my os.py and os.pyc were 250 and 700 bytes, respectively. There was another file called site.py which was 0 bytes and site.pyc was about 100 bytes. What are these files? And would deleting them hurt anything (which is what they did)? And why is this happening after reboot? (using reboot command).
I also found this: <https://groups.google.com/forum/#!topic/spyderlib/hKB15JYyLqM> , which could be relevant. I've updated my python path before with sys.path.append('/..')
My guess is that for some reason os.path isn't in sys.path? and \_\_boot can't find it? But I'm new to Python and Linux and want to know what I'm doing before I go modifying any boot files, paths, etc.
Thanks in advance.
**More information** (saw that I'm supposed to update new info in an edit to original question. New to this.)
From one of the comments:
This is what I got:
```
>>> import os.path
>>> import posixpath
>>> os.path
<module 'posixpath' from '/home/xxuser/qiime_software/python-2.7.3-release/lib/python2.7/posixpath.pyc'>
>>> posixpath
<module 'posixpath' from '/home/xxuser/qiime_software/python-2.7.3-release/lib/python2.7/posixpath.pyc'>
```
Looks like os.path is there.
Could this have to do with a permissions error? I have it installed to /root/Canopy/canopy and I found this: docs.python.org/2/library/os.html#module-os (section 15.1.4). Does that make sense?
I'm also not sure if the following is related, but it might possibly. I can no longer seem to update my path with sys.path.append('/file/path/here'). It works until I close the terminal, then I have to re-append the next time I want to call a module from the new directory. Are sys.path and os.path related in any way?
|
2015/01/21
|
[
"https://Stackoverflow.com/questions/28079035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4480411/"
] |
It turns out that I was onto something with my last comment.
I'd downloaded a bunch of biology modules that depend on python, and so many of them came with their own install. When I added the modules to ~/.bashrc, my bash began calling them in advance of my original CentOS install. Resetting ~/.bashrc and restarting (for some reason source ~/.bashrc didn't work) eliminated all of the extra stuff I'd added to my $PATH and Canopy began working again. I'm going to go through and remove the extra installations of python and hopefully the issue will be behind me. Thanks to everyone who posted answers, especially A.J., because that's what got me thinking about .bashrc .
Edit: With some better understanding, this was all because of using python in a virtual environment. Canopy was resetting my path every time I opened it. I'm using a self-installed virtual environment now and have configured my path.
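For anyone hitting a similar PATH-shadowing problem, a quick diagnostic run in the suspect shell shows which interpreter and standard library actually win; a small sketch:

```python
import os
import sys

# which interpreter is running, and where its standard library lives;
# a stray entry from another install shows up here immediately
print(sys.executable)
print(os.__file__)
for entry in sys.path[:5]:
    print(entry)
```

If `os.__file__` points into an unexpected tree (as it did here, into the qiime bundle), the shell's `$PATH` or `$PYTHONPATH` is the place to look.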
|
Try seeing if you have `posixpath` by typing `import posixpath`:
```
>>> import os.path
>>> os.path
<module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc'>
>>> ^D
bash-3.2$ python
>>> import posixpath
>>> posixpath
<module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc'>
>>>
```
| 6,045
|
17,703,956
|
I am using a Hash in Ruby to check whether a certain word is among the `pairs` keys and, if so, replace it. I originally wrote the code in Python and want to convert it into Ruby, which I am not familiar with. Here is the Ruby code I wrote.
```
import sys
pairs = {'butter' => 'flies', 'cheese' => 'wheel', 'milk'=> 'expensive'}
for line in sys.stdin:
line_words = line.split(" ")
for word in line_words:
if word in pairs
line = line.gsub!(word, pairs[word])
puts line
```
It shows the following error
```
syntax error, unexpected kIN, expecting kTHEN or ':' or '\n' or ';'
if word in pairs
^
```
While below is the original python script which is right:
```
import sys
pairs = dict()
pairs = {'butter': 'flies', 'cheese': 'wheel', 'milk': 'expensive'}
for line in sys.stdin:
line = line.strip()
line_words = line.split(" ")
for word in line_words:
if word in pairs:
line = line.replace(word ,pairs[word])
print line
```
Is it because of "import sys" or "indentation"?
|
2013/07/17
|
[
"https://Stackoverflow.com/questions/17703956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2592038/"
] |
Try this:
```
pairs = {'butter' => 'flies', 'cheese' => 'wheel', 'milk'=> 'expensive'}
line = ARGV.join(' ').split(' ').map do |word|
pairs.include?(word) ? pairs[word] : word
end.join(" ")
puts line
```
This will loop over each item passed to the script and return the word or the replacement word, joined by a space.
|
`for` is generally not used in Ruby, as it's got some unusual scoping.
Here's how I would write it:
```
pairs = { "butter" => "flies", "cheese" => "wheel", "milk" => "expensive" }
until $stdin.eof?
  line = $stdin.gets
  pairs.each do |from, to|
    line = line.gsub(from, to)
  end
  puts line
end
```
`import` doesn't exist in Ruby, so that shouldn't be there. You also have to "close" each block with `end` in Ruby, indentation alone isn't enough (indentation doesn't mean anything to Ruby, although you should still keep it for readability).
| 6,047
|
7,958,213
|
So I am trying to put the result of a query in a string. Let's say row by row (I don't need all the fields by the way), but that's not the point. I am using python against a sqlite db.
the problem is that when some of the fields are null, python will write None instead of "" or some blank neutral thing.
example:
```
t = "%s %s %s %s" % (field[1],field[2],field[3],field[4])
```
If field[3] is null for instance, t will be something like
```
"string1 string2 None string4"
```
instead of
```
"string1 string2 string4"
```
yes I would need to remove also the double space in case. I cannot just replace "None" with "" because some string might contain itself "None" since it is a common word. Of course I don't have only 4 field, they are a lot, and I am not trying to import every field of the row, only specific ones. I need a fast and easy way to fix this behavior. I cannot manually check if each field is None, that's insane. I cannot use
```
str.strip(field[i])
```
because when the field is None I get an error.
what could be a good approach?
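One compact approach (a sketch, one of several possibilities) is to drop the `None` fields before joining, which avoids both the literal `None` and the double spaces:

```python
fields = ("string1", "string2", None, "string4")

# keep only the non-None fields, then join once: no "None", no double spaces
t = " ".join(str(f) for f in fields if f is not None)
print(t)  # string1 string2 string4
```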
|
2011/10/31
|
[
"https://Stackoverflow.com/questions/7958213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/918420/"
] |
Inheriting from *object* automatically brings the *type* metaclass along with it. This overrides your module level *\_\_metaclass\_\_* specification.
If the metaclass is specified at the class level, then *object* won't override it:
```
def metaclass(future_class_name, future_class_parents, future_class_attrs):
print "module.__metaclass__"
future_class_attrs["bar"]="bar"
return type(future_class_name, future_class_parents, future_class_attrs)
class Foo(object):
__metaclass__ = metaclass
def __init__(self):
print 'Foo.__init__'
f=Foo()
```
See [http://docs.python.org/reference/datamodel.html?highlight=**metaclass**#customizing-class-creation](http://docs.python.org/reference/datamodel.html?highlight=__metaclass__#customizing-class-creation)
|
The specification [specifies the order in which Python will look for a metaclass](http://docs.python.org/reference/datamodel.html?highlight=__metaclass__#customizing-class-creation):
>
> The appropriate metaclass is determined by the following precedence
> rules:
>
>
> * If `dict['__metaclass__']` exists, it is used.
> * Otherwise, if there is at
> least one base class, its metaclass is used (this looks for a
> `__class__` attribute first and if not found, uses its type).
> * Otherwise, if a global variable named `__metaclass__` exists, it is used.
> * Otherwise, the old-style, classic metaclass (`types.ClassType`) is used.
>
>
>
You will see from the above that having a base class *at all* (whatever the base class is, even if it does not ultimately inherit from `object`) pre-empts the module-level `__metaclass__`.
| 6,048
|
17,682,571
|
This is the command that I am using. I have followed the steps in <https://developers.google.com/appengine/docs/python/tools/uploadingdata>. When I use the same command for the same application that I have hosted on the web, the command works and I can see the data in the datastore. But the same command is not working for my local copy of the application. The error I am getting is:
>
> HTTPError: HTTP Error 404: Not Found
>
> [ERROR ] Authentication Failed: Incorrect credentials or unsupported authentication type (e.g. OpenId).
>
>
>
But I am not really using any credentials to host it locally. Please help.
```
./appcfg.py upload_data --application=say_hello --config_file=bulkloader.yaml --filename=output.csv --kind=Dashboard --url=http://hostname:8080/_ah/remote_api
```
|
2013/07/16
|
[
"https://Stackoverflow.com/questions/17682571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2582075/"
] |
If your parameters are right but the authentication is failing, pass in the -oauth2 flag:
appcfg.py --oauth2 update app.yaml
Then the rest of your appcfg.py should authenticate. If it still doesn't work your appid or url is probably off.
|
if you are using mac, you should have administration privileges on your mac. if not, put sudo on the beginning of the command
| 6,049
|
22,726,553
|
Trying to iterate through a number string in python and print the product of the first 5 numbers,then the second 5, then the third 5, etc etc. Unfortunately, I just keep getting the product of the first five digits over and over. Eventually I'll append them to a list. Why is my code stuck?
edit: Original number is an integer so I have to make it a string
```
def product_of_digits(number):
d= str(number)
for integer in d:
s = 0
k = []
while s < (len(d)):
print (int(d[s])*int(d[s+1])*int(d[s+2])*int(d[s+3])*int(d[s+4]))
s += 1
print (product_of_digits(a))
```
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22726553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3462587/"
] |
There are a few problems with your code:
1) Your `s+=1` indentation is incorrect
2) It should be `s+=5` instead (assuming you want products of 1-5, 6-10, 11-15 and so on otherwise s+=1 is fine)
```
def product_of_digits(number):
    d = str(number)
    s = 0
    while s <= len(d) - 5:
        print (int(d[s])*int(d[s+1])*int(d[s+2])*int(d[s+3])*int(d[s+4]))
        s += 5  # see point 2

print (product_of_digits(124345565534))
```
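If the goal is ultimately a list of the group products, the same idea can be written more compactly; a sketch (the function name is illustrative) using `functools.reduce`:

```python
from functools import reduce
from operator import mul

def products_of_digit_groups(number, size=5):
    # product of each non-overlapping group of `size` digits;
    # trailing digits that don't fill a full group are ignored
    d = str(number)
    return [reduce(mul, (int(c) for c in d[i:i + size]))
            for i in range(0, len(d) - size + 1, size)]

print(products_of_digit_groups(1234512345))  # [120, 120]
```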
|
`numpy.product([int(i) for i in str(s)])`
where `s` is the number.
| 6,052
|
35,539,657
|
Environment
===========
* Raspberry Pi 2
* raspbian-jessie-lite
* Windows 8.1
* PuTTY 0.66 (SSH)
Issue
=====
Can't get cron to execute a python script with sudo. The script deals with GPIO input so it should be called with sudo. The program is supposed to save temperature and humidity to files but `cat temp.txt` and `cat humid.txt` gave me empty strings.
crontab
=======
`sudo crontab -e`
```
* * * * * python /home/dixhom/Adafruit_Python_DHT/examples/temphumid.py 1>>/tmp/cronoutput.log 2>>/tmp/cronerror.log
```
python script
=============
```
#!/usr/bin/python
import sys
import Adafruit_DHT
import datetime
# Adafruit_DHT.DHT22 : device name
# 4 : pin number
humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)
if humidity is not None:
f = open("humid.txt","w")
str = '{0}, {1}'.format(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"), humidity)
f.write(str)
else:
print 'Failed to get reading. Try again!'
sys.exit(1)
if temperature is not None:
f = open("temp.txt","w")
str = '{0}, {1}'.format(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"), temperature)
f.write(str)
else:
print 'Failed to get reading. Try again!'
sys.exit(1)
```
cronerror.log and cronoutput.log
================================
(empty)
What I tried
============
* `sudo crontab -e`
* `/usr/bin/python` in cron
* `chkconfig cron` (cron on)
* `sudo apt-get update` `sudo apt-get upgrade`
* `sudo reboot`
Any help would be appreciated. Thanks.
|
2016/02/21
|
[
"https://Stackoverflow.com/questions/35539657",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4061339/"
] |
I suggest learning how to use Swing. You will have several different classes interacting together. In fact, it is considered good practice to keep separate the code which creates and manages the GUI from the code which performs the underlying logic and data manipulation.
|
I would recommend using NetBeans to start with. From there you can easily select pre-created classes such as JFrames. Much easier to learn. You can create a GUI from there by dragging and dropping buttons and whatever you need.
Here is a YouTube tutorial on creating GUIs in NetBeans.
<https://www.youtube.com/watch?v=LFr06ZKIpSM>
If you decide not to go with NetBeans, you will have to create Swing containers within your class to make the interface.
| 6,054
|
43,327,194
|
Is there a python library or API that can use a camera to detect LED lights at know locations? The lights will be different colors.
I am interested in making an automated production test for a PCB. My board has many LEDs, and a test command makes the board turn LEDs on when some features work correctly. People may miss one of the many lights. I specify Python because it's the only high-level language I am familiar with. Most of my embedded work is in C, and C is tricky to work with at higher levels.
|
2017/04/10
|
[
"https://Stackoverflow.com/questions/43327194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3749646/"
] |
It is quite possible to solve this. As @John Percival Hackworth said, opencv is a good choice to solve this. I can give you some pointers on how to go about it.
* Take a picture of the board with LEDs, since you know the colors of LEDs, use that knowledge to filter the colors. For which I have given a code snippet.
* After filtering the colors, You can use houghcircles/blobs to locate the LEDs
* Presence of the blob means that LED is on. You can then make decisions based on this knowledge.
Opencv has python bindings, so you can program this with python.
Snippet to do color filtering.
```
import cv2
import numpy as np
fn = 'image_or_videoframe'
# OpenCV reads image with BGR format
img = cv2.imread(fn)
# Convert to HSV format
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Choose the values based on the color on the point/mark
lower_red = np.array([0, 50, 50])
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(img_hsv, lower_red, upper_red)
# Bitwise-AND mask and original image
masked_red = cv2.bitwise_and(img, img, mask=mask)
```
in this case, the red is filtered in the image and `masked_red` would contain only the red pixels in the image. You can change `lower_red` and `upper_red` to different values depending on the color you want to filter.
Good luck :)
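Once a color mask exists, the per-LED on/off decision can be a simple pixel count in each LED's known region; a numpy-only sketch (the mask here is synthetic, standing in for `cv2.inRange` output, and the ROI coordinates are assumptions):

```python
import numpy as np

def led_is_on(mask, roi, min_pixels=20):
    """True if enough masked (non-zero) pixels fall inside roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return int(np.count_nonzero(mask[y0:y1, x0:x1])) >= min_pixels

# synthetic mask standing in for cv2.inRange output: one lit 10x10 patch
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:20, 10:20] = 255
print(led_is_on(mask, (5, 25, 5, 25)))    # True
print(led_is_on(mask, (50, 70, 50, 70)))  # False
```

With the board in a fixture, the ROI for each LED is fixed, so the whole test reduces to one mask plus one `led_is_on` call per LED.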
|
[OpenCV](https://github.com/skvark/opencv-python%20'OpenCV') is a possible choice that would let you segue to another language later if needed.
| 6,059
|
44,469,620
|
Following is my code creating an HTTP or FTP connection depending on user input. The if and elif conditions somehow evaluate to FALSE all the time. Entering 1 and 0 both prints 'Sorry, wrong answer'.
```
domain = 'ftp.freebsd.org'
path = '/pub/FreeBSD/'
protocol = input('Connecting to {}. Which Protocol to use? (0-http, 1-ftp): '.format(domain))
print(protocol)
input()
if protocol == 0:
is_secure = bool(input('Should we use secure connection? (1-yes, 0-no): '))
factory = HTTPFactory(is_secure)
elif protocol == 1:
is_secure = False
factory = FTPFactory(is_secure)
else:
print('Sorry, wrong answer')
import sys
sys.exit(1)
connector = Connector(factory)
try:
content = connector.read(domain, path)
except URLError as e:
print('Can not access resource with this method')
else:
print(connector.parse(content))
```
**Output:**
```
Connecting to ftp.freebsd.org. Which Protocol to use? (0-http, 1-ftp): 0
0
Sorry, wrong answer
$ python abstractfactory.py
Connecting to ftp.freebsd.org. Which Protocol to use? (0-http, 1-ftp): http
http
Sorry, wrong answer
$ python abstractfactory.py
Connecting to ftp.freebsd.org. Which Protocol to use? (0-http, 1-ftp): 1
1
Sorry, wrong answer
$
```
Please advise. What am I doing wrong here?
Thanks.
|
2017/06/10
|
[
"https://Stackoverflow.com/questions/44469620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5574481/"
] |
Input in Python 3, which it looks like you are using, comes in as a string. You would need to cast it via `int()` (although this needs to be done with caution and exception handling in the event of bad input) in order to compare it to an integer.
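In that spirit, a defensive cast could look like this (a sketch; the function name and return convention are illustrative):

```python
def parse_choice(raw, valid=(0, 1)):
    """Cast raw user input to int, returning None for anything unexpected."""
    try:
        value = int(raw)
    except ValueError:
        return None
    return value if value in valid else None

print(parse_choice("0"))     # 0
print(parse_choice("http"))  # None
```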
|
Python 3's `input()` returns a (Unicode) string, so you need to explicitly convert it to an integer before comparing it with 0:
```
if int(protocol) == 0:
    # Do something
elif int(protocol) == 1:
    # Do something
```
| 6,060
|
42,566,496
|
I have a text file with this format:
>
>
> ```
> 1 1 (101): 3.7e+08 1.2e+02 5.1234
> 2 1 (101): 3.5e+08 8.2e+02 6.2222
> 2 2 (101): 1.7e+08 2.2e+02 7.4567
> 3 1 (101): 8.7e+08 3.2e+02 9.2123
>
> ```
>
>
I would like to get it into the following format:
>
>
> ```
> 1 3.7e+08 1.2e+02 5.1234
> 2 3.5e+08 8.2e+02 6.2222
> 2 1.7e+08 2.2e+02 7.4567
> 3 8.7e+08 3.2e+02 9.2123
>
> ```
>
>
I'm essentially trying to delete the second and third element/variable from each line. Any suggestions? I'm new to python and unsure how approach this. Any help would be appreciated. Thanks
The code is a work in progress.
So far, it reads file.txt and removes the first three lines and writes it out as newfile.txt. Here it is:
```
import sys
try:
f=open('file.txt', 'r')
lines = f.readlines()
except IOError:
print('File file.txt does not exist')
sys.exit(1)
for line in lines:
sys.stdout.write(line)
f.close()
# Deleting the first three lines
del lines[0:3]
# Deleting the second and third element of every line
f=open('newfile.txt','w')
f.writelines(lines)
print(lines)
f.close()
```
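For the element-deletion step itself, splitting each line on whitespace and rejoining the wanted pieces is usually enough; a sketch on one sample line:

```python
line = "1  1 (101):  3.7e+08  1.2e+02   5.1234"

# split on any whitespace, drop the 2nd and 3rd fields, rejoin
parts = line.split()
cleaned = " ".join([parts[0]] + parts[3:])
print(cleaned)  # 1 3.7e+08 1.2e+02 5.1234
```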
|
2017/03/02
|
[
"https://Stackoverflow.com/questions/42566496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7649922/"
] |
The complete solution was to use this code using ApplicationData.Current.LocalFolder.Path because we are not allowed to write files anywhere else then relative path to the application:
```
public static async Task<bool> TryDownloadFileAtPathAsync()
{
var createdFileId = await UserSnippets.CreateFileAsync(Guid.NewGuid().ToString(), STORY_DATA_IDENTIFIER);
var fileContent = await UserSnippets.GetCurrentUserFileAsync("/Documents","Imp.zip") ;
using (var fileStream = await Task.Run(() => System.IO.File.Create(ApplicationData.Current.LocalFolder.Path + "\\Imp.zip")))
{
fileContent.Seek(0, SeekOrigin.Begin);
fileContent.CopyTo(fileStream);
}
return fileContent != null;
}
```
For detailed information to Application's relative path one can lookup this page: [ApplicationData Class](https://learn.microsoft.com/en-us/uwp/api/windows.storage.applicationdata)
Thanks @RasmusW
|
If `await DownloadFileAsync(item.Id)` retrieves the file in the resulting stream, then it is up to the caller of your method `GetCurrentUserFileAsync` to write the stream contents somewhere.
That can be done using this code
```
var fileContent = await GetCurrentUserFileAsync(onedrivepath, onedrivefilename);
using (var fileStream = File.Create("C:\\localpath\\localfilename"))
{
fileContent.Seek(0, SeekOrigin.Begin);
fileContent.CopyTo(fileStream);
}
```
| 6,061
|
65,135,010
|
this select works in Workbench and Python:
```
#!/usr/bin/python3
import mysql.connector
mydb = mysql.connector.connect(
host="127.0.0.1",
user="root",
password="xxxxxxxx",
database="gnucash"
)
sqlcursor = mydb.cursor()
sqlcursor.execute("""
SELECT MAX(transactions.num) AS nr , MAX(transactions.enter_date) AS enter, MAX(transactions.post_date) AS post, MAX(transactions.description) AS "beschr", SUM(splits.value_num) AS "Euro"
FROM gnucash.splits
INNER JOIN gnucash.transactions ON gnucash.splits.tx_guid = gnucash.transactions.guid
INNER JOIN gnucash.accounts ON gnucash.splits.account_guid = gnucash.accounts.guid
WHERE transactions.guid IN (
SELECT transactions.guid FROM gnucash.splits
INNER JOIN gnucash.transactions ON gnucash.splits.tx_guid = gnucash.transactions.guid
INNER JOIN gnucash.accounts ON gnucash.splits.account_guid = gnucash.accounts.guid
WHERE accounts.guid LIKE "f2dd1f1e92cf41f687187d1e73fbc2c9")
AND accounts.guid NOT LIKE "f2dd1f1e92cf41f687187d1e73fbc2c9"
GROUP BY transactions.enter_date
ORDER BY post DESC, nr DESC, enter DESC
LIMIT 30;
""")
rohumsaetze = sqlcursor.fetchall()
print(rohumsaetze)
```
this select works in Workbench, **but not** in Python:
```
SELECT MAX(transactions.num) AS nr , MAX(transactions.enter_date) AS enter, MAX(transactions.post_date) AS post, MAX(transactions.description) AS "beschr", SUM(splits.value_num) AS "Euro"
FROM gnucash.splits
INNER JOIN gnucash.transactions ON gnucash.splits.tx_guid = gnucash.transactions.guid
INNER JOIN gnucash.accounts ON gnucash.splits.account_guid = gnucash.accounts.guid
WHERE transactions.guid IN (
SELECT transactions.guid FROM gnucash.splits
INNER JOIN gnucash.transactions ON gnucash.splits.tx_guid = gnucash.transactions.guid
INNER JOIN gnucash.accounts ON gnucash.splits.account_guid = gnucash.accounts.guid
WHERE accounts.guid LIKE "1df66c60180c4f3cb5cc080c1e7d4834")
AND accounts.guid NOT LIKE "1df66c60180c4f3cb5cc080c1e7d4834"
GROUP BY transactions.enter_date
ORDER BY post DESC, nr DESC, enter DESC
LIMIT 30;
```
Only the account GUID (the "%Sparda%" vs. "%Commerz%" account) is different. It works fine in Workbench, but I need Python. I have tried running the statement as root, as described [here](https://stackoverflow.com/questions/53751358/stored-procedure-works-on-mysql-workbench-but-not-in-python), but without success. And above all, there is no error. How can I find the problem?
I have already rewritten the code three times. Maybe someone has an idea how to write it differently, so that it works?
Thank you
|
2020/12/03
|
[
"https://Stackoverflow.com/questions/65135010",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14516823/"
] |
In order to keep the columns when using agg you can use 'first' as given below:
Code:
```
import pandas as pd
rawdata = {'portfolio': ['port1', 'port2', 'port1', 'port2'],
'portfolioname': ['portfolioone', 'portfoliotwo', 'portfolioone', 'portfoliotwo'],
'date': ['04/12/2020', '04/12/2020', '04/12/2020', '04/12/2020'],
'code': ['ABC', 'ABC', 'XYZ', 'XYZ'],
'quantity': [2, 3, 10, 11],
'price': [1.5, 1.5, 0.2, 0.2],
'value': [3, 4.5, 2, 2.2],
'weight': [.6, .67, .4, .328]}
df1 = pd.DataFrame(rawdata)
print(df1, '\n')
finisheddata = {'portfolio': ['port3', 'port3'],
'portfolioname': ['portfoliothree', 'portfoliothree'],
'date': ['04/12/2020', '04/12/2020'],
'code': ['ABC', 'XYZ'],
'quantity': [5, 21],
'price': [1.5, 0.2],
'value': [7.5, 4.2],
'weight': [.64, .36]}
df2 = pd.DataFrame(finisheddata) # Desired
print(df2, '\n')
df3 = df1.groupby(['code']).agg({'portfolio' : 'first', 'portfolioname' : 'first', 'date' : 'first', 'quantity': 'sum', 'price' : 'first', 'weight': 'mean'}).reset_index()
df3['value'] = df3.price * df3.quantity
df3 = df3[['portfolio', 'portfolioname', 'date', 'code', 'quantity', 'price', 'value', 'weight']]
df3['portfolio'] = df3['portfolioname'] = 'combined'
print(df3)
```
Output:
```
portfolio portfolioname date code quantity price value weight
0 port1 portfolioone 04/12/2020 ABC 2 1.5 3.0 0.600
1 port2 portfoliotwo 04/12/2020 ABC 3 1.5 4.5 0.670
2 port1 portfolioone 04/12/2020 XYZ 10 0.2 2.0 0.400
3 port2 portfoliotwo 04/12/2020 XYZ 11 0.2 2.2 0.328
portfolio portfolioname date code quantity price value weight
0 port3 portfoliothree 04/12/2020 ABC 5 1.5 7.5 0.64
1 port3 portfoliothree 04/12/2020 XYZ 21 0.2 4.2 0.36
portfolio portfolioname date code quantity price value weight
0 combined combined 04/12/2020 ABC 5 1.5 7.5 0.635
1 combined combined 04/12/2020 XYZ 21 0.2 4.2 0.364
```
|
This is a touch inelegant, but it shows how to use groupby and build up the rows of data one by one. Once the data is built, move it into a DataFrame. After most of the output data is assembled, use it to work out the weight column in the DataFrame.
```
data = []
for cname, dfsub in df1.groupby('code'):
port = 'portx'
portname = 'portnew'
code = cname
quant = dfsub.quantity.sum()
date = dfsub.date.iloc[0]
price = dfsub.price.iloc[0]
value = quant * price
data.append([port,portname,date,code,quant,price,value])
dfout = pd.DataFrame(data, columns=['portfolio', 'portfolioname', 'date', 'code', 'quantity', 'price', 'value'])
sumval = dfout.value.sum()
dfout['weight'] = dfout['value'] / sumval
```
the output looks like
```
portfolio portfolioname date code quantity price value weight
0 portx portnew 04/12/2020 ABC 5 1.5 7.5 0.641026
1 portx portnew 04/12/2020 XYZ 21 0.2 4.2 0.358974
```
If you want to reduce the number of digits in weight, use `dfout.round({'weight': 3})` to round it to 3 decimal places.
| 6,062
|
58,612,306
|
I'm setting up an autoclicker in Python 3.8 and I need win32api for GetAsyncKeyState but it always gives me this error:
```
>>> import win32api
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: DLL load failed while importing win32api: The specified module could not be found.
```
I'm on Windows 10 Home 64x. I've already tried
```
pip install pypiwin32
```
And it successfully installs but nothing changes. I tried uninstalling and re-installing python as well. I also tried installing 'django' in the same way and it actually works when I `import django`, so I think it's a win32api issue only.
```
>>> import win32api
```
I expect the output to be none, but the actual output is always that error ^^
|
2019/10/29
|
[
"https://Stackoverflow.com/questions/58612306",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12292714/"
] |
The answer is in the Jupyter Notebook GitHub issue below.
<https://github.com/jupyter/notebook/issues/4980>
`conda install pywin32` worked for me. I am using conda distribution and my virtual env is using Python 3.8
|
Hi, I solved this question as follows:
1. Check whether these files exist in the directory C:\Windows\System32: pythoncom37.dll and pywintypes37.dll (or pythoncom36.dll and pywintypes36.dll; the number is the Python version).
2. If the files exist, delete them.
This should solve the issue.
| 6,064
|
46,279,333
|
```
@echo off
start c:\Python27\python.exe C:\Users\anupam.soni\Desktop\WIND_ACTUAL\tool.py
PAUSE
```
My tool.py script works correctly in the **PyCharm IDE**, but this .bat file is not working.
**Note: the file path and the Python path are correct.**
Is there any other option to run a Python script independently?
|
2017/09/18
|
[
"https://Stackoverflow.com/questions/46279333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8527475/"
] |
I assume that in your Adapter, you hold an array of objects that represents the items you want to be displayed.
Add a property to this object named for example `ButtonVisible` and set the property when you press the button.
Complete sample adapter follows. This displays a list of items with a button that, when pressed, is made non-visible. The visibility is remembered no matter how many items in the list or how much you scroll.
```
public class TestAdapter extends RecyclerView.Adapter<TestAdapter.VH> {

    public static class MyData {
        public boolean ButtonVisible = true;
        public String Text;

        public MyData(String text) {
            Text = text;
        }
    }

    public List<MyData> items = new ArrayList<>();

    public TestAdapter() {
        this.items.add(new MyData("Item 1"));
        this.items.add(new MyData("Item 2"));
        this.items.add(new MyData("Item 3"));
    }

    @Override
    public TestAdapter.VH onCreateViewHolder(ViewGroup parent, int viewType) {
        return new VH((
            LayoutInflater.from(parent.getContext())
                .inflate(R.layout.test_layout, parent, false))
        );
    }

    @Override
    public void onBindViewHolder(TestAdapter.VH holder, final int position) {
        final MyData itm = items.get(position);
        holder.button.setVisibility(itm.ButtonVisible ? View.VISIBLE : View.GONE);
        holder.text.setText(itm.Text);
        holder.button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                itm.ButtonVisible = false;
                notifyItemChanged(position);
            }
        });
    }

    @Override
    public int getItemCount() {
        return items.size();
    }

    public class VH extends RecyclerView.ViewHolder {
        Button button;
        TextView text;

        public VH(View itemView) {
            super(itemView);
            button = itemView.findViewById(R.id.toggle);
            text = itemView.findViewById(R.id.text1);
        }
    }
}
```
test\_layout.xml
```
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <Button
        android:id="@+id/toggle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"/>

    <TextView
        android:id="@+id/text1"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"/>
</LinearLayout>
```
|
Set an array of boolean variables associated with each item.
```
@Override
public void onBindViewHolder(final MyViewHolder holder, final int position) {
    if (visibilityList.get(position)) {
        holder.button.setVisibility(View.VISIBLE);
    } else {
        holder.button.setVisibility(View.GONE);
    }
    holder.message.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            if (visibilityList.get(position)) {
                visibilityList.set(position, false);
                holder.button.setVisibility(View.GONE);
            } else {
                visibilityList.set(position, true);
                holder.button.setVisibility(View.VISIBLE);
            }
        }
    });
}
```
**Note:** `visibilityList` is the list holding one boolean per item, with each value initialized to a default (either true or false, as your requirement dictates).
| 6,074
|
61,550,294
|
For my basic, rudimentary Django CMS, in my effort to add a toggle feature to publish / unpublish a blog post (I’ve called my app ‘essays’ and the class object inside my models is `is_published`), I’ve encountered an OperationalError when trying to use the Admin Dashboard to add essay content. I’m expecting to be able to switch a tick box to publish/unpublish but now I can’t even access the Dashboard.
Here is part of the traceback from my Django server:
```
File "/home/<user>/dev/projects/python/2018-and-2020/<projectdir>/venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/home/<user>/dev/projects/python/2018-and-2020/<projectdir>/venv/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py", line 383, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such column: essays_essayarticle.is_published
```
The debug traceback reinforces the OperationalError above:
```
OperationalError at /admin/essays/essayarticle/
no such column: essays_essayarticle.is_published
Request Method: GET
Request URL: http://<DN>.ngrok.io/admin/essays/essayarticle/
Django Version:2.2.11
Exception Type: OperationalError
Exception Value:
no such column: essays_essayarticle.is_published
Exception Location:
/home/<user>/dev/projects/python/2018-and-2020/<projectdir>/venv/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py in execute, line 383
Python Executable:
/home/<user>/dev/projects/python/2018-and-2020/<projectdir>/venv/bin/python
Fri, 1 May 2020 19:37:31 +0000
```
Exception Value indicates ‘no such column’ which is a reference to my db. I’m running SQlite3 for test purposes. Prior to this OperationalError, I was troubleshooting issues with a DummyNode and "isinstance django.db.migrations.exceptions.NodeNotFoundError: Migration". The previous solution I arrived at was to delete my migrations in two of my apps. This SO answer is the particular solution that resolved the issue: <https://stackoverflow.com/a/56195727/6095646>
As per **@Laila Buabbas**'s suggestion, to clarify, I deleted my migrations directory and invoked: `python manage.py makemigrations app_name` for each of my two apps. So that previous SQLite issue has been resolved. But I can’t figure out this new OperationalError, described above.
Is the issue with my views.py / models.py (copied below)? I can't narrow it down any further than that. Here is part of the class defined in my app's models.py (with the new, potentially problematic line added at the end):
```
class EssayArticle(models.Model):
title = models.CharField(max_length=256)
web_address = models.CharField(max_length=256)
web_address_slug = models.SlugField(blank=True, max_length=512)
content = models.TextField(blank=True)
is_published = models.BooleanField(default=True)
```
Here are the relevant lines from the applicable function in my views.py:
```
def article(request, web_address):
try:
article = EssayArticle.objects.get(
web_address_slug=web_address) # .filter(is_published=True)
except EssayArticle.DoesNotExist:
raise Http404('Article does not exist!')
context = {
'article': article,
}
return render(request, 'essays/article.html', context)
```
Commenting in or out the `.filter(is_published=True)` doesn't stop or change the debug error.
My local dev box is Manjaro Linux with Django v2.2.11. I’m running Python v3.8.2.
Here are some of the resources I have already leveraged:
* [Improve INSERT-per-second performance of SQLite](https://stackoverflow.com/questions/1711631/improve-insert-per-second-performance-of-sqlite)
* [Django OperationalError: no such column: on pythonanywhere](https://stackoverflow.com/questions/53863318/django-operationalerror-no-such-column-on-pythonanywhere)
* [Django 1.8 OperationalError: no such column:](https://stackoverflow.com/questions/31842149/django-1-8-operationalerror-no-such-column)
* [Django tutorial01 OperationalError: no such column: polls\_choice.question\_text\_id](https://stackoverflow.com/questions/20451706/django-tutorial01-operationalerror-no-such-column-polls-choice-question-text-i)
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61550294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6095646/"
] |
There is no `is_published` column in the `essays_essayarticle` table of your database. Try adding the column by creating a new migration, then check the table to confirm the column was actually added.
The error isn't in your view; rather, it is in the query.
|
I'm making the assumption that you deleted the migrations folder. If so, when you run makemigrations and migrate, write the name of your app at the end.
For example:
```
python manage.py makemigrations app_name
```
| 6,076
|
4,527,495
|
I have a strange issue with python 2.6.5. If I call
```
p = subprocess.Popen(["ifup eth0"], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
```
with the interface eth0 being down, the Python program hangs: `p.communicate()` takes a minute or longer to finish. If the interface was already up, the program runs smoothly. I tested "ifup eth0" manually from the command line for both cases and it's lightning fast.
If you have any idea what the problem might be, I would appreciate it very much.
Thanks in advance
**EDIT:**
Based on the answers, I tried the following things:
```
p = subprocess.Popen(["ifup", "eth0"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
```
If the interface was already up, the script runs smoothly. However, if the interface was down, Python hangs again. I also tried:
```
p = subprocess.Popen(["ifup", "eth0"], shell=False)
out, err = p.communicate()
```
And EVERYTHING runs perfectly fast. Therefore it might indeed be related to a deadlock, as pointed out by funktku. However, the Python documentation also says [python ref](http://docs.python.org/library/subprocess.html#subprocess.Popen.wait):
>
> Warning
>
>
> This will deadlock when using
> stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output
> to a pipe such that it blocks waiting
> for the OS pipe buffer to accept more
> data. Use communicate() to avoid that.
>
>
>
Therefore there shouldn't be a deadlock. Hmm... Here is the detailed output when I run the programs on the command line:
1 Case, interface eth0 already up:
```
ifup eth0
Interface eth0 already configured
```
2 Case, interface down before:
```
ifup eth0
ssh stop/waiting
ssh start/running
```
So the ifup command generates two lines of output if the interface was down beforehand, and one line of output otherwise. This is the only difference I noticed. But I doubt this is the cause of the problem, since "ls -ahl" produces many more lines of output and runs very well.
I also tried playing around with the bufsize argument, setting it to some large value like 4096, but with no success.
Do you have any ideas what might be the cause of this? Or is this perhaps a bug in Python's pipe handling or in the ifup command itself? Do I really have to use the old os.popen(cmd).read()?
**EDIT2:**
os.popen(cmd).read() suffers from the same problem. Any idea how I can test the pipe behaviour of ifup on the command line?
I appreciate every hint, thanks in advance
|
2010/12/24
|
[
"https://Stackoverflow.com/questions/4527495",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373361/"
] |
You should check out the [warning](http://docs.python.org/library/subprocess.html#subprocess.call) under the `subprocess.call` method. It might be the reason for your problem.
**Warning**
>
> Like Popen.wait(), this will
> deadlock when using stdout=PIPE and/or
> stderr=PIPE and the child process
> generates enough output to a pipe such
> that it blocks waiting for the OS pipe
> buffer to accept more data.
>
>
>
|
```
p = subprocess.Popen(["ifup", "eth0"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
```
Set `shell=False`, you don't need it.
Try running this code, it should work. Notice how two arguments are separate elements in the list.
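For a quick sanity check of the list form, here is a minimal runnable sketch that uses `echo` in place of `ifup` (an assumption, chosen only because `ifup` needs root and a real network interface):

```python
import subprocess

# Each argument is its own list element; no shell is involved.
p = subprocess.Popen(["echo", "hello"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # drains both pipes, so no deadlock
```

`out` then holds the command's stdout as bytes.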
| 6,079
|
49,320,399
|
I want to call a REST api and get some json data in response in python.
```
curl https://analysis.lastline.com/analysis/get_completed -X POST -F "key=2AAAD5A21DN0TBDFZZ66" -F "api_token=IwoAGFa344c277Z2" -F "after=2016-03-11 20:00:00"
```
I know of python [request](http://docs.python-requests.org/en/latest/), but how can I pass `key`, `api_token` and `after`? What is `-F` flag and how to use it in python requests?
|
2018/03/16
|
[
"https://Stackoverflow.com/questions/49320399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3855999/"
] |
Just include the parameter `data` to the .post function.
```
requests.post('https://analysis.lastline.com/analysis/get_completed', data = {'key':'2AAAD5A21DN0TBDFZZ66', 'api_token':'IwoAGFa344c277Z2', 'after':'2016-03-11 20:00:00'})
```
|
-F means make the POST as form data (curl encodes it as multipart/form-data).
So in requests it would be:
```
>>> r = requests.post('http://httpbin.org/post', data = {'key':'value'})
```
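Strictly speaking, curl's `-F` produces a `multipart/form-data` body, while `data=` sends `application/x-www-form-urlencoded`; many servers accept either. If the API insists on multipart, requests builds it through the `files=` parameter, where a `None` filename turns an entry into a plain form field. A sketch that only prepares the request locally (nothing is sent):

```python
import requests

fields = {
    "key": (None, "2AAAD5A21DN0TBDFZZ66"),
    "api_token": (None, "IwoAGFa344c277Z2"),
    "after": (None, "2016-03-11 20:00:00"),
}
req = requests.Request(
    "POST",
    "https://analysis.lastline.com/analysis/get_completed",
    files=fields,
)
prepared = req.prepare()  # builds headers and body without sending
```

`prepared.headers["Content-Type"]` starts with `multipart/form-data`, matching what curl sends for `-F`.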
| 6,081
|
59,845,836
|
Please help me.
What is the problem with my code?
My code should write name, mean(grades) to the output.
```
import csv
from statistics import mean
with open('C:/Users/sina/Desktop/python pt/jalase19.csv' , 'r') as fo:
reader = csv.reader(fo)
for row in reader :
name = row[0]
grades = list()
for grade in row[1:]:
grades.append(float(grade))
with open('C:/Users/sina/Desktop/python pt/jalase20.csv' , 'w') as fw:
fw.write("name , mean(grades)\n")
```
|
2020/01/21
|
[
"https://Stackoverflow.com/questions/59845836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11828203/"
] |
**You didn't indent after the "with" statement.**
As described [here](https://docs.python.org/2.5/whatsnew/pep-343.html), you have to indent after a "with" statement.
Your code should look like that:
```
import csv
from statistics import mean

with open('C:/Users/sina/Desktop/python pt/jalase19.csv' , 'r') as fo:
    reader = csv.reader(fo)
    for row in reader :
        name = row[0]
        grades = list()
        for grade in row[1:]:
            grades.append(float(grade))

with open('C:/Users/sina/Desktop/python pt/jalase20.csv' , 'w') as fw:
    fw.write("name , mean(grades)\n")
```
Also, I think you meant **fw** instead of **f2**.
|
When opening your files you are missing indentation. See how the error points you to line 4? When opening a file using the [context manager](https://book.pythontips.com/en/latest/context_managers.html) and anytime you are using a control statement (if, else, for, etc.) the next line must be indented.
```
import csv
from statistics import mean

with open('C:/Users/sina/Desktop/python pt/jalase19.csv', 'r') as fo:
    reader = csv.reader(fo)
    for row in reader:
        name = row[0]
        grades = list()
        for grade in row[1:]:
            grades.append(float(grade))

with open('C:/Users/sina/Desktop/python pt/jalase20.csv' , 'w') as f2:
    f2.write("name , mean(grades)\n")
```
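Beyond the indentation fix, here is a self-contained sketch of the task itself, computing each name's mean and writing `name,mean` rows. The in-memory sample stands in for the real CSV paths, which are an assumption here:

```python
import csv
import io
from statistics import mean

# Stand-in for jalase19.csv: each row is a name followed by grades.
raw = "ali,18,17.5,20\nsara,19,16,15\n"

rows_out = []
for row in csv.reader(io.StringIO(raw)):
    name = row[0]
    grades = [float(g) for g in row[1:]]
    rows_out.append([name, mean(grades)])

# Stand-in for jalase20.csv: write one "name,mean" row per student.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name", "mean"])  # header
writer.writerows(rows_out)
```

Replace the `io.StringIO` objects with `open(...)` calls on your real paths to read and write the actual files.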
| 6,083
|
16,894,490
|
I have some problems with this code... it sends not the entire image but only some bytes. Is there someone who can help me? I want to send all the images I find in a folder. Thank you.
CLIENT
======
```
import socket
import sys
import os
s = socket.socket()
s.connect(("localhost",9999)) #IP address, port
sb = 'c:\\python27\\invia'
os.chdir(sb) #path
dirs = os.listdir(sb) #list of files
print dirs
for file in dirs:
    f = open(file, "rb") #read image
    l = f.read()
    s.send(file) #send the name of the file
    st = os.stat(sb+'\\'+file).st_size
    print str(st)
    s.send(str(st)) #send the size of the file
    s.send(l) #send the data of the file
    f.close()
s.close()
```
SERVER
======
```
import socket
import sys
import os
s = socket.socket()
s.bind(("localhost",9999))
s.listen(4) #number of clients that can connect
sc, address = s.accept()
print address
sb = 'c:\\python27\\ricevi'
os.chdir(sb)
while True:
    fln = sc.recv(5) #read the name of the file
    print fln
    f = open(fln,'wb') #create the new file
    size = sc.recv(7) #receive the size of the file
    #size=size[:7]
    print size
    strng = sc.recv(int(size)) #receive the data of the file
    #if strng:
    f.write(strng) #write the file
    f.close()
sc.close()
s.close()
```
|
2013/06/03
|
[
"https://Stackoverflow.com/questions/16894490",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2445800/"
] |
To transfer a sequence of files over a single socket, you need some way of delineating each file. In effect, you need to run a small protocol on top of the socket which allows to you know the metadata for each file such as its size and name, and of course the image data.
It appears you're attempting to do this, however both sender and receiver must agree on a protocol.
You have the following in your sender:
```
s.send(file) #send the name of the file
st = os.stat(sb+'\\'+file).st_size
s.send(str(st)) #send the size of the file
s.send(l)
```
How is the receiver to know how long the file name is? Or, how will the receiver know where the end of the file name is, and where the size of the file starts? You could imagine the receiver obtaining a string like `foobar.txt8somedata` and having to infer that the name of the file is `foobar.txt`, the file is 8 bytes long containing the data `somedata`.
What you need to do is separate the data with some kind of delimiter, such as `\n`, to indicate the boundary for each piece of metadata.
You could envisage a packet structure as `<filename>\n<file_size>\n<file_contents>`. An example stream of data from the transmitter may then look like this:
```
foobar.txt\n8\nsomedata
```
The receiver would then decode the incoming stream, looking for `\n` in the input to determine each field's value such as the file name and size.
Another approach would be to allocate fixed length strings for the file name and size, followed by the file's data.
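As an illustration of the fixed-length-header idea, here is a minimal framing sketch (Python 3, using `struct`; the 2-byte and 8-byte field widths are arbitrary choices for this example, not part of any standard):

```python
import struct

def frame(name, data):
    """Prefix the payload with a 2-byte name length and an 8-byte data size."""
    name_bytes = name.encode("utf-8")
    return (struct.pack(">H", len(name_bytes)) + name_bytes
            + struct.pack(">Q", len(data)) + data)

def unframe(buf):
    """Inverse of frame(); returns (name, data, bytes_consumed)."""
    (name_len,) = struct.unpack_from(">H", buf, 0)
    name = buf[2:2 + name_len].decode("utf-8")
    (size,) = struct.unpack_from(">Q", buf, 2 + name_len)
    start = 2 + name_len + 8
    return name, buf[start:start + size], start + size

packet = frame("foobar.txt", b"somedata")
name, data, consumed = unframe(packet)
```

Because the receiver always knows how many bytes come next, frames can be concatenated back to back on the same socket without ambiguity.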
|
The parameter to [`socket.recv`](http://docs.python.org/2/library/socket#socket.socket.recv) only specifies the maximum buffer size for a single receive; it doesn't mean that exactly that many bytes will be read.
So if you write:
```
strng = sc.recv(int(size))
```
you won't necessarily get all the content, specially if `size` is rather large.
You need to read from the socket in a loop until you have actually read `size` bytes to make it work.
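A sketch of such a read loop; the local `socket.socketpair()` below merely stands in for the real client/server connection:

```python
import socket

def recv_exactly(sock, size):
    """Keep calling recv() until exactly `size` bytes have arrived."""
    chunks = []
    remaining = size
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:  # peer closed before sending everything
            raise ConnectionError("socket closed early")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

a, b = socket.socketpair()  # stand-in for a real connected pair
a.sendall(b"x" * 10000)
payload = recv_exactly(b, 10000)
```

The same helper can be used for the name, the size field, and the image data, as long as the sender and receiver agree on how many bytes each part occupies.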
| 6,084
|
27,012,337
|
I'm trying to use ConfigParser to read a .cfg file for my pygame game. I can't get it to function for some reason. The code looks like this:
```
import ConfigParser

def main():
    config = ConfigParser.ConfigParser()
    config.read('options.cfg')
    print config.sections()
    Screen_width = config.getint('graphics','width')
    Screen_height = config.getint('graphics','height')
```
The main method in this file is called in the launcher for the game. I've tested that out and that works perfectly. When I run this code, I get this error:
```
Traceback (most recent call last):
File "Scripts\Launcher.py", line 71, in <module>
Game.main()
File "C:\Users\astro_000\Desktop\Mini-Golf\Scripts\Game.py", line 8, in main
Screen_width = config.getint('graphics','width')
File "c:\python27\lib\ConfigParser.py", line 359, in getint
return self._get(section, int, option)
File "c:\python27\lib\ConfigParser.py", line 356, in _get
return conv(self.get(section, option))
File "c:\python27\lib\ConfigParser.py", line 607, in get
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'graphics'
```
The thing is, there is a section 'graphics'.
The file I'm trying to read from looks like this:
```
[graphics]
height = 600
width = 800
```
I have verified that it is, in fact called options.cfg.
config.sections() returns only this: "[]"
I've had this work before using this same code, but it wont work now. Any help would be greatly appreciated.
|
2014/11/19
|
[
"https://Stackoverflow.com/questions/27012337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3033405/"
] |
Your config file probably was not found. The parser will just produce an empty list of sections in that case. You should wrap your code with a check for the file:
```
from ConfigParser import SafeConfigParser
import os

def main():
    filename = "options.cfg"
    if os.path.isfile(filename):
        parser = SafeConfigParser()
        parser.read(filename)
        print(parser.sections())
        screen_width = parser.getint('graphics','width')
        screen_height = parser.getint('graphics','height')
    else:
        print("Config file not found")

if __name__=="__main__":
    main()
```
|
I always use the `SafeConfigParser`:
```
from ConfigParser import SafeConfigParser

def main():
    parser = SafeConfigParser()
    parser.read('options.cfg')
    print(parser.sections())
    screen_width = parser.getint('graphics','width')
    screen_height = parser.getint('graphics','height')
```
Also make sure there is a file called `options.cfg` and specify the full path if needed, as I already commented. Parser will fail silently if there is no file found.
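One way to make the silent failure loud is to check the return value of `read()`, which is the list of files that were actually parsed. A sketch (Python 3 spelling, where the module is lower-case `configparser`; the temporary file only keeps the example self-contained):

```python
import os
import tempfile
from configparser import ConfigParser  # named 'ConfigParser' module in Python 2

# Create a sample options.cfg so the example can run anywhere.
with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as f:
    f.write("[graphics]\nheight = 600\nwidth = 800\n")
    path = f.name

parser = ConfigParser()
loaded = parser.read(path)  # returns the list of files it managed to parse
if not loaded:
    raise RuntimeError("config file not found: %s" % path)

width = parser.getint("graphics", "width")
height = parser.getint("graphics", "height")
os.unlink(path)  # clean up the temporary file
```

If `loaded` comes back empty, the path is wrong or unreadable, which is exactly the situation that produces the NoSectionError later on.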
| 6,085
|
54,833,385
|
I have the following code (using dnspython), which works - but it uses globals which I'm not keen on. I was thinking that I could use a recursive function but there is no obvious end.
Does anyone have any ideas on how this could be improved??
```
import dns.resolver

dns_resolver = dns.resolver.Resolver()
dns_resolver.nameservers = ['1.1.1.1', '1.0.0.1']
resolve_count = 0

def get_spf_count(domain_name):
    global resolve_count
    for answer in dns_resolver.query(domain_name, 'TXT'):
        spf = answer.to_text() if 'v=spf1' in answer.to_text() else None
        if spf:
            spf_records = [
                record
                for record in spf.replace('" "', '').replace('"', '').split()
                if record not in ['v=spf1', '~all', '-all', '+all', '?all']
            ]
            for record in spf_records:
                if 'include:' in record:
                    check_domain = record.split(':')[1]
                    get_spf_count(check_domain)
                    resolve_count += 1
                elif record.startswith(('a:', 'mx:', 'ptr:', 'exists:')):
                    resolve_count += 1

get_spf_count('google.com')
print(resolve_count)
```
|
2019/02/22
|
[
"https://Stackoverflow.com/questions/54833385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3595388/"
] |
Here is a slightly cleaned-up recursive function with a properly local variable.
```
import dns.resolver

def get_spf_count(domain_name, dns_resolver=None):
    if dns_resolver is None:
        dns_resolver = dns.resolver.Resolver()
        dns_resolver.nameservers = ['1.1.1.1', '1.0.0.1']
    resolve_count = 0
    for answer in dns_resolver.query(domain_name, 'TXT'):
        spf = answer.to_text() if 'v=spf1' in answer.to_text() else None
        if spf:
            spf_records = [
                record
                for record in spf.replace('" "', '').replace('"', '').split()
                if record not in ['v=spf1', '~all', '-all', '+all', '?all']
            ]
            for record in spf_records:
                if 'include:' in record:
                    check_domain = record.split(':')[1]
                    resolve_count += 1 + get_spf_count(check_domain, dns_resolver)
                elif record.startswith(('a:', 'mx:', 'ptr:', 'exists:')):
                    resolve_count += 1
    return resolve_count

print(get_spf_count('google.com'))
```
Notice how everything the function needs is local within the function, including the `dns.resolver.Resolver()` object (and how you could pass in a shared resolver object if you wanted to).
|
Why not pass `resolve_count` in as a variable, and have the function return the updated value?
```
def get_spf_count(domain_name, resolve_count):
    for answer in dns_resolver.query(domain_name, 'TXT'):
        spf = answer.to_text() if 'v=spf1' in answer.to_text() else None
        if spf:
            spf_records = [
                record
                for record in spf.replace('" "', '').replace('"', '').split()
                if record not in ['v=spf1', '~all', '-all', '+all', '?all']
            ]
            for record in spf_records:
                if 'include:' in record:
                    check_domain = record.split(':')[1]
                    # capture the returned value, since ints are immutable
                    resolve_count = get_spf_count(check_domain, resolve_count)
                    resolve_count += 1
                elif record.startswith(('a:', 'mx:', 'ptr:', 'exists:')):
                    resolve_count += 1
    return resolve_count
resolve_count = get_spf_count('google.com', 0)
print(resolve_count)
```
| 6,088
|
37,144,913
|
I am getting the above error when running an iteration using a FOR loop to build multiple models. The first two models, which have similar data sets, build fine. While building the third model I get this error. The error is thrown when I call sm.Logit() from the statsmodels package of Python:
```
y = y_mort.convert_objects(convert_numeric=True)
#Building Logistic model_LSVC
print("Shape of y:", y.shape, " &&Shape of X_selected_lsvc:", X.shape)
print("y values:",y.head())
logit = sm.Logit(y,X,missing='drop')
```
The error that appears:
```
Shape of y: (9018,) &&Shape of X_selected_lsvc: (9018, 59)
y values: 0 0
1 1
2 0
3 0
4 0
Name: mort, dtype: int64
ValueError Traceback (most recent call last)
<ipython-input-8-fec746e2ee99> in <module>()
160 print("Shape of y:", y.shape, " &&Shape of X_selected_lsvc:", X.shape)
161 print("y values:",y.head())
--> 162 logit = sm.Logit(y,X,missing='drop')
163 # fit the model
164 est = logit.fit(method='cg')
D:\Anaconda3\lib\site-packages\statsmodels\discrete\discrete_model.py in __init__(self, endog, exog, **kwargs)
399
400 def __init__(self, endog, exog, **kwargs):
--> 401 super(BinaryModel, self).__init__(endog, exog, **kwargs)
402 if (self.__class__.__name__ != 'MNLogit' and
403 not np.all((self.endog >= 0) & (self.endog <= 1))):
D:\Anaconda3\lib\site-packages\statsmodels\discrete\discrete_model.py in __init__(self, endog, exog, **kwargs)
152 """
153 def __init__(self, endog, exog, **kwargs):
--> 154 super(DiscreteModel, self).__init__(endog, exog, **kwargs)
155 self.raise_on_perfect_prediction = True
156
D:\Anaconda3\lib\site-packages\statsmodels\base\model.py in __init__(self, endog, exog, **kwargs)
184
185 def __init__(self, endog, exog=None, **kwargs):
--> 186 super(LikelihoodModel, self).__init__(endog, exog, **kwargs)
187 self.initialize()
188
D:\Anaconda3\lib\site-packages\statsmodels\base\model.py in __init__(self, endog, exog, **kwargs)
58 hasconst = kwargs.pop('hasconst', None)
59 self.data = self._handle_data(endog, exog, missing, hasconst,
---> 60 **kwargs)
61 self.k_constant = self.data.k_constant
62 self.exog = self.data.exog
D:\Anaconda3\lib\site-packages\statsmodels\base\model.py in _handle_data(self, endog, exog, missing, hasconst, **kwargs)
82
83 def _handle_data(self, endog, exog, missing, hasconst, **kwargs):
---> 84 data = handle_data(endog, exog, missing, hasconst, **kwargs)
85 # kwargs arrays could have changed, easier to just attach here
86 for key in kwargs:
D:\Anaconda3\lib\site-packages\statsmodels\base\data.py in handle_data(endog, exog, missing, hasconst, **kwargs)
564 klass = handle_data_class_factory(endog, exog)
565 return klass(endog, exog=exog, missing=missing, hasconst=hasconst,
--> 566 **kwargs)
D:\Anaconda3\lib\site-packages\statsmodels\base\data.py in __init__(self, endog, exog, missing, hasconst, **kwargs)
74 # this has side-effects, attaches k_constant and const_idx
75 self._handle_constant(hasconst)
---> 76 self._check_integrity()
77 self._cache = resettable_cache()
78
D:\Anaconda3\lib\site-packages\statsmodels\base\data.py in _check_integrity(self)
450 (hasattr(endog, 'index') and hasattr(exog, 'index')) and
451 not self.orig_endog.index.equals(self.orig_exog.index)):
--> 452 raise ValueError("The indices for endog and exog are not aligned")
453 super(PandasData, self)._check_integrity()
454
ValueError: The indices for endog and exog are not aligned
```
The y and X matrices have shapes (9018,) and (9018, 59), so there is no apparent mismatch between the dependent and independent variables. Any idea?
|
2016/05/10
|
[
"https://Stackoverflow.com/questions/37144913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5286020/"
] |
This error may also come due to wrong usage of API
**Correct**:
```py
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, test_size=0.3, random_state=100
)
```
**Incorrect**:
```py
X_train, y_train, X_test, y_test = train_test_split(
X, y, train_size=0.7, test_size=0.3, random_state=100
)
```
|
It may be due to different indices in `x` and `y`. This can happen when we initially removed some rows from the dataframe and then performed some operations on `x` after separating `x` and `y`: the index of `y` still has the gaps from the original dataframe while `x` ends up with a continuous index. It's best to do `dataframe.reset_index(drop=True)` before separating `x` and `y`.
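A small illustration of the index mismatch and the `reset_index` fix, on synthetic data:

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [0, 1, 0, 1]})
df = df[df["x"] != 2]            # dropping a row leaves index gaps: 0, 2, 3
assert list(df.index) == [0, 2, 3]

df = df.reset_index(drop=True)   # renumber 0..n-1 before splitting X and y
X = df[["x"]]
y = df["y"]
```

After the reset, any later transformation that rebuilds `X` with a fresh index can no longer disagree with `y`'s index.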
| 6,089
|
51,020,212
|
I am trying to download a package to call **sc2** and when I write `pip install sc2` into cmd prompt, I receive the error:
>
> Command "python setup.py egg\_info" failed with error code 1 in c:\users\user\appdata\local\temp\pip-install-q3ixb0\websockets.
>
>
>
Any help?
|
2018/06/25
|
[
"https://Stackoverflow.com/questions/51020212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9988239/"
] |
**easy\_install** worked for me:
`easy_install sc2`
|
Maybe you are behind a proxy, or not connected? Have you tried to ping any URL?
| 6,099
|
63,708,795
|
I'm receiving a string like the sentence **"Mr,Pavol,Bujna,has arrived"** from a server,
to my Raspberry Pi over Python sockets.
It's working well, but I need to split the sentence into separate variables.
What I have now:
`message2 = 'Mr,Pavol,Bujna,has arrived'`
What I need:
```
firstname = 'Pavol'
surname = 'Bujna'
arravingLeaving = 'has arrived'
```
How can I split the string into multiple variables when the string is comma-separated?
Raspberry Pi code.
Important line is:
`draw.text((10, 40), message2, font = font20, fill = 0)`
```
#!/usr/bin/python
# -*- coding:utf-8 -*-
import sys
import os
picdir = os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), 'pic')
libdir = os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), 'lib')
if os.path.exists(libdir):
sys.path.append(libdir)
import logging
from waveshare_epd import epd2in13d
import time
from PIL import Image,ImageDraw,ImageFont
import traceback
from socket import socket, gethostbyname, AF_INET, SOCK_DGRAM
import sys
PORT_NUMBER = 5000
SIZE = 1024
hostName = gethostbyname( '0.0.0.0' )
mySocket = socket( AF_INET, SOCK_DGRAM )
mySocket.bind( (hostName, PORT_NUMBER) )
#Set output log level
logging.basicConfig(level=logging.DEBUG)
while True:
(data,addr) = mySocket.recvfrom(SIZE)
message = str(data) #make string from data
message2 = message[2:-1] #remove first (b), second (') and last (') character
#try:
logging.info("epd2in13d Demo")
epd = epd2in13d.EPD()
logging.info("init and Clear")
epd.init()
epd.Clear(0xFF)
font15 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 15)
font20 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 20)
font24 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 24)
# Drawing on the Horizontal image
logging.info("1.Drawing on the Horizontal image...")
Himage = Image.new('1', (epd.height, epd.width), 255) # 255: clear the frame
draw = ImageDraw.Draw(Himage)
draw.text((150, 0), time.strftime('%H:%M'), font = font20, fill = 0)
draw.text((10, 40), message2, font = font20, fill = 0)
epd.display(epd.getbuffer(Himage))
time.sleep(2)
```
This code is for the ALPR system I made. Raspberry Pi has an e-ink display where it's showing who arrived.
As the display is not long enough, I need to split the sentence into multiple lines, that's why I need multiple variables to work with.
[](https://i.stack.imgur.com/VsbeE.jpg)
|
2020/09/02
|
[
"https://Stackoverflow.com/questions/63708795",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13082942/"
] |
```
message2 = 'Mr,Pavol,Bujna,has arrived'
words=message2.split(',')
firstname=words[1]
lastname=words[2]
arravingLeaving=words[3]
```
Or you could use tuple unpacking as well.
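For instance (a sketch; the variable names are just illustrative):

```python
message2 = 'Mr,Pavol,Bujna,has arrived'

# Tuple unpacking: one name per comma-separated field.
title, firstname, surname, arriving_leaving = message2.split(',')

print(firstname)         # Pavol
print(arriving_leaving)  # has arrived
```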
|
Thanks, I figured it out myself meanwhile.
Splitting message2 string to multiple variables
```
title, firstName, lastName, arravingLeaving = message2.split(",")
print(title)
print(firstName)
print(lastName)
print(arravingLeaving)
```
Whole code:
```
#!/usr/bin/python
# -*- coding:utf-8 -*-
import sys
import os
picdir = os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), 'pic')
libdir = os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), 'lib')
if os.path.exists(libdir):
sys.path.append(libdir)
import logging
from waveshare_epd import epd2in13d
import time
from PIL import Image,ImageDraw,ImageFont
import traceback
from socket import socket, gethostbyname, AF_INET, SOCK_DGRAM
import sys
PORT_NUMBER = 5000
SIZE = 1024
hostName = gethostbyname( '0.0.0.0' )
mySocket = socket( AF_INET, SOCK_DGRAM )
mySocket.bind( (hostName, PORT_NUMBER) )
#Set output log level
logging.basicConfig(level=logging.DEBUG)
while True:
(data,addr) = mySocket.recvfrom(SIZE)
message = str(data) #make string from data
message2 = message[2:-1] #remove first (b), second (') and last (') character
#Splitting message2 string to multiple variables
title, firstName, lastName, arravingLeaving = message2.split(",")
print(title)
print(firstName)
print(lastName)
print(arravingLeaving)
#try:
logging.info("epd2in13d Demo")
epd = epd2in13d.EPD()
logging.info("init and Clear")
epd.init()
epd.Clear(0xFF)
font15 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 15)
font20 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 20)
font24 = ImageFont.truetype(os.path.join(picdir, 'Font.ttc'), 24)
# Drawing on the Horizontal image
logging.info("1.Drawing on the Horizontal image...")
Himage = Image.new('1', (epd.height, epd.width), 255) # 255: clear the frame
draw = ImageDraw.Draw(Himage)
draw.text((150, 0), time.strftime('%H:%M'), font = font20, fill = 0)
draw.text((15, 0), title, font = font20, fill = 0)
draw.text((10, 25), firstName, font = font20, fill = 0)
draw.text((10, 50), lastName, font = font20, fill = 0)
draw.text((10, 75), arravingLeaving, font = font20, fill = 0)
epd.display(epd.getbuffer(Himage))
time.sleep(2)
```
[](https://i.stack.imgur.com/7sj4V.jpg)
| 6,100
|
13,256,735
|
In my application, I have one single thread that is performing very fast processing on log lines to produce a float value. There is usually only a single other thread performing slow reads on the values at intervals. Every so often, other threads can come and go and also perform once-off reads on those values.
My question is about the necessity of a mutex (in cpython), for this specific case where the data is simply the most recent data available. It is not a critical value that must be in sync with anything else (or even the other fields being written at the same time). Just simply... what the value is when it is.
That being said, I know I could easily add a lock (or a readers/writer lock) to guard the update of the value, but I wonder whether the overhead of the acquire/release in rapid succession over the course of an entire log (let's say 5000 lines on average) is worth it just to handle shared resources "appropriately".
Based on the docs on [What kinds of global value mutation are thread-safe?](http://docs.python.org/2/faq/library#what-kinds-of-global-value-mutation-are-thread-safe), these assignments should be atomic operations.
Here is a basic example of the logic:
```py
import time
from random import random, choice, randint
from threading import Thread
class DataStructure(object):
def __init__(self):
self.f_val = 0.0
self.s_val = ""
def slow_reader(data):
"""
Loop much more slowly and read values
anywhere between 1 - 5 second intervals
"""
for _ in xrange(10):
f_val = data.f_val
# don't care about sync here
s_val = data.s_val
print f_val, s_val
# in real code could be even 30 or 60 seconds
time.sleep(randint(1,3))
def fast_writer(data):
""" Update data extremely often """
for _ in xrange(20000):
f_val, s_val = do_work()
data.f_val = f_val
# don't care about sync here
data.s_val = s_val
FLOAT_SRC = [random()*100 for _ in xrange(100)]
STR_SRC = ['foo', 'bar', 'biz', 'baz']
def do_work():
time.sleep(0.001)
return choice(FLOAT_SRC), choice(STR_SRC)
if __name__ == "__main__":
data = DataStructure()
threads = [
Thread(target=slow_reader, args=(data,)),
Thread(target=fast_writer, args=(data,)),
]
for t in threads:
t.daemon=True
t.start()
for t in threads:
t.join()
```
This represents the fast log parser (actually being read via a PIPE) doing work on each line, and a slow periodic reader grabbing whatever are the current values at that moment. At any time, another once-read thread could come and go to grab those same values from the data structure.
Is this a situation where a mutex in cpython is not needed at all?
**Edit**
To clarify a bit more... I don't even need the float and string fields to be in sync from the last write. It is ok if the scheduler decides to switch contexts between the float and string reads. I'm just wondering if I even need the overhead of a lock to simply read whatever value is assigned at any moment in time.
My concern is regarding the fact that the writer is going to be looping on an extremely fast operation, locking and unlocking a lock that is often uncontended.
Effectively assume this is all I care about in the `reader`:
```
def slow_reader(data):
for _ in xrange(10):
f_val = data.f_val
print f_val
time.sleep(randint(1,3))
```
|
2012/11/06
|
[
"https://Stackoverflow.com/questions/13256735",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/496445/"
] |
Try this macro:
```
Sub ConsolidateRows()
'takes rows and consolidate one or many cells, based on one or many cells matching with above or below rows.
Dim lastRow As Long, i As Long, j As Long
Dim colMatch As Variant, colConcat As Variant
'**********PARAMETERS TO UPDATE****************
Const strMatch As String = "A" 'columns that need to match for consolidation, separated by commas
Const strConcat As String = "B" 'columns that need consolidating, separated by commas
Const strSep As String = ", " 'string that will separate the consolidated values
'*************END PARAMETERS*******************
application.ScreenUpdating = False 'disable ScreenUpdating to avoid screen flashes
colMatch = Split(strMatch, ",")
colConcat = Split(strConcat, ",")
lastRow = range("A" & Rows.Count).End(xlUp).Row 'get last row
For i = lastRow To 2 Step -1 'loop from last Row to one
For j = 0 To UBound(colMatch)
If Cells(i, colMatch(j)) <> Cells(i - 1, colMatch(j)) Then GoTo nxti
Next
For j = 0 To UBound(colConcat)
if len(Cells(i - 1, colConcat(j)))>0 then _
Cells(i - 1, colConcat(j)) = Cells(i - 1, colConcat(j)) & strSep & Cells(i, colConcat(j))
Next
Rows(i).Delete
nxti:
Next
application.ScreenUpdating = True 'reenable ScreenUpdating
End Sub
```
|
The following VBA code should work for what you are trying to do. It assumes that your email addresses are in the range A2:A50000, so you can change this to fit your needs. If you are not too familiar with VBA, under the Developer Tab in Excel 2011 Mac, there should be an icon called Visual Basic Editor. Open VB and CMD+Click on the window pane and insert a new module. Then paste in the following code:
```
Sub combineData()
Dim xCell As Range, emailRange As Range
Dim tempRow(0 To 3) As Variant, allData() As Variant
Dim recordCnt As Integer
Set emailRange = Range("A2:A11")
recordCnt = -1
'LOOP THROUGH EACH CELL AND ADD THE DATE TO AN ARRAY
For Each xCell In emailRange
'IF THE CELL IS EQUAL TO THE ONE ABOVE IT,
'ADD THE PRODUCT NUMBER SEPARATED WITH A COMMA
If xCell = xCell.Offset(-1, 0) Then
tempRow(1) = tempRow(1) & ", " & xCell.Offset(0, 1).Value
allData(recordCnt) = tempRow
Else
recordCnt = recordCnt + 1
If recordCnt = 0 Then
ReDim allData(0 To recordCnt)
Else
ReDim Preserve allData(0 To recordCnt)
End If
tempRow(0) = xCell.Value
tempRow(1) = xCell.Offset(0, 1).Value
tempRow(2) = xCell.Offset(0, 2).Value
tempRow(3) = xCell.Offset(0, 3).Value
allData(recordCnt) = tempRow
End If
Next xCell
'CREATE A NEW WORKSHEET AND DUMP IN THE CONDENSED DATA
Dim newWs As Worksheet, i As Integer, n As Integer
Set newWs = ThisWorkbook.Worksheets.Add
For i = 0 To recordCnt
For n = 0 To 3
newWs.Range("A2").Offset(i, n) = allData(i)(n)
Next n
Next i
End Sub
```
Then close VB, and click the "Macros" button under the Developer tab. Then run combineData. That should give you the result you're looking for. Let me know if you have any trouble!
| 6,101
|
53,910,919
|
I want to get the name (first line only) from the raw content below. Can you please help me? I want to get just `RAM KUMAR` from the raw text using Python.
Raw Content:
```
"RAM KUMAR\n\nMarketing and Sales Professional\n\n+91.0000000000\n\nshri.babuji@shriresume.com, shri1.babuji@shriresume.com\n\nLinkedin.com/in/ramkumar \t\t\t\t \n\n\t\t\t\n\n \t \n\nSUMMARY\n\n\n\nHighly motivated, creative and versatile IT professional with 9.2 years of experience in Java, J2SE & J2EE and related technologies as Developer, Onsite/Offshore Coordinator and Project Lead.\n\nProficiency in Java, Servlets, Struts and the latest frameworks like JSF, EJB 3.0.\n\nKnowledge of Java, JSP, Servlet, EJB, JMS, Struts and spring, Hibernate, XML, Web Services.\n\nExperience in using MVC design pattern, Java, Servlets, JSP, JavaScript, Hibernate 3.0, Web Services (SOAP and Restful), HTML, JQuery, XML, Web Logic, JBOSS 4.2.3, SQL, PL/SQL, JUnit, and Apache-Tomcat, Linux.\n\nExtensive experience in developing various web based applications using Struts framework.\n\nExpertise in relational databases like Oracle, My SQL and SQL Server.\n\nExperienced in developing Web Based applications using Web Sphere 6.0 and Oracle 9i as a back end."
```
|
2018/12/24
|
[
"https://Stackoverflow.com/questions/53910919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3239174/"
] |
No need to use regex, just simply do:
```
print(yourstring.split('\n')[0])
```
Output:
```
RAM KUMAR
```
**Edit:**
```
with open(filename,'r') as f:
print(f.read().split('\n')[0])
```
|
Use [`split`](https://www.geeksforgeeks.org/python-string-split/) to do something like this perhaps:
```
txt_content = "RAM KUMAR\n\nMarketing and Sales Professional\n\n+91.0000000000\n\nshri.babuji@shriresume.com, shri1.babuji@shriresume.com\n\nLinkedin.com/in/ramkumar \t\t\t\t \n\n\t\t\t\n\n \t \n\nSUMMARY\n\n\n\nHighly motivated, creative and versatile IT professional with 9.2 years of experience in Java, J2SE & J2EE and related technologies as Developer, Onsite/Offshore Coordinator and Project Lead.\n\nProficiency in Java, Servlets, Struts and the latest frameworks like JSF, EJB 3.0.\n\nKnowledge of Java, JSP, Servlet, EJB, JMS, Struts and spring, Hibernate, XML, Web Services.\n\nExperience in using MVC design pattern, Java, Servlets, JSP, JavaScript, Hibernate 3.0, Web Services (SOAP and Restful), HTML, JQuery, XML, Web Logic, JBOSS 4.2.3, SQL, PL/SQL, JUnit, and Apache-Tomcat, Linux.\n\nExtensive experience in developing various web based applications using Struts framework.\n\nExpertise in relational databases like Oracle, My SQL and SQL Server.\n\nExperienced in developing Web Based applications using Web Sphere 6.0 and Oracle 9i as a back end."
print(txt_content.split('\n')[0]) # 'RAM KUMAR'
```
| 6,102
|
28,465,477
|
I'm looking into how to compute as efficient as possible in python3 a dot product inside a double sum of the form:
```
import cmath
for j in range(0,N):
for k in range(0,N):
sum_p += cmath.exp(-1j * sum(a*b for a,b in zip(x, [l - m for l, m in zip(r_p[j], r_p[k])])))
```
where `r_np` is an array of several thousand triples, and `x` a constant triple. Timing for a length of `N=1000` triples is about `2.4s`. The same using numpy:
```
import numpy as np
for j in range(0,N):
for k in range(0,N):
sum_np = np.add(sum_np, np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k]))))
```
is actually slower, with a runtime of about `4.0s`. I presume this is because there is no big vectorization advantage: only the short 3-element dot product goes through numpy, and that gain is eaten up by the overhead of launching N^2 such calls in the loop.
However, a modest speedup over the first example I could gain by using plain python3 with map and mul:
```
from operator import mul
for j in range(0,N):
for k in range(0,N):
sum_p += cmath.exp(-1j * sum(map(mul,x, [l - m for l, m in zip(r_p[j], r_p[k])])))
```
with a runtime about `2.0s`
Attempts to either use an if condition to not calculate the case `j=k`, where
```
r_np[j] - r_np[k] = 0
```
and thus the dot product also becomes 0, or splitting the sum up in two to achieve the same
```
for j in range(0,N):
for k in range(j+1,N):
...
for k in range(0,N):
for j in range(k+1,N):
...
```
both made it even slower. So the whole thing scales with O(N^2), and I wonder if with some methods like sorting or other things one could get rid of the loops and make it scale with O(N log N).
The problem is that I need single-digit-second runtimes for a set of `N~6000` triples, as I have thousands of those sums to compute. Otherwise I have to try scipy's weave, numba, or pyrex, or go down the C path entirely…
Thanks in advance for any help!
**Edit:**
this is how a data sample would look like:
```
# numpy arrays
x_np = np.array([0,0,1], dtype=np.float64)
N=1000
xy = np.multiply(np.subtract(np.random.rand(N,2),0.5),8)
z = np.linspace(0,40,N).reshape(N,1)
r_np = np.hstack((xy,z))
# in python format
x = (0,0,1)
r_p = r_np.tolist()
```
|
2015/02/11
|
[
"https://Stackoverflow.com/questions/28465477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2519380/"
] |
I used this to generate test data:
```
x = (1, 2, 3)
r_p = [(i, j, k) for i in range(10) for j in range(10) for k in range(10)]
```
On my machine, this took `2.7` seconds with your algorithm.
Then I got rid of the `zip`s and `sum`:
```
for j in range(0,N):
for k in range(0,N):
s = 0
for t in range(3):
s += x[t] * (r_p[j][t] - r_p[k][t])
sum_p += cmath.exp(-1j * s)
```
This brought it down to `2.4` seconds.
Then I noted that `x` is constant so:
```
x * (p - q) = x1*p1 - x1*q1 + x2*p2 - x2*q2 - ...
```
So I changed the generation code to:
```
x = (1, 2, 3)
r_p = [(x[0] * i, x[1] * j, x[2] * k) for i in range(10) for j in range(10) for k in range(10)]
```
And the algorithm to:
```
for j in range(0,N):
for k in range(0,N):
s = 0
for t in range(3):
s += r_p[j][t] - r_p[k][t]
sum_p += cmath.exp(-1j * s)
```
Which got me to `2.0` seconds.
Then I realized we can rewrite it as:
```
for j in range(0,N):
for k in range(0,N):
sum_p += cmath.exp(-1j * (sum(r_p[j]) - sum(r_p[k])))
```
Which, surprisingly, got me to `1.1` seconds, which I can't really explain - maybe some caching going on?
Anyway, caching or not, you can precompute the sums of your triples and then you won't have to rely on the caching mechanism. I did that:
```
sums = [sum(a) for a in r_p]
sum_p = 0
N = len(r_p)
start = time.clock()
for j in range(0,N):
for k in range(0,N):
sum_p += cmath.exp(-1j * (sums[j] - sums[k]))
```
Which got me to `0.73` seconds.
I hope this is good enough!
**Update:**
Here's one around `0.01` seconds with a single for loop. It seems mathematically sound, but it's giving slightly different results, which I'm guessing is due to precision issues. I'm not sure how to fix those, but I thought I'd post it in case you can live with the precision issues or someone knows how to fix them.
However, considering I'm using fewer `exp` calls than your initial code, this may actually be the more correct version, and your initial approach the one with precision issues.
```
sums = [sum(a) for a in r_p]
e_denom = sum([cmath.exp(1j * p) for p in sums])
sum_p = 0
N = len(r_p)
start = time.clock()
for j in range(0,N):
sum_p += e_denom * cmath.exp(-1j * sums[j])
print(sum_p)
end = time.clock()
print(end - start)
```
**Update 2:**
The same, except with less multiplications and a `sum` function call:
```
sum_p = e_denom * sum([np.exp(-1j * p) for p in sums])
```
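The single-loop versions above work because the double sum factorizes: `exp(-1j*(p_j - p_k)) = exp(-1j*p_j) * exp(1j*p_k)`, so the whole sum equals `|sum_j exp(-1j*p_j)|**2`. A quick sketch checking that against the brute force (up to the floating-point differences discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = np.array([0.0, 0.0, 1.0])
r = rng.random((N, 3))

phases = r @ x  # x . r_j for every j

# O(N^2) brute force over all pairs (j, k)
brute = np.exp(-1j * (phases[:, None] - phases[None, :])).sum()

# O(N) factorization: the double sum is |sum_j exp(-1j * p_j)|^2
s = np.exp(-1j * phases).sum()
factored = s * s.conjugate()

print(np.allclose(brute, factored))  # True
```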
|
That double loop is a time killer in `numpy`. If you use vectorized array operations, the evaluation is cut to under a second.
```
In [1764]: sum_np=0
In [1765]: for j in range(0,N):
for k in range(0,N):
sum_np += np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k])))
In [1766]: sum_np
Out[1766]: (2116.3316526447466-1.0796252780664872e-11j)
In [1767]: np.exp(-1j * np.inner(x_np, (r_np[:N,None,:]-r_np[None,:N,:]))).sum((0,1))
Out[1767]: (2116.3316526447466-1.0796252780664872e-11j)
```
Timings:
```
In [1768]: timeit np.exp(-1j * np.inner(x_np, (r_np[:N,None,:]-r_np[None,:N,:]))).sum((0,1))
1 loops, best of 3: 506 ms per loop
In [1769]: %%timeit
sum_np=0
for j in range(0,N):
for k in range(0,N):
sum_np += np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k])))
1 loops, best of 3: 12.9 s per loop
```
replacing `np.inner` with `np.einsum` shaves 20% off the time
```
np.exp(-1j * np.einsum('k,ijk', x_np, r_np[:N,None,:]-r_np[None,:N,:])).sum((0,1))
```
| 6,105
|
61,354,963
|
```
>>> 1/3
0.3333333333333333
>>> 1/3+1/3+1/3
1.0
```
I can't understand why this is 1.0. Shouldn't it be `0.9999999999999999`?
So I supposed that Python automatically rounds its answer, but if that were the case, the following results can't be explained...
```
>>> 1/3+1/3+1/3+1/3+1/3+1/3
1.9999999999999998
>>> (1/3+1/3+1/3)+(1/3+1/3+1/3)
2.0
```
I thought rounding errors occurred because there is only a limited number of digits for the mantissa and the exponent (in floating-point numbers), but 0.9999999999999999 does not exceed that digit limit either.
Can somebody explain why these results came out like this?
|
2020/04/21
|
[
"https://Stackoverflow.com/questions/61354963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13376385/"
] |
This is one of the subtle points of IEEE-754 arithmetic. When you write:
```py
>>> 1/3
0.3333333333333333
```
the number you see printed is a "rounded" version of the number that is internally stored as the result of `1/3`. It's just what the Double -> String conversion in the printing process decided to show you. But you already knew that.
Now you can ask, is there a way to find out what the difference is? Yes, use the `fractions` module:
```py
>>> from fractions import Fraction
>>> Fraction(1, 3) - Fraction(1/3)
Fraction(1, 54043195528445952)
```
Ah, that's interesting. So it is slightly less than the actual value, and the difference is `1 / 54043195528445952`. This is, of course, expected.
So, what happens when you "add" two of these together. Let's see:
```py
>>> Fraction(2,3) - Fraction(1/3+1/3)
Fraction(1, 27021597764222976)
```
Again, you're close to `2/3`rds, but still not quite there. Let's do the addition one more time:
```py
>>> Fraction(1,1) - Fraction(1/3+1/3+1/3)
Fraction(0, 1)
```
Bingo! with 3 of them, the representation is exactly `1`.
Why is that? Well, in each addition you get a number that's close to what you think the answer should be, but the internal rounding causes the result to become a close-by number that's not what you had in mind. With three additions what your intuition tells you and what the internal rounding does match up.
It is important to emphasize that the addition `1/3 + 1/3 + 1/3` does *not* produce a `1`; it just produces an internal value whose closest representation as an IEEE-754 double-precision floating point value is `1`. This is a subtle but important difference. Hope that helps!
|
This question may provide some answers to the floating point error
[Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken)
With the brackets, the interpreter evaluates the addition in smaller pieces, which changes how the rounding error accumulates. When split into groups of three, each parenthesized `1/3 + 1/3 + 1/3` happens to round to exactly `1.0`, and `1.0 + 1.0` is exactly `2.0`.
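A quick way to see the grouping effect directly (the printed values match the results quoted in the question):

```python
a = 1 / 3  # stored as the closest double, slightly below one third

print(a + a + a)                  # 1.0  (the rounding lands exactly on 1)
print(a + a + a + a + a + a)      # 1.9999999999999998
print((a + a + a) + (a + a + a))  # 2.0  (each group rounds to 1.0 first)
```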
| 6,108
|
50,120,062
|
I have a 1D `numpy` array. The difference between two succeeding values in this array is either one or larger than one. I want to cut the array into parts for every occurrence that the difference is larger than one. Hence:
```
arr = numpy.array([77, 78, 79, 80, 90, 91, 92, 100, 101, 102, 103, 104])
```
should become
```
[array([77, 78, 79, 80]), array([90, 91, 92]), array([100, 101, 102, 103, 104])]
```
I have the following code that does the trick, but I have the feeling I am being too complicated here. There has to be a better/more pythonic way. Anyone with a more elegant approach?
```
import numpy
def split(arr, cut_idxs):
empty_arr = []
for idx in range(-1, cut_idxs.shape[0]):
if idx == -1:
l, r = 0, cut_idxs[0]
elif (idx != -1) and (idx != cut_idxs.shape[0] - 1):
l, r = cut_idxs[idx] + 1, cut_idxs[idx + 1]
elif idx == cut_idxs.shape[0] - 1:
l, r = cut_idxs[-1] + 1, arr.shape[0]
empty_arr.append(arr[l:r + 1])
return empty_arr
arr = numpy.array([77, 78, 79, 80, 90, 91, 92, 100, 101, 102, 103, 104])
cuts = numpy.where(numpy.ediff1d(arr) > 2)[0]
print split(arr, cuts)
```
|
2018/05/01
|
[
"https://Stackoverflow.com/questions/50120062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2776885/"
] |
One Pythonic way would be -
```
np.split(arr, np.flatnonzero(np.diff(arr)>1)+1)
```
Sample run -
```
In [10]: arr
Out[10]: array([ 77, 78, 79, 80, 90, 91, 92, 100, 101, 102, 103, 104])
In [11]: np.split(arr, np.flatnonzero(np.diff(arr)>1)+1)
Out[11]:
[array([77, 78, 79, 80]),
array([90, 91, 92]),
array([100, 101, 102, 103, 104])]
```
Another with `slicing` -
```
In [16]: cut_idx = np.r_[0,np.flatnonzero(np.diff(arr)>1)+1,len(arr)]
# Or np.flatnonzero(np.r_[True, np.diff(arr)>1, True])
In [17]: [arr[i:j] for i,j in zip(cut_idx[:-1],cut_idx[1:])]
Out[17]:
[array([77, 78, 79, 80]),
array([90, 91, 92]),
array([100, 101, 102, 103, 104])]
```
|
Another way with slicing, getting the appropriate indices using `np.diff`:
```
import numpy as np
def split(arr):
idx = np.pad(np.where(np.diff(arr) > 1)[0]+1, (1,1),
'constant', constant_values = (0, len(arr)))
return [arr[idx[i]: idx[i+1]] for i in range(len(idx)-1)]
```
Result:
```
arr = np.array([77, 78, 79, 80, 90, 91, 92, 100, 101, 102, 103, 104])
>>> split(arr)
[array([77, 78, 79, 80]), array([90, 91, 92]), array([100, 101, 102, 103, 104])]
```
In your case, your slicing "map" `idx` ends up being: `array([ 0, 4, 7, 12])`, which is where the `diff` is greater than 1 (indices `4` and `7`), padded by a zero on the left, and the length of your array (`12`) on the right using [`np.pad`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.pad.html)
But `np.split`, as suggested by @Divakar seems to be the way to go
| 6,111
|
63,709,660
|
I'm trying to connect to an SFTP server using Python and Paramiko, but I'm getting this error (the same error occurs when I use pysftp):
```none
starting thread (client mode): 0x17ccde50L
Local version/idstring: SSH-2.0-paramiko_2.7.2
Remote version/idstring: SSH-2.0-OpenSSH_7.2
Connected (version 2.0, client OpenSSH_7.2)
kex algos:[u'curve25519-sha256@libssh.org', u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group14-sha1'] server key:[u'ssh-rsa', u'rsa-sha2-512', u'rsa-sha2-256', u'ssh-dss', u'ecdsa-sha2-nistp256', u'ssh-ed25519'] client encrypt:[u'chacha20-poly1305@openssh.com', u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com'] server encrypt:[u'chacha20-poly1305@openssh.com', u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com'] client mac:[u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-sha1'] server mac:[u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-sha1'] client compress:[u'none', u'zlib@openssh.com'] server compress:[u'none', u'zlib@openssh.com'] client lang:[u''] server lang:[u''] kex follows?False
Kex agreed: curve25519-sha256@libssh.org
HostKey agreed: ssh-ed25519
Cipher agreed: aes128-ctr
MAC agreed: hmac-sha2-256
Compression agreed: none
kex engine KexCurve25519 specified hash_algo <built-in function openssl_sha256>
Unknown exception: from_buffer() cannot return the address of the raw string within a str or unicode or bytearray object
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/paramiko/transport.py", line 2075, in run
self.kex_engine.parse_next(ptype, m)
File "/usr/lib/python2.7/site-packages/paramiko/kex_curve25519.py", line 64, in parse_next
return self._parse_kexecdh_reply(m)
File "/usr/lib/python2.7/site-packages/paramiko/kex_curve25519.py", line 129, in _parse_kexecdh_reply
self.transport._activate_outbound()
File "/usr/lib/python2.7/site-packages/paramiko/transport.py", line 2553, in _activate_outbound
self.local_cipher, key_out, IV_out, self._ENCRYPT
File "/usr/lib/python2.7/site-packages/paramiko/transport.py", line 1934, in _get_cipher
return cipher.encryptor()
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/ciphers/base.py", line 126, in encryptor
self.algorithm, self.mode
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 487, in create_symmetric_encryption_ctx
return _CipherContext(self, cipher, mode, _CipherContext._ENCRYPT)
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/ciphers.py", line 69, in __init__
iv_nonce = self._backend._ffi.from_buffer(mode.nonce)
TypeError: from_buffer() cannot return the address of the raw string within a str or unicode or bytearray object
```
I was able to successfully connect to the SFTP server using:
```none
sftp -oPort=22 xxxxx@10.132.x.x:/home
```
So I know the server exists and is accessible.
My code in Python is simply this:
```
paramiko.util.log_to_file("filename.log")
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(
paramiko.AutoAddPolicy())
ssh.connect(ftp_host, username=ftp_username, password=ftp_password, timeout=None)
```
And a few dependencies..
```none
asn1crypto @ file:///home/folder/app/utils/asn1crypto-1.2.0-py2.py3-none-any.whl
bcrypt @ file:///home/folder/app/utils/bcrypt-3.1.6-cp27-cp27mu-manylinux1_x86_64.whl
cffi==1.5.2
cryptography @ file:///home/folder/app/utils/cryptography-3.1-cp27-cp27mu-manylinux2010_x86_64.whl
netmiko==2.3.2
paramiko @ file:///home/folder/app/utils/vendor/paramiko-2.7.2-py2.py3-none-any.whl
ply==3.4
pyasn1==0.1.9
pycparser==2.19
PyNaCl @ file:///home/folder/app/utils/PyNaCl-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl
pyOpenSSL==16.0.0
six==1.9.0
```
My question is, what does this error mean exactly and what is the best way to resolve it? I need to copy images to an SFTP, but can't quite connect.
By the way, the server I'm running Python on is stuck on 2.7 and I'm not allowed to upgrade it. Also, it doesn't have access to the internet, so I can't use things like apt-get. I install things by dragging and dropping zipped folders or .whl files. It's just a matter of finding the correct combination of dependencies.
|
2020/09/02
|
[
"https://Stackoverflow.com/questions/63709660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5754267/"
] |
This topic suggests that you may have obsolete dependencies:
<https://github.com/paramiko/paramiko/issues/1027>
[The solution by @bieli](https://github.com/paramiko/paramiko/issues/1027#issuecomment-374145838) seems to help many of those who face the problem:
```
sudo pip uninstall cryptography -y && sudo apt-get purge python3-cryptography && sudo apt-get autoremove && sudo pip3 install --upgrade cryptography
```
---
*If you cannot upgrade your dependencies, you can try using a different KEX. But in general, this may be dead end.*
---
*Obligatory warning: Do not use `AutoAddPolicy` – You are losing a protection against [MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) by doing so. For a correct solution, see [Paramiko "Unknown Server"](https://stackoverflow.com/q/10670217/850848#43093883)*.
|
In the sample below, you may see absolute paths to a few of the dependencies because I'm running the Python script on a remote server without internet. Therefore, .whl files had to be copied from my PC to the remote server. Of these dependencies, "**cffi**" was upgraded to version 1.11.2 and eventually resolved the issue.
If you find yourself having a similar issue, try to find the best mixture of dependencies, like so:
```
asn1crypto @ file:///home/badge_bridge/utils/asn1crypto-1.2.0-py2.py3-none-any.whl
bcrypt @ file:///home/badge_bridge/utils/bcrypt-3.1.6-cp27-cp27mu-manylinux1_x86_64.whl
cffi @ file:///home/badge_bridge/utils/vendor/cffi-1.11.2-cp27-cp27mu-manylinux1_x86_64.whl
chrome-gnome-shell==0.0.0
cryptography @ file:///home/badge_bridge/utils/vendor/cryptography-2.1-cp27-cp27mu-manylinux1_x86_64.whl
cupshelpers==1.0
ecdsa==0.6
enum34 @ file:///home/badge_bridge/utils/vendor/enum34-1.1.6-py2-none-any.whl
idna==2.0
ipaddress==1.0.14
isc==2.0
netmiko==2.3.2
paramiko @ file:///home/badge_bridge/utils/vendor/paramiko-2.3.3-py2.py3-none-any.whl
ply==3.4
pyasn1==0.1.9
pycparser==2.19
pycryptodome @ file:///home/badge_bridge/utils/vendor/pycryptodome-3.6.5-cp27-cp27mu-manylinux1_x86_64.whl
pycups==1.9.66
pycurl==7.19.0
pygobject==3.20.1
PyNaCl @ file:///home/badge_bridge/utils/PyNaCl-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl
pyOpenSSL==16.0.0
pysftp==0.2.9
pysmbc==1.0.13
requests==2.11.1
six==1.13.0
```
| 6,112
|
56,904,802
|
I want functions in a class to store their returned values in some data structure. For this purpose I want to use a decorator:
```py
results = []
instances = []
class A:
def __init__(self, data):
self.data = data
@decorator
def f1(self, a, b):
return self.data + a + b
@decorator
def f2(self, a):
return self.data + a + 1
x = A(1)
x.f1(1, 2)
x.f2(3)
print(results)
```
The question is, how to implement this decorator.
The main idea is the following:
```py
class Wrapper:
def __init__(self, func):
self.func = func
def __call__(self, *args):
res = self.func(*args)
results.append(res)
instances.append(args[0])
def decorator(func):
return Wrapper(func)
```
But then I am getting an error message:
```
TypeError: f1() missing 1 required positional argument: 'b'
```
This question is similar to what other people asked ([How to decorate a method inside a class?](https://stackoverflow.com/questions/1367514/how-to-decorate-a-method-inside-a-class), [Python functools.wraps equivalent for classes](https://stackoverflow.com/questions/6394511/python-functools-wraps-equivalent-for-classes)), but it is not clear where to put `@functools.wraps` or to call `@functools.update_wrapped()` in my case.
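For reference, one way to resolve the binding error described above is to make `Wrapper` a descriptor by implementing `__get__`, so attribute access (`x.f1`) captures the instance. This is a sketch, not necessarily the only approach; `functools.update_wrapper` is applied to the wrapper instance here:

```python
import functools

results = []
instances = []

class Wrapper:
    def __init__(self, func, instance=None):
        functools.update_wrapper(self, func)
        self.func = func
        self.instance = instance

    def __get__(self, obj, objtype=None):
        # Attribute access (x.f1) goes through here, so we can capture
        # the instance the method is being bound to.
        return Wrapper(self.func, obj)

    def __call__(self, *args):
        res = self.func(self.instance, *args)
        results.append(res)
        instances.append(self.instance)
        return res

def decorator(func):
    return Wrapper(func)

class A:
    def __init__(self, data):
        self.data = data

    @decorator
    def f1(self, a, b):
        return self.data + a + b

x = A(1)
x.f1(1, 2)
print(results)  # [4]
```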
|
2019/07/05
|
[
"https://Stackoverflow.com/questions/56904802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2015614/"
] |
You will need to use `CodeIdTokenToken` response type, according to the [documentation](https://openid.net/specs/openid-connect-core-1_0.html#HybridAuthRequest)
`options.ResponseType = OpenIdConnectResponseType.CodeIdTokenToken;`
|
I managed to fix this. To anyone who encounters this issue: set the response type to **Code** to get both the id\_token and the access\_token. This will instruct OpenID Connect to use the authorization code flow.
```
options.ResponseType = OpenIdConnectResponseType.Code
```
| 6,113
|
61,039,847
|
I am making an exam app with Kivy (Python) and I have a problem with checking the correct answer. I have a dictionary of translations from Latin words to Slovenian words, for example (keys are Latin words, values are Slovenian words):
```
Dic = {"Aegrotus": "bolnik", "Aether": "eter"}
```
So the problem is when 2 or 3 Latin words mean the same as 1 Slovenian word, and vice versa. Example:
```
Dic = {("A", "ab"): "od", "Acutus": ("Akuten", "Akutna", "Akutno"), "Aromaticus": ("Dišeč", "Odišavljen")}
```
For example:
[Example\_pic](https://i.stack.imgur.com/A8WTz.png)
On the image you see the app. I have to translate "Agito", which means "stresam". So my question is: when multiple keys map to one value (or one key to multiple values), how do I check the answer?
I hope you understand my question :).
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61039847",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11590506/"
] |
Firstly, you have to be able to get the text output from the app shown in the picture; then you use your dictionary to check it.
The current design of the dictionary makes it difficult to check. You should design it so that each key is a single string and each value is a list. For example:
```
Dic = {"A": ["od"], "ab": ["od"], "Acutus": ["Akuten", "Akutna", "Akutno"], "Aromaticus": ["Dišeč", "Odišavljen"]}
```
So now, after you get the text from your app, let's say it is `text = 'ab:id'`. You will split it into a key and a value, then check them against your dict:
```
def check(text):
    text = text.split(':')
    key = text[0]
    value = text[1]
    if value in Dic[key]:
        return True
    return False
```
Let's try it out
```
>>> check('ab:id')
False
>>> check('ab:od')
True
>>> check('Acutus:Akutna')
True
>>> check('Acutus:Akutno')
True
```
|
Do you only need to translate from latin -> slovenian and not the other way around? If so, just make every key a single word. It's OK for multiple keys to have the same value:
```py
Dic = {
"Aegrotus": "bolnik", "Aether": "eter", "A": "od", "ab": "od",
"Acutus": ("Akuten", "Akutna", "Akutno"), "Aromaticus": ("Dišeč", "Odišavljen"),
}
```
Each lookup is then of the form `Dic[latin] -> slovenian`, where `latin` is a single word and `slovenian` is one or more words.
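If the source data already uses tuple keys as in the question, it can be flattened into this single-word-key form with a short loop (a sketch):

```python
raw = {
    ("A", "ab"): "od",
    "Acutus": ("Akuten", "Akutna", "Akutno"),
    "Aromaticus": ("Dišeč", "Odišavljen"),
}

Dic = {}
for key, value in raw.items():
    # A key may be a single word or a tuple of synonyms.
    for k in (key if isinstance(key, tuple) else (key,)):
        Dic[k] = value

print(Dic["ab"])  # od
```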
| 6,114
|
51,187,904
|
Trying to read a `Parquet` file in PySpark but getting `Py4JJavaError`. I even tried reading it from the `spark-shell` and was able to do so. I cannot understand what I am doing wrong here in terms of the Python APIs, given that it works in Scala but not in PySpark:
```
spark = SparkSession.builder.master("local").appName("test-read").getOrCreate()
sdf = spark.read.parquet("game_logs.parquet")
```
Stack Trace:
```
Py4JJavaError Traceback (most recent call last)
<timed exec> in <module>()
~/pyenv/pyenv/lib/python3.6/site-packages/pyspark/sql/readwriter.py in parquet(self, *paths)
301 [('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]
302 """
--> 303 return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
304
305 @ignore_unicode_prefix
~/pyenv/pyenv/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~/pyenv/pyenv/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~/pyenv/pyenv/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o26.parquet.
: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2299)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2073)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.mergeSchemasInParallel(ParquetFileFormat.scala:611)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.inferSchema(ParquetFileFormat.scala:241)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
at scala.Option.orElse(Option.scala:289)
at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:844
```
Enviornment Info:
```
Spark version 2.3.1
Using Scala version 2.11.8,
Java HotSpot(TM) 64-Bit Server VM, 1.8.0_172
Python 3.6.5
PySpark 2.3.1
```
|
2018/07/05
|
[
"https://Stackoverflow.com/questions/51187904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5129047/"
] |
I figured out what was going wrong exactly. The `spark-shell` was using `Java 1.8`, but `PySpark` was using `Java 10.1`. There is some issue with Java 1.9/10 and Spark. Changed the default Java version to 1.8.
|
Spark runs on Java 8/11.
For switching between Java versions, you can add this to your .bashrc/.zshrc file:
```sh
alias j='f(){ export JAVA_HOME=$(/usr/libexec/java_home -v $1) };f'
```
Then in your terminal:
```sh
source .zshrc
```
```sh
j 1.8
```
```sh
java -version
```
This will change the version system-wide. If you just want a different version for one app, you can prefix the command with the `JAVA_HOME` environment variable:
```sh
JAVA_HOME=$(/usr/libexec/java_home -v 1.8) jupyter notebook
```
Or, from inside a Jupyter notebook:
```py
%env JAVA_HOME {path}
```
| 6,117
|
5,838,307
|
I'd like to create a drop-in replacement for Python's `list` that will let me know when an item is added or removed. A subclass of list, or something that implements the list interface, will do equally well. I'd prefer a solution where I don't have to reimplement all of list's functionality, though. Is there an easy + pythonic way to do it?
Pseudocode:
```
class MyList(list):
    # ...
    def on_add(self, items):
        print "Added:"
        for i in items:
            print i
    # and the same for on_remove

l = MyList()
l += [1,2,3]
# etc.
```
|
2011/04/29
|
[
"https://Stackoverflow.com/questions/5838307",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/143091/"
] |
To see what functions are defined in list, you can do
```
>>> dir(list)
```
Then you will see what you can override, try for instance this:
```
class MyList(list):
    def __iadd__(self, *arg, **kwargs):
        print "Adding"
        return list.__iadd__(self, *arg, **kwargs)
```
You probably need to do a few more of them to get all of it.
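A fuller sketch (Python 3 syntax) covering the common mutating methods; `insert()`, `pop()`, `__setitem__()` and `__delitem__()` would need the same treatment for complete coverage:

```python
class MyList(list):
    def on_add(self, items):
        print("Added:", list(items))

    def on_remove(self, items):
        print("Removed:", list(items))

    def append(self, item):
        super().append(item)
        self.on_add([item])

    def extend(self, items):
        items = list(items)
        super().extend(items)
        self.on_add(items)

    def __iadd__(self, items):
        # l += [...] routes through extend(), which fires on_add.
        self.extend(items)
        return self

    def remove(self, item):
        super().remove(item)
        self.on_remove([item])

l = MyList()
l += [1, 2, 3]
l.remove(2)
print(l)  # [1, 3]
```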
|
The [documentation for userlist](http://docs.python.org/release/2.5.2/lib/module-UserList.html) tells you to subclass `list` if you don't require your code to work with Python <2.2. You probably don't get around overriding at least the methods which allow to add/remove elements. Beware, this includes the slicing operators.
| 6,120
|
18,619,205
|
To start I will mention that I am new to the Python Language and come from a networking background. If you are wondering why I am using Python 2.5 it is due to some device constraints.
What I am trying to do is count the number of lines that are in a string of data as seen below. (not in a file)
```
data = ['This','is','some','test','data','\nThis','is','some','test','data','\nThis','is','some','test','data','\nThis','is','some','test','data','\n']
```
Unfortunately I can't seem to get the correct syntax to count **\n**. So far this is what I have come up with.
```
num = data.count('\n')
print num
```
The above code is clearly wrong for the intended output, since it returns 1 for the test data; I would like it to return 4 if possible. It only finds one `'\n'`, the last element in the example, because only that element is exactly equal to `'\n'`.
If I use the following code instead to find all of them it doesn't run due to syntax.
```
num = data.count (\n)
print num
```
I based this code from the following link, the example is the third one down.
<http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/strings3.html>
Is there any way of accomplishing this, or should I be looking at a different solution?
Any help is appreciated
Thank-you
|
2013/09/04
|
[
"https://Stackoverflow.com/questions/18619205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2240963/"
] |
`list.count` is used to get the count of an item in the list, but here you need to do a substring search in each item to get the count of 4.
```
>>> data = ['This','is','some','test','data','\nThis','is','some','test','data','\nThis','is','some','test','data','\nThis','is','some','test','data','\n']
```
Total number of items that contain `'\n'`:
```
>>> sum('\n' in item for item in data)
4
```
Sum of count of all `'\n'`s in `data`:
```
>>> data = ['\n\nfoo','\n\n\nbar']
>>> sum(item.count('\n') for item in data)
5
```
|
Your first code does not work because only one element of your list is exactly `'\n'`. When you compare `'\n'` to something like `'\nThis'`, it will say the strings are not equal and not include that element in the count.
What you can do is join the list into a string like so:
```
x = ''.join(data)
```
and then try using the `count` method of the string class. This will let you find substrings that are equivalent to '\n' in the total string.
```
y = x.count('\n')
print y
```
| 6,121
|
54,997,210
|
I'm currently writing a program in Python where smileys like `:)`, `:(`, `:-)`, `:-(` should be kept, but any special characters and punctuation that follow them should be removed, in this pattern:
ex: `Hi, this is good :)#` should be replaced with `Hi, this is good :)`.
I have created a regex pattern for `re.sub`, but I couldn't enclose the smiley `:-)` in my `re.compile`; it treats the `-` as a character range.
`re.sub(r"[^a-zA-Z0-9:):D)]+", " ", words)` is working fine.
I need to add the `:-)` smiley to the regex.
|
2019/03/05
|
[
"https://Stackoverflow.com/questions/54997210",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4045224/"
] |
One approach is to use the following pattern:
```
(:\)|:\(|:-\)|:-\()[^A-Za-z0-9]+
```
This matches *and* captures a smiley face, then matches any number of non-alphanumeric characters immediately afterwards. The replacement is just the captured smiley face, thereby removing the trailing characters. Note the raw string `r"\1"` in the replacement; a plain `"\1"` would insert the control character `\x01` instead of the captured group.
```
import re

words = "Hi, this is good :)#"
output = re.sub(r"(:\)|:\(|:-\)|:-\()[^A-Za-z0-9]+", r"\1", words)
print(output)  # Hi, this is good :)
```
|
You can escape special characters with `\`. Inside a character class, escape the hyphen (or put it last) so it is not treated as a range. Try:
```
re.sub("[^a-zA-Z0-9:):D:\-))]+", " " , words)
```
| 6,122
|
37,451,031
|
I have some Python code to unzip a file and then remove the original file, but my code raises an exception: it cannot remove the file, because it is still in use.
I think the problem is that when the removal code runs, the unzip action has not finished, so the exception is thrown. So, how can I check the state of the unzip action before removing the file?
```
file = zipfile.ZipFile(lfilename)
for filename in file.namelist():
    file.extract(filename, dir)
remove(lfilename)
```
|
2016/05/26
|
[
"https://Stackoverflow.com/questions/37451031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2213922/"
] |
The documentation for ZipFile says:
>
> ZipFile is also a context manager and therefore supports the [with](https://docs.python.org/2/reference/compound_stmts.html#with) statement.
>
>
>
So, I'd recommend doing the following:
```
with zipfile.ZipFile(lfilename) as file:
    for filename in file.namelist():
        file.extract(filename, dir)
remove(lfilename)
```
One advantage of using a with statement is that the file is closed automatically. It is also beautiful (short, concise, effective).
See also [PEP 343](https://www.python.org/dev/peps/pep-0343/).
|
Try closing the file before removing it.
```
file = zipfile.ZipFile(lfilename)
for filename in file.namelist():
    file.extract(filename, dir)
file.close()
remove(lfilename)
```
| 6,124
|
33,687,594
|
I have been trying to debug this issue, but can't seem to figure it out.
When debugging I can see that all the variables are where they should be, but I can't seem to get them out.
When running I get the error message `'dict' object is not callable`
This is the full error message from Django
```
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/?form_base_currency=7&form_counter_currency=14&form_base_amount=127
Django Version: 1.8.6
Python Version: 3.4.3
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'client']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback:
File "/home/johan/sdp/currency-converter/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/johan/sdp/currency-converter/currency_converter/client/views.py" in index
22. form_base_currency = form.cleaned_data('form_base_currency').currency_code
Exception Type: TypeError at /
Exception Value: 'dict' object is not callable
```
For clarity I have added a screenshot from the debugger variables.
[](https://i.stack.imgur.com/YxCha.png)
This is the code I have been using:
```
if request.method == 'GET':
    form = CurrencyConverterForm(request.GET)
    if form.is_valid():
        form_base_currency = form.cleaned_data('form_base_currency').currency_code
        form_counter_currency = form.cleaned_data('form_counter_currency')
        form_base_amount = form.data.cleaned_data('form_base_amount')
```
To get form\_base\_currency working I tried these different methods:
```
form_base_currency = form.cleaned_data('form_base_currency').currency_code
form_base_currency = form.cleaned_data.form_base_currency.currency_code
form_base_currency = form.cleaned_data('form_base_currency.currency_code')
```
None of them work. Could someone tell me how I can solve this?
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33687594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5039579/"
] |
Dictionaries need square brackets
```
form_counter_currency = form.cleaned_data['form_counter_currency']
```
although you may want to use `get` so you can provide a default
```
form_counter_currency = form.cleaned_data.get('form_counter_currency', None)
```
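A minimal standalone illustration of the error and the fix, using a plain dict in place of `form.cleaned_data`:

```python
cleaned_data = {"form_base_currency": "EUR"}

try:
    cleaned_data("form_base_currency")  # parentheses try to *call* the dict
except TypeError as e:
    print(e)  # 'dict' object is not callable

print(cleaned_data["form_base_currency"])       # EUR
print(cleaned_data.get("form_base_amount", 0))  # 0 (default for missing key)
```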
|
First, import this:
```
from rest_framework.response import Response
```
Then write this in your **views.py** file:
```
class userlist(APIView):
    def get(self, request):
        user1 = webdata.objects.all()
        serializer = webdataserializers(user1, many=True)
        return Response(serializer.data)

    def post(self):
        pass
```
| 6,127
|
7,748,563
|
*Disclaimer: complete rewrite for clarity as of 10/14/2011*
**Given** the `number` primitive in JavaScript is an [IEEE 754](http://en.wikipedia.org/wiki/IEEE_754-2008) 64-bit floating point (*known in other languages as a double*), and [using floats to model currencies is a **bad idea**](https://stackoverflow.com/questions/3730019/why-not-use-double-or-float-to-represent-currency), **is a Money prototype** (JavaScript) **or a [Coffeescript Class](http://rzrsharp.net/2011/06/21/classes-in-coffeescript.html)** that eases use of pseudo-integer cents and string [currency ISO 4217 code](http://en.wikipedia.org/wiki/ISO_4217) to represent currency **available**?
^ There's still gotta be a better way to say that.
I'm hoping to find something that mirrors the common design pattern of the many other languages out there that do include an integer primitive.
As examples, I'm familiar with the [money gem](http://rubygems.org/gems/money) for ruby, and the [python-money](http://pypi.python.org/pypi/python-money/0.5) package, both of which implement variations of this design pattern.
Ideally looking for something that will play nice with [backbone.js](http://documentcloud.github.com/backbone/) and [node.js](http://nodejs.org/), but all suggestions appreciated.
*Edit 4*: As far as I can tell, as long as an implementation of `roundDownOrUp ? floor : ceiling` is called on the Number after every operation (& in between chained operations) everything would function as if one were dealing with integers.
---
### Old information, retained to document the history of the question.
I read [How can I format numbers as money in JavaScript?](https://stackoverflow.com/questions/149055/how-can-i-format-numbers-as-money-in-javascript)
where I found [accounting.js](http://josscrowcroft.github.com/accounting.js/) and [jQuery Globalize](http://wiki.jqueryui.com/w/page/39118647/Globalize) which both do pretty printing but are not designed to model currencies and perform operations with them.
*Edit 1*: Just found [JSorm Currency](http://jsorm.com/wiki/Currency) in the [npm registry](http://search.npmjs.org/#/jsorm-i18n) which is ISO 4217 aware, but does not appear to include any fixes for float "*gotchas*". Please correct if I have misread.
*Edit 2 folded into rewrite.*
*Edit 3*: It looks like a good option would be to use [node-bigint](https://github.com/substack/node-bigint) as suggested by @RicardoTomasi.
|
2011/10/13
|
[
"https://Stackoverflow.com/questions/7748563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/902839/"
] |
Both [bigdecimal.js](https://github.com/jhs/bigdecimal.js/) and [node-bigint](https://github.com/substack/node-bigint) have arbitrary precision.
I'd go with bigint. bigdecimal is a GWT version of Java's BigDecimal, clocking in at 113kb, so the code is not what one would call *readable*.
**update:** [money.js](http://josscrowcroft.github.com/money.js/) has just been released, but it uses javascript's native Number, and is focused on currency conversion.
|
There is a GREAT `$.money` class that does almost everything you could ever want from money, in the ku4js-kernel library. You can find the documentation [here](http://kodmunki.github.io/ku4js-kernel/#money). Have fun! :{)}
| 6,128
|
67,579,796
|
I am trying to change the JSON format using Python. The received message has some key-value pairs, and certain key names need to be changed before forwarding the message.
For normal key-value pairs, I have used the `dict.pop` method: `data["newkey"] = data.pop("oldkey")`.
But it got complicated with nested key-values. This is just part of a big file that needs to be converted.
How to convert this
```
{
"atrk1": "form_varient",
"atrv1": "red_top",
"atrt1": "string",
"atrk2": "ref",
"atrv2": "XPOWJRICW993LKJD",
"atrt2": "string"
}
```
into this?
```
"attributes": {
"form_varient": {
"value": "red_top",
"type": "string"
},
"ref": {
"value": "XPOWJRICW993LKJD",
"type": "string"
}
}
```
---
|
2021/05/18
|
[
"https://Stackoverflow.com/questions/67579796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15715705/"
] |
If the keys are always going to be in this format, you can do something like this.
```
d = {
    "ev": "contact_form_submitted",
    "et": "form_submit",
    "id": "cl_app_id_001",
    "uid": "cl_app_id_001-uid-001",
    "mid": "cl_app_id_001-uid-001",
    "t": "Vegefoods - Free Bootstrap 4 Template by Colorlib",
    "p": "http://shielded-eyrie-45679.herokuapp.com/contact-us",
    "l": "en-US",
    "sc": "1920 x 1080",
    "atrk1": "form_varient",
    "atrv1": "red_top",
    "atrt1": "string",
    "atrk2": "ref",
    "atrv2": "XPOWJRICW993LKJD",
    "atrt2": "string",
    "uatrk1": "name",
    "uatrv1": "iron man",
    "uatrt1": "string",
    "uatrk2": "email",
    "uatrv2": "ironman@avengers.com",
    "uatrt2": "string",
    "uatrk3": "age",
    "uatrv3": "32",
    "uatrt3": "integer"
}

d["attributes"] = {}
d["traits"] = {}
keys_to_remove = []

for k in d.keys():
    if k.startswith("atrk"):
        value_key = k.replace("atrk", "atrv")
        type_key = k.replace("atrk", "atrt")
        d["attributes"][d[k]] = {"value": d[value_key], "type": d[type_key]}
        keys_to_remove += [value_key, k, type_key]
    if k.startswith("uatrk"):
        value_key = k.replace("uatrk", "uatrv")
        type_key = k.replace("uatrk", "uatrt")
        d["traits"][d[k]] = {"value": d[value_key], "type": d[type_key]}
        keys_to_remove += [value_key, k, type_key]

for k in keys_to_remove:
    if k in d:
        del d[k]
```
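An alternative sketch that groups the numbered `atrk`/`atrv`/`atrt` keys by their numeric suffix with a regex, building the new structure without mutating the original dict:

```python
import re

flat = {
    "atrk1": "form_varient", "atrv1": "red_top", "atrt1": "string",
    "atrk2": "ref", "atrv2": "XPOWJRICW993LKJD", "atrt2": "string",
}

attributes = {}
for key, name in flat.items():
    # Only the atrk<N> keys name an attribute; atrv<N>/atrt<N> are looked up.
    m = re.fullmatch(r"atrk(\d+)", key)
    if m:
        n = m.group(1)
        attributes[name] = {"value": flat["atrv" + n], "type": flat["atrt" + n]}

print({"attributes": attributes})
```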
|
Use the following code; it will convert it successfully.
```
json1 = {
    "atrk1": "form_varient",
    "atrv1": "red_top",
    "atrt1": "string",
    "atrk2": "ref",
    "atrv2": "XPOWJRICW993LKJD",
    "atrt2": "string"
}
json2 = {}
keys = []
values = []
types = []
for i in json1:
    if i[:4] == 'atrk':
        keys.append(json1[i])
        values.append([])
        types.append([])
    elif i[:4] == 'atrv':
        values[int(i[-1:]) - 1].append(json1[i])
    elif i[:4] == 'atrt':
        types[int(i[-1:]) - 1].append(json1[i])
for i in range(len(keys)):
    json2[keys[i]] = {
        'value': values[i][0], 'type': types[i][0]
    }
json3 = {}
json3['attributes'] = json2
print(json3)
```
| 6,130
|
58,740,865
|
I'm trying to scrape this page/iframe with selenium/python but I can't insert any text in this selected form.
[link](https://ibb.co/cF8ZRZP)
```py
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
url = 'http://web.transparencia.pe.gov.br/despesas/despesa-geral/'
driver.get(url)
sleep(10)
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
el = driver.find_element_by_xpath("//*[@id='html_selectug']")
el.click()
```
When I try to get the listbox:
```
el_cl = el.find_element_by_class_name('chzn-select')
el_cl.click()
```
An exception is raised
```
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select class="chzn-select"> could not be scrolled into view
```
any tips?
|
2019/11/07
|
[
"https://Stackoverflow.com/questions/58740865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7651009/"
] |
In this kind of situation, the right tool is [std::integer\_sequence](https://en.cppreference.com/w/cpp/utility/integer_sequence):
```
#include <iostream>
#include <utility>

template <size_t N>
void make()
{
    std::cout << N << std::endl;
}

template <size_t... I>
void do_make_helper(std::index_sequence<I...>)
{
    (make<I+1>(), ...);
}

template <std::size_t N>
void do_make()
{
    do_make_helper(std::make_index_sequence<N>());
}

int main()
{
    do_make<10>();
}
```
prints
```
1
2
3
4
5
6
7
8
9
10
```
|
As a starting point, with a handcrafted index list:
```
template <size_t N> int make();
template<> int make<1>() { std::cout << "First" << std::endl; return 100; }
template<> int make<2>() { std::cout << "Second" << std::endl; return 200; }
template<> int make<3>() { std::cout << "Third" << std::endl; return 100; }

struct ExecuteInOrder { ExecuteInOrder( ... ) {} };

template <typename T>
void CallTest(T t)
{
    std::cout << "Your test code goes here...: " << t << std::endl;
}

template <size_t... N>
void Do()
{
    ExecuteInOrder {(CallTest(make<N>()), 1)...};
}

int main()
{
    Do<1,2,3>();
}
```
Or you can simply make it recursive, using first and last indices like this:
```
template <size_t FIRST, size_t LAST>
void ExecuteAndTest()
{
    auto foo = make<FIRST>();
    std::cout << "Here we go with the test" << foo << std::endl;
    // go for next step
    if constexpr ( LAST != FIRST ) { ExecuteAndTest<FIRST+1, LAST>(); }
}

int main()
{
    // first and last index of integer sequence
    ExecuteAndTest<1,3>();
}
```
And finally, always running from 1 to N:
```
template <size_t FIRST, size_t LAST>
void ExecuteAndTest_Impl()
{
    auto foo = make<FIRST>();
    std::cout << "Here we go with the test" << foo << std::endl;
    // go for next step
    if constexpr ( LAST != FIRST ) { ExecuteAndTest_Impl<FIRST+1, LAST>(); }
}

template <size_t LAST>
void ExecuteAndTest()
{
    ExecuteAndTest_Impl<1, LAST>();
}

int main()
{
    // or always start with 1 to n inclusive
    ExecuteAndTest<3>();
}
```
| 6,131
|
42,369,259
|
**Preface**
I was wondering how to conceptualize data classes in a *pythonic* way.
Specifically I’m talking about DTO ([Data Transfer Object](https://martinfowler.com/eaaCatalog/dataTransferObject.html).)
I found a good answer in @jeff-oneill question “[Using Python class as a data container](https://stackoverflow.com/questions/3357581/using-python-class-as-a-data-container/)” where @joe-kington had a good point to use built-in `namedtuple`.
**Question**
In section 8.3.4 of python 2.7 documentation there is good [example](https://docs.python.org/2.7/library/collections.html#collections.somenamedtuple._fields) on how to combine several named tuples.
My question is how to achieve the reverse?
**Example**
Considering the example from documentation:
```
>>> p._fields # view the field names
('x', 'y')
>>> Color = namedtuple('Color', 'red green blue')
>>> Pixel = namedtuple('Pixel', Point._fields + Color._fields)
>>> Pixel(11, 22, 128, 255, 0)
Pixel(x=11, y=22, red=128, green=255, blue=0)
```
How can I deduce a “Color” or a “Point” instance from a “Pixel” instance?
Preferably in *pythonic* spirit.
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42369259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7598113/"
] |
Here it is. By the way, if you need this operation often, you may create a function for `color_ins` creation, based on `pixel_ins`. Or even for any subnamedtuple!
```
from collections import namedtuple
Point = namedtuple('Point', 'x y')
Color = namedtuple('Color', 'red green blue')
Pixel = namedtuple('Pixel', Point._fields + Color._fields)
pixel_ins = Pixel(x=11, y=22, red=128, green=255, blue=0)
color_ins = Color._make(getattr(pixel_ins, field) for field in Color._fields)
print color_ins
```
Output: `Color(red=128, green=255, blue=0)`
Function for extracting arbitrary subnamedtuple (without error handling):
```
def extract_sub_namedtuple(parent_ins, child_cls):
    return child_cls._make(getattr(parent_ins, field) for field in child_cls._fields)

color_ins = extract_sub_namedtuple(pixel_ins, Color)
point_ins = extract_sub_namedtuple(pixel_ins, Point)
```
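Put together as a self-contained sketch (works in both Python 2 and 3):

```python
from collections import namedtuple

Point = namedtuple('Point', 'x y')
Color = namedtuple('Color', 'red green blue')
Pixel = namedtuple('Pixel', Point._fields + Color._fields)

def extract_sub_namedtuple(parent_ins, child_cls):
    # pull the child's fields off the parent by name
    return child_cls._make(getattr(parent_ins, f) for f in child_cls._fields)

pixel_ins = Pixel(x=11, y=22, red=128, green=255, blue=0)
print(extract_sub_namedtuple(pixel_ins, Point))  # Point(x=11, y=22)
print(extract_sub_namedtuple(pixel_ins, Color))  # Color(red=128, green=255, blue=0)
```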
|
`Point._fields + Color._fields` is simply a tuple. So given this:
```
from collections import namedtuple
Point = namedtuple('Point', ['x', 'y'])
Color = namedtuple('Color', 'red green blue')
Pixel = namedtuple('Pixel', Point._fields + Color._fields)
f = Point._fields + Color._fields
```
`type(f)` is just `tuple`. Therefore, there is no way to know where it came from.
I recommend that you look into [attrs](https://attrs.readthedocs.io/en/stable/) for easily doing property objects. This will allow you to do proper inheritance and avoid the overheads of defining all the nice methods to access fields.
So you can do
```
import attr
@attr.s
class Point:
    x = attr.ib()
    y = attr.ib()

@attr.s
class Color:
    red = attr.ib()
    green = attr.ib()
    blue = attr.ib()

class Pixel(Point, Color):
    pass
```
Now, `Pixel.__bases__` will give you `(__main__.Point, __main__.Color)`.
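The `__bases__` introspection itself works the same with plain classes; attrs only removes the attribute boilerplate. A minimal sketch:

```python
class Point:
    pass

class Color:
    pass

class Pixel(Point, Color):
    pass

# the parent classes are recoverable from the subclass itself
print(Pixel.__bases__)  # (<class '__main__.Point'>, <class '__main__.Color'>)
```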
| 6,134
|
44,535,068
|
I want to cover a image with a transparent solid color overlay in the shape of a black-white mask
Currently I'm using the following java code to implement this.
```
redImg = new Mat(image.size(), image.type(), new Scalar(255, 0, 0));
redImg.copyTo(image, mask);
```
I'm not familiar with the python api.
So I want to know if there any alternative api in python.
Is there any better implementation?
image:
[](https://i.stack.imgur.com/72iEF.jpg)
mask:
[](https://i.stack.imgur.com/dd1l4.jpg)
what i want:
[](https://i.stack.imgur.com/YP2d1.jpg)
|
2017/06/14
|
[
"https://Stackoverflow.com/questions/44535068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2640554/"
] |
After working with Python, OpenCV and NumPy for a while, I found out it's quite simple to implement this with one line (note: `mask` here must be a boolean array, e.g. `mask = mask > 0`):
```
image[mask] = (0, 0, 255)
```
-------------- the original answer --------------
I solved this by the following code:
```
redImg = np.zeros(image.shape, image.dtype)
redImg[:,:] = (0, 0, 255)
redMask = cv2.bitwise_and(redImg, redImg, mask=mask)
cv2.addWeighted(redMask, 1, image, 1, 0, image)
```
|
The idea is to convert the mask to a binary format where pixels are either `0` (black) or `255` (white). White pixels represent sections to keep, while black sections are thrown away. Then set all pixels of the image where the mask is white to your desired `BGR` color.
**Input image and mask**


**Result**

Code
```
import cv2
image = cv2.imread('1.jpg')
mask = cv2.imread('mask.jpg', 0)
mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
image[mask==255] = (36,255,12)
cv2.imshow('image', image)
cv2.imshow('mask', mask)
cv2.waitKey()
```
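The coloring step in both answers is plain NumPy indexing; here is a tiny cv2-free sketch (the 4x4 shapes are made up for illustration):

```python
import numpy as np

image = np.zeros((4, 4, 3), dtype=np.uint8)   # tiny black BGR "image"
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255                          # white square = region to recolor

image[mask == 255] = (0, 0, 255)              # paint masked pixels red (BGR)
print(image[1, 1])  # [  0   0 255]
```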
| 6,142
|
57,251,368
|
Kindly need some help please :)
I have two date-times and I am using `datetime.combine` to concatenate them:
one is a `datetime.date` (pretty much today's date), the other is a `datetime.time` (a manually defined time). I keep getting stuck with the error below:
```
Traceback (most recent call last):
File "sunsetTimer.py", line 167, in <module>
if currentTime >= lightOffDT:
TypeError: can't compare datetime.datetime to tuple
```
Rather new at Python, so probably a really stupid question.
I have tried pulling from the tuple with `lightOffDT[0]`, but I get an error that an integer is required.
When I print it, it prints as a normal datetime, e.g. `2019-07-29 23:30:00`.
```
todayDate = datetime.date.today()
off1 = datetime.time(23,30,0)
lightOffDT = datetime.datetime.combine(todayDate,off1)
if currentTime >= lightOffDT:  # currentTime is today (datetime)
```
I would like to compare the combined date-time with the current date and time.
currentTime is calculated as:
```
import tzlocal
local_timezone = tzlocal.get_localzone()
currentTime = datetime.datetime.now(local_timezone)
```
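A side note (an assumption based on the snippets above, separate from the tuple error in the traceback): `currentTime` is timezone-aware while the combined `lightOffDT` is naive, and comparing aware and naive datetimes raises a `TypeError` in both Python 2 and 3. Attaching the timezone to the combined value avoids this. A stdlib-only sketch, where the fixed-offset zone is a stand-in for tzlocal's result:

```python
import datetime

# fixed-offset zone as a stand-in for tzlocal.get_localzone()
local_tz = datetime.timezone(datetime.timedelta(hours=12))

currentTime = datetime.datetime.now(local_tz)          # timezone-aware
lightOffDT = datetime.datetime.combine(
    datetime.date.today(), datetime.time(23, 30, 0)
).replace(tzinfo=local_tz)                             # make it aware too

print(isinstance(currentTime >= lightOffDT, bool))  # True
```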
TOTAL CODE; - This is on a Raspberry pi.
```
from datetime import datetime, timedelta
from time import localtime, strftime
import RPi.GPIO as GPIO
import datetime
import ephem
import pytz
import sys
import tzlocal
Mon = 0
Tue = 1
Wed = 2
Thu = 3
Fri = 4
Sat = 5
Sun = 6
Pin11 = 11 # pin11
Pin12 = 12 # pin12
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BOARD) # Numbers GPIOs by physical location
GPIO.setup(Pin11, GPIO.OUT) # Set LedPin 11 mode is output / deck lights
GPIO.setup(Pin12, GPIO.OUT) # Set LedPin 11 mode is output / path lights
SEC30 = timedelta(seconds=30)
home = ephem.Observer()
# replace lat, long, and elevation to yours
home.lat = '-37.076732'
home.long = '174.939366'
home.elevation = 5
local_timezone = tzlocal.get_localzone() # Gets time zone Pacific/Auckland
sun = ephem.Sun()
sun.compute(home)
fmt = "%d-%b-%Y %H%MUTC"
#Weekend timers
#def lightonTimes_weekend():
#on1 = set via sunsettime
#on2 = datetime.time(04,30,00)
def lightoffTimes_Deviate():
    off1 = datetime.time(23,30,0)
    return off1

# Weekday timers
def lightonTimes_Normal():
    #on1 = set via sunsettime
    on2 = datetime.time(4,30,0)
    return on2

def lightoffTimes_Normal():
    off1 = datetime.time(22,30,0)
    off2 = datetime.time(5,30,0)
    return off1, off2

def dateTimeTomorrow():
    tomorrowDate = datetime.date.today() + datetime.timedelta(days=1)
    return tomorrowDate

def localtimesTZ():
    currentTime = datetime.datetime.now(local_timezone) # Current New Zealand TimeZones.
    todayDate = datetime.date.today()
    tday = todayDate.weekday()
    return currentTime, tday, todayDate

#def ephemtimes_Tomorrow():
def ephemtimes():
    #sun.compute(home)
    nextrise = home.next_rising(sun)
    nextset = home.previous_setting(sun)
    nextriseutc = nextrise.datetime() + SEC30
    nextsetutc = nextset.datetime() + SEC30
    sunrise = nextriseutc.replace(tzinfo=pytz.utc).astimezone(local_timezone)
    sunset = nextsetutc.replace(tzinfo=pytz.utc).astimezone(local_timezone)
    sunriseTime = sunrise.time()
    sunsetTime = sunset.time()
    #print "Sunrise local ", sunrise
    #print "Sunset local ", sunset
    #print "Current time ", currentTime
    #print "Local Time: ", local_timezone
    #print "next sunrise: ", nextriseutc.strftime(fmt)
    #print "next sunset: ", nextsetutc.strftime(fmt)
    return sunrise, sunriseTime, sunset, sunsetTime

if __name__ == '__main__':
    sunrise, sunriseTime, sunset, sunsetTime = ephemtimes() # calls times function to pull sunrise, sunset times.
    currentTime, tday, todayDate = localtimesTZ()
    #print "Current Time " + str(currentTime)
    #print todayDate
    #print sunrise
    #print "Sunset time " + str(sunset)
    #print sunriseTime
    #print sunsetTime
    #print tday
    tomorrowDate = dateTimeTomorrow()
    #print tomorrowDate
    # start loop here
    #off1 >= lightoffTimes_Deviate()
    if (tday == Sun) or (tday == Mon): # timer for weekend (Sunday or Monday)
        #CurrentTime & SunSet time are in full datetime - converted to Local Time
        if currentTime > sunset:
            GPIO.output(Pin11, GPIO.LOW) # Turn GPIO pins on
            GPIO.output(Pin12, GPIO.LOW)
        # ***********************************
        off1 = lightoffTimes_Deviate() #get off time
        print 'error below'
        print todayDate
        print off1
        #### this is where the problems start!!!
        lightOffDT = datetime.datetime.combine(todayDate,off1)
        print lightOffDT
        #print "light off time " + str(lightoffdatetime)
        while True:
            currentTime = localtimesTZ()
            if currentTime >= lightOffDT:
                GPIO.output(Pin11, GPIO.HIGH) # Turn GPIO pins on
                GPIO.output(Pin12, GPIO.HIGH)
                break
            else:
                os.system('cls' if os.name == 'nt' else 'clear')
                print "Current Time " + str(lightoffdatetimepython)
                time.sleep(5)
```
|
2019/07/29
|
[
"https://Stackoverflow.com/questions/57251368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11847814/"
] |
Look at how you have defined `currentTime`:
```
currentTime = localtimesTZ()
```
Your `localtimesTZ()` actually returns a tuple `currentTime, tday, todayDate`, which is what is assigned to `currentTime`.
Not sure why you are doing that; returning just the `currentTime` should be sufficient, since it is a `datetime.datetime` object.
Then you try and compare that tuple to `lightOffDT`, which is a `datetime.datetime` object. Hence the error.
You could try:
```
if currentTime[0] >= lightOffDT:
```
That would actually compare two `datetime.datetime` objects.
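A simplified sketch of the unpacking alternative (the `localtimesTZ` body here is a hypothetical stand-in without the tzlocal dependency):

```python
import datetime

def localtimesTZ():
    # simplified stand-in: the real function also applies tzlocal's timezone
    currentTime = datetime.datetime.now()
    todayDate = datetime.date.today()
    return currentTime, todayDate.weekday(), todayDate

# unpacking the returned tuple is clearer than indexing into it
currentTime, tday, todayDate = localtimesTZ()
lightOffDT = datetime.datetime.combine(todayDate, datetime.time(23, 30, 0))
print(isinstance(currentTime >= lightOffDT, bool))  # True
```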
|
It sounds like you are just trying to do a simple comparison of a manually set date to check whether that day is today. If that is the case, it is simpler to use the same `datetime` class on both sides. Below is a simple example checking if today (manually defined) equals today (defined by Python):
```
from datetime import datetime

todayDate = datetime.today()
someOtherDate = datetime(year=2019, month=7, day=29)
if todayDate.date() == someOtherDate.date():
    print(True)
```
outputs
```
True
```
| 6,145
|
6,265,517
|
Can anyone name a language with all the following properties:
1. Has algebraic data types
2. Has good support for linear algebra
3. Is fast(-er than python, at least)
4. Has at least some functional programming ability (I don't need monads)
5. Has been heard of, is not dead, and can interface on a C calling level
|
2011/06/07
|
[
"https://Stackoverflow.com/questions/6265517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/787480/"
] |
Scala
=====
According to [Wikipedia](http://en.wikipedia.org/wiki/Algebraic_data_type) it has algebraic datatypes. And it is [fast](http://www.scribd.com/doc/57021877/Loop-Recognition-in-C-Java-Go-Scala). Scala is both functional and object oriented. And it's a young language with a growing userbase but still to some extent compatible with Java.
There is a Scala library [Scalala](http://code.google.com/p/scalala/) for linear algebra:
>
> A high performance numeric linear algebra library for Scala, with rich Matlab-like operators on vectors and matrices; a library of numerical routines
>
>
>
|
I'd say C and C++. And they work well with:
* [Matlab](http://www.mathworks.com/products/matlab/)
* [Maple](http://www.maplesoft.com/products/Maple/)
| 6,147
|
54,722,389
|
First off, let me say that yes, I have researched this extensively for a few days now with no luck. I have looked at numerous examples and similar situations such as [this one](https://stackoverflow.com/questions/35149265/python-super-method-class-name-not-defined), but so far nothing has been able to resolve my issue.
My problem is I have a Python project with a primary class containing two nested classes (yea yea I know), one of which is a subclass of the other. I cannot figure out why I keep getting `NameError: global name 'InnerSubClass' is not defined`.
I understand scoping (both classes in question are in the same scope) but nothing I try seems to resolve the issue (I want to keep the two classes nested at a minimum) despite this problem working for other people.
Here is a simple example of what I am trying to do:
```
class SomeClass(object):
    def __init__(self):
        """lots of other working stuff"""

class MainClass(object):
    def __init__(self):
        self.stuff = []
        self.moreStuffs = []

    class InnerClass(object):
        def __init__(self, thing, otherThing):
            self.thing = thing
            self.otherThing = otherThing
            self.otherStuff = []

    class InnerSubClass(InnerClass):
        def __init__(self, thing, otherThing, newThing):
            super(InnerSubClass).__init__(thing, otherThing)
            self.newThing = newThing

    """other code that worked before the addition of 'InnerSubClass'"""

    def doSomething(self):
        innerclass = self.InnerSubClass('thisthing', 'thatthing', 'thingthing')
        print("just more thing words %s" % innerclass.newThing)

myThing = MainClass()
myThing.doSomething()
```
I have tried changing `super(InnerSubClass).__init__(thing, otherThing)`
to
`super(InnerClass.InnerSubClass).__init__(thing, otherThing)`
and even
`super(MainClass.InnerClass.InnerSubClass).__init__(thing, otherThing)` with no success. I made "InnerSubClass" inherit straight from object `InnerSubClass(object):` etc, and it still doesn't work.
Granted I am far from a seasoned python developer and come from mostly other compiled OO languages, and can't seem to wrap my head around why this isn't working. If I get rid of the "InnerSubClass", everything works just fine.
It doesn't seem like python offers "private" classes and functions like other languages, which is fine but I would like to utilize the nesting to at least keep objects "lumped" together. In this case, nothing should be instantiating "InnerClass" or "InnerSubClass" except functions in "MainClass".
Please provide helpful advice and explain why it doesn't work as expected with background information on how this should be done properly. If this was as simple as it seems, it would have been figured out by now.
**edit:** for clarification, this is only for v2
|
2019/02/16
|
[
"https://Stackoverflow.com/questions/54722389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1348576/"
] |
[Here](https://stackoverflow.com/questions/38778158/how-to-do-nested-class-and-inherit-inside-the-class?rq=1) they do it like this
```
super(MainClass.InnerSubClass, self).__init__(thing, otherThing)
```
So that you can test it here is the full working example
```
class SomeClass(object):
    def __init__(self):
        """lots of other working stuff"""

class MainClass(object):
    def __init__(self):
        self.stuff = []
        self.moreStuffs = []

    class InnerClass(object):
        def __init__(self, thing, otherThing):
            self.thing = thing
            self.otherThing = otherThing
            self.otherStuff = []

    class InnerSubClass(InnerClass):
        def __init__(self, thing, otherThing, newThing):
            super(MainClass.InnerSubClass, self).__init__(thing, otherThing)
            self.newThing = newThing

    """other code that worked before the addition of 'InnerSubClass'"""

    def doSomething(self):
        innerclass = self.InnerSubClass('thisthing', 'thatthing', 'thingthing')
        print("just more thing words %s" % innerclass.newThing)
        print("and I also inherit from InnerClass %s" % innerclass.otherThing)

myThing = MainClass()
myThing.doSomething()
```
The output is
```
just more thing words thingthing
and I also inherit from InnerClass thatthing
```
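For completeness (the question is tagged as Python 2, so this is only a side note): in Python 3 the zero-argument `super()` sidesteps the naming problem entirely, because it does not look the class up by name at all. A minimal sketch:

```python
class MainClass:
    class InnerClass:
        def __init__(self, thing, otherThing):
            self.thing = thing
            self.otherThing = otherThing

    class InnerSubClass(InnerClass):
        def __init__(self, thing, otherThing, newThing):
            super().__init__(thing, otherThing)  # no class-name lookup involved
            self.newThing = newThing

obj = MainClass.InnerSubClass('a', 'b', 'c')
print(obj.thing, obj.newThing)  # a c
```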
|
If you have reasons for not using `MainClass.InnerSubClass`, you can also use `type(self)` or `self.__class__` ([OK, but which one](https://stackoverflow.com/questions/1060499/difference-between-typeobj-and-obj-class)) inside `__init__` to get the containing class. This works many layers deep (which shouldn't happen anyway) and requires the argument passed to `super` to be the type of the instance (which it should be anyway), **but it breaks if you subclass**, as seen [here](https://ideone.com/Ld8PAX). The concept might be clearer to you than the scoping rules:
```
class MainClass:
    class Inner:
        pass

    class InnerSub(Inner):
        def __init__(self):
            print(super(self.__class__))
            print(super(type(self)))

MainClass().InnerSub()
```
| 6,150
|
17,903,820
|
The code below creates a layout and displays some text in it. Next, the layout is displayed on the console screen using the raw display module from the urwid library. (More info on my complete project can be gleaned from the questions at [widget advice for a console project](https://stackoverflow.com/questions/17846930/required-widgets-for-displaying-a-1d-console-application) and [urwid for a console project](https://stackoverflow.com/questions/17381319/using-urwid-to-create-a-2d-console-application). My skype help request is [here](https://stackoverflow.com/questions/17846113/widget-to-choose-for-1d-urwid-application).) However, running the code fails with an AssertionError, as described below.
The error on running the code is:
```
Traceback (most recent call last):
  File "./yamlUrwidUIPhase6.py", line 98, in <module>
    main()
  File "./yamlUrwidUIPhase6.py", line 92, in main
    form.main()
  File "./yamlUrwidUIPhase6.py", line 48, in main
    self.view = formLayout()
  File "./yamlUrwidUIPhase6.py", line 77, in formLayout
    ui.draw_screen(dim, frame.render(dim, True))
  File "/usr/lib64/python2.7/site-packages/urwid/raw_display.py", line 535, in draw_screen
    assert self._started
AssertionError
```
The code :
```
import sys
sys.path.append('./lib')
import os
from pprint import pprint
import random
import urwid
ui=urwid.raw_display.Screen()
class FormDisplay(object):
    def __init__(self):
        global ui
        self.ui = ui
        palette = ui.register_palette([
            ('Field', 'dark green, bold', 'black'),  # information fields, Search: etc.
            ('Info', 'dark green', 'black'),  # information in fields
            ('Bg', 'black', 'black'),  # screen background
            ('InfoFooterText', 'white', 'dark blue'),  # footer text
            ('InfoFooterHotkey', 'dark cyan, bold', 'dark blue'),  # hotkeys in footer text
            ('InfoFooter', 'black', 'dark blue'),  # footer background
            ('InfoHeaderText', 'white, bold', 'dark blue'),  # header text
            ('InfoHeader', 'black', 'dark blue'),  # header background
            ('BigText', RandomColor(), 'black'),  # main menu banner text
            ('GeneralInfo', 'brown', 'black'),  # main menu text
            ('LastModifiedField', 'dark cyan, bold', 'black'),  # Last modified:
            ('LastModifiedDate', 'dark cyan', 'black'),  # info in Last modified:
            ('PopupMessageText', 'black', 'dark cyan'),  # popup message text
            ('PopupMessageBg', 'black', 'dark cyan'),  # popup message background
            ('SearchBoxHeaderText', 'light gray, bold', 'dark cyan'),  # field names in the search box
            ('SearchBoxHeaderBg', 'black', 'dark cyan'),  # field name background in the search box
            ('OnFocusBg', 'white', 'dark magenta')  # background when a widget is focused
        ])
        urwid.set_encoding('utf8')

    def main(self):
        global ui
        #self.view = ui.run_wrapper(formLayout)
        self.view = formLayout()
        self.ui.start()
        self.loop = urwid.MainLoop(self.view, self.palette, unhandled_input=self.unhandled_input)
        self.loop.run()

    def unhandled_input(self, key):
        if key == 'f8':
            quit()
            return

def formLayout():
    global ui
    text1 = urwid.Text("Urwid 3DS Application program - F8 exits.")
    text2 = urwid.Text("One mission accomplished")
    textH = urwid.Text("topmost Pile text")
    cols = urwid.Columns([text1, text2])
    pile = urwid.Pile([textH, cols])
    fill = urwid.Filler(pile)
    textT = urwid.Text("Display")
    textSH = urwid.Text("Pile text in Frame")
    textF = urwid.Text("Good progress !")
    frame = urwid.Frame(fill, header=urwid.Pile([textT, textSH]), footer=textF)
    dim = ui.get_cols_rows()
    ui.draw_screen(dim, frame.render(dim, True))
    return

def RandomColor():
    '''Pick a random color for the main menu text'''
    listOfColors = ['dark red', 'dark green', 'brown', 'dark blue',
                    'dark magenta', 'dark cyan', 'light gray',
                    'dark gray', 'light red', 'light green', 'yellow',
                    'light blue', 'light magenta', 'light cyan', 'default']
    color = listOfColors[random.randint(0, 14)]
    return color

def main():
    form = FormDisplay()
    form.main()

########################################
##### MAIN ENTRY POINT
########################################
if __name__ == '__main__':
    main()
```
I don't want to change the function formLayout, as I intend to add more to this basic code framework: another function will be added that repeatedly calls formLayout to keep updating the screen based on values read from a yml file. I already have separate code that deals with reading the yaml file and extracting ordered dictionaries out of it. After figuring out how to get basic urwid console output working, I can move on to integrating both to create my final application.
|
2013/07/28
|
[
"https://Stackoverflow.com/questions/17903820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2534758/"
] |
It means that `self._started` evaluated to `False`.
```
assert <Some boolean expression>, "Message if exp is False"
```
If the expression evaluates to True, nothing special happens, but if it evaluates to False an AssertionError exception is raised.
You can try/except the line if you want a cleaner message:
```
try:
    assert self._started, "The screen is not up and running !"
except AssertionError as e:
    print "Oops, something happened: " + str(e)
```
Some documentation: <http://www.tutorialspoint.com/python/assertions_in_python.htm>
|
You call `formLayout` before you `start` the screen. `formLayout` calls `ui.draw_screen`, which requires that the screen has been started.
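To illustrate the ordering requirement with a toy stand-in (this is not urwid's actual code, just the same guard pattern):

```python
class Screen:
    """Toy stand-in mirroring urwid's started-screen guard."""
    def __init__(self):
        self._started = False

    def start(self):
        self._started = True

    def draw_screen(self):
        assert self._started  # same kind of check that fails in raw_display.py
        return "drawn"

ui = Screen()
ui.start()               # must happen before any draw_screen() call
print(ui.draw_screen())  # drawn
```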
| 6,152
|
19,006,095
|
I wanted to find the non-unique elements in the list, but I am not able to figure out why this is not working in the code below.
```
>>> d = [1, 2, 1, 2, 4, 4, 5, 'a', 'b', 'a', 'b', 'c', 6,'f',3]
>>> for i in d:
...     if d.count(i) == 1:
...         d.remove(i)
...
>>> d
[1, 2, 1, 2, 4, 4, 'a', 'b', 'a', 'b', 6, 3]
```
6 and 3 should have been removed.
Whereas, if I use
```
d = [1, 2, 1, 2, 4, 4, 5, 'a', 'b', 'a', 'b', 'c']
```
I get the correct answer. Please explain what is happening; I am confused!!!
I am using python 2.7.5.
|
2013/09/25
|
[
"https://Stackoverflow.com/questions/19006095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/969731/"
] |
Thanks for all the answers and comments!
I thought for a while and found another way along the lines of my original code, so I am posting it. (Removing items from the list you are iterating over shifts the remaining elements, so the iterator skips some of them; iterating over `d` while removing from the copy `e` avoids that.)
```
>>> d = [1, 2, 1, 2, 4, 4, 5, 'a', 'b', 'a', 'b', 'c', 6, 'f', 3]
>>> e = d[:]  # just a bit of trick/spice: iterate over d, remove from the copy e
>>> for i in d:
...     if d.count(i) == 1:
...         e.remove(i)
...
>>> e
[1, 2, 1, 2, 4, 4, 'a', 'b', 'a', 'b']
```
@arshajii, Your explanation led me to this trick. Thanks !
|
You can also do it like this:
```
data = [1, 2, 3, 4, 1, 2, 3, 1, 2, 1, 5, 6]
first_list = []
second_list = []
for i in data:
    if data.count(i) == 1:
        first_list.append(i)
    else:
        second_list.append(i)
print(second_list)
```
Result
======
[1, 2, 3, 1, 2, 3, 1, 2, 1]
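As an aside (not from either answer above), `collections.Counter` computes all the counts in one pass instead of calling `list.count` once per element:

```python
from collections import Counter

d = [1, 2, 1, 2, 4, 4, 5, 'a', 'b', 'a', 'b', 'c', 6, 'f', 3]
counts = Counter(d)                             # one pass over the list
non_unique = [x for x in d if counts[x] > 1]    # keep only repeated elements
print(non_unique)  # [1, 2, 1, 2, 4, 4, 'a', 'b', 'a', 'b']
```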
| 6,153
|
15,000,311
|
I am having trouble starting a Python script and getting the parameters I send to it.

As you can see, if I start the following test script with the python command it works; if not, no arguments are passed to the script :/
```
import optparse
import sys
oOptParse = optparse.OptionParser()
oOptParse.add_option("--arg", dest="arg", help="Test param")
oOptParse.set_default("arg", None)
if len(sys.argv) == 1:
    oOptParse.print_help()
    sys.exit(1)
aOptions = oOptParse.parse_args( )
oOptions = aOptions[0]
print (oOptions.arg)
```
Do you have any idea what could be the problem ?
Thanks a lot !
|
2013/02/21
|
[
"https://Stackoverflow.com/questions/15000311",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2095037/"
] |
Check out [ImageResizer](http://imageresizing.net) - it's a suite of NuGet packages designed for this exact purpose.
It runs eBay in Denmark, MSN Olympics, and a few other big sites.
Dynamic image processing can be done safely and efficiently, but not in a sane amount of code. It's [trickier than it appears](http://www.nathanaeljones.com/blog/2009/20-image-resizing-pitfalls).
|
I wouldn't recommend this, but you can do the following:
```
using (Image img = Image.FromStream(originalImage))
{
    using (Bitmap bitmap = new Bitmap(img, width, height))
    {
        bitmap.Save(outputStream, ImageFormat.Jpeg);
    }
}
```
Be aware that this could cause OutOfMemoryException.
| 6,163
|
14,486,802
|
What are common uses for Python's built-in `coerce` function? I can see applying it if I do not know the `type` of a numeric value [as per the documentation](http://docs.python.org/2/library/functions.html#coerce), but do other common usages exist? I would guess that `coerce()` is also called when performing arithmetic computations, *e.g.* `x = 1.0 +2`. It's a built-in function, so presumably it has some potential common usage?
|
2013/01/23
|
[
"https://Stackoverflow.com/questions/14486802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/839375/"
] |
It's a leftover from [early Python](http://docs.python.org/release/1.5.2/ref/numeric-types.html); it basically converts a pair of numbers to the same underlying numeric type, e.g.
```
>>> type(10)
<type 'int'>
>>> type(10.0101010)
<type 'float'>
>>> nums = coerce(10, 10.001010)
>>> type(nums[0])
<type 'float'>
>>> type(nums[1])
<type 'float'>
```
It also exists to let objects act like numbers with old-style classes.
(A bad example of its usage would be ...)
```
>>> class bad:
...     """ Dont do this, even if coerce was a good idea this simply
...     makes itself int ignoring type of other ! """
...     def __init__(self, s):
...         self.s = s
...     def __coerce__(self, other):
...         return (other, int(self.s))
...
>>> coerce(10, bad("102"))
(102, 10)
```
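Note that `coerce` is Python 2 only; it was removed in Python 3. A rough sketch of what it does for the built-in numeric tower (`coerce2` is a made-up name, not a real API, and it doesn't reproduce Python 2's `long` behavior):

```python
def coerce2(a, b):
    # promote both numbers to the "widest" type either of them has
    # (int -> float -> complex), roughly what Python 2's coerce() did
    for t in (complex, float, int):
        if isinstance(a, t) or isinstance(b, t):
            return t(a), t(b)
    raise TypeError("numbers required")

print(coerce2(10, 10.5))  # (10.0, 10.5)
```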
|
*Core Python Programming* says:
>
> The `coerce()` function provides the programmer a way to perform numeric type conversion of two values explicitly, rather than relying on the Python interpreter to do it.
>
>
>
e.g.
```
>>> coerce(1, 2)
(1, 2)
>>>
>>> coerce(1.3, 134L)
(1.3, 134.0)
>>>
>>> coerce(1, 134L)
(1L, 134L)
>>>
>>> coerce(1j, 134L)
(1j, (134+0j))
>>>
>>> coerce(1.23-41j, 134L)
((1.23-41j), (134+0j))
```
| 6,164
|