| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
14,411,394
|
I'm trying to send a signal to the django development server to kill the parent and child processes.
```
$ python manage.py runserver
Validating models...
0 errors found
Django version 1.4.1, using settings 'myproject.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
$ ps axf
26077 pts/12 Ss 0:00 \_ -bash
4189 pts/12 S+ 0:00 | \_ python manage.py runserver
4194 pts/12 Sl+ 0:00 | \_ /myproject/.virtualenv/bin/python manage.py runserver
$ kill -s SIGINT 4189
$ ps axf
4194 pts/12 Sl 0:00 /sh/myproject/.virtualenv/bin/python manage.py runserver
```
My understanding is that SIGINT should emulate pressing Ctrl-C in the terminal, but notice that SIGINT terminates the parent, 4189, but not the child, 4194. Same behavior for SIGKILL, SIGTERM, SIGSTOP. Using Ctrl-C from the terminal kills both as expected.
Is there a way to terminate the parent in a way that also kills the child without knowing the child's PID?
|
2013/01/19
|
[
"https://Stackoverflow.com/questions/14411394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246871/"
] |
Put a dash in front of the process ID; this sends the signal to the whole process group.
```
kill -s SIGINT -4189
```
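For completeness, a minimal sketch of the same idea from within Python, using `os.killpg` (the PID 4189 is taken from the question; look it up dynamically in real code):
```
import os
import signal

# Send SIGINT to the entire process group that PID 4189 belongs to
os.killpg(os.getpgid(4189), signal.SIGINT)
```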
|
`kill -9 4189`
Have a try, it should work!
| 6,165
|
11,941,027
|
Python beginner here.
I have two text files with the same format of tab-delimited information. They contain rows with 3 columns (identifier, chromosome and position), e.g.:
File 1:
```
2323 2 125
2324 3 754
```
... etc
File 2:
```
2323 2 150
2324 3 12000
```
... etc
I want to create a list or matrix (not sure exactly what is best or how this works, maybe a list of lists that becomes a matrix?!) by going through each identifier (the first column in each row) in one file and associating it with its position (column 3), then finding the matching identifier in the next file and also saving the other position (column 3) from that file. So in the end each identifier will be associated with two different positions from the two different files.
This is what I need help with. For the next step I will look for identifiers with the largest numerical difference between positions.
Any help, tips or solutions are greatly appreciated, I am a python beginner with a very basic knowledge.
Many thanks in advance!
Rubal
|
2012/08/13
|
[
"https://Stackoverflow.com/questions/11941027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/964689/"
] |
There can be several answers to your question:
1. If you plan to do extensive computations with matrices, I advise you to look at the [numpy](http://numpy.scipy.org/) library, which is very efficient. You can see how to create matrices with numpy [here](http://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.html).
2. The second possible answer to your question is to use the [biopython](http://biopython.org/wiki/Main_Page) library (I'm inferring from the chromosome data that this is bioinformatics work).
3. You can use Python nested lists to create matrices.
Here's a code snippet of how to do that (assuming we're reading from File 2)
```
matrix = []
with open(path_to_file2, 'rt') as f:
    for line in f:
        # split() handles the tab delimiters; list() is needed on Python 3
        matrix.append(list(map(int, line.split())))
```
You can then get values of the created matrix:
```
matrix[0] # First row == [2323, 2, 150]
matrix[0][1] # Second column, first row == 2
```
|
You could use a dictionary having the identifier as the key and a list of the positions as the value. Then you can calculate the difference between the positions and have it as the 3rd element of the list. You can then iterate through the dictionary finding the largest value in position [2] of the dictionary's values.
```
d = {}
# adjust the file names/paths as needed
with open('file1') as f1:
    for line in f1:
        identifier, chromosome, position = line.split()
        d[identifier] = [int(position)]
with open('file2') as f2:
    for line in f2:
        identifier, chromosome, position = line.split()
        d[identifier].append(int(position))
        d[identifier].append(d[identifier][1] - d[identifier][0])

maxDiff = 0
for x in d:
    value = d[x][2]
    if value > maxDiff:
        maxDiff = value
```
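Once the dictionary is built, a more idiomatic way to find the identifier with the largest difference is `max(d, key=lambda k: d[k][2])`.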
| 6,171
|
61,232,982
|
This is my code:
```
users.age.mean().astype(int64)
```
(where users is the name of dataframe and age is a column in it)
This is the error I am getting:
```
AttributeError
Traceback (most recent call last)
<ipython-input-29-10b672e7f7ae> in <module>
----> 1 users.age.mean().astype(int64)
AttributeError: 'float' object has no attribute 'astype'
```
|
2020/04/15
|
[
"https://Stackoverflow.com/questions/61232982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13322571/"
] |
`users.age.mean()` returns a float, not a Series. Floats don't have `astype`; only pandas Series (and NumPy arrays) do.
Try:
`x = numpy.int64(users.age.mean())`
Or:
`x = int(users.age.mean())`
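A minimal runnable sketch (the `users` frame here is made up for illustration):
```
import pandas as pd

users = pd.DataFrame({"age": [23.0, 31.0, 27.0]})
x = int(users.age.mean())  # mean() returns a plain float, so int() works
print(x)  # 27
```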
|
Try wrapping your expression with `int`:
```
X = int(users.age.mean())
```
Hope it helps!
| 6,173
|
997,419
|
Is there any lib available to connect to Yahoo Messenger from Python, using either the standard protocol or HTTP?
|
2009/06/15
|
[
"https://Stackoverflow.com/questions/997419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9789/"
] |
Google is your friend.
The [Python Package Index](http://pypi.python.org/pypi) has [several](http://pypi.python.org/pypi?%3Aaction=search&term=yahoo&submit=search) modules to do with Yahoo, including this [one](http://pypi.python.org/pypi/pyahoolib/0.2) which matches your requirements.
|
There is also the [Yahoo IM SDK](http://developer.yahoo.com/messenger/guide/index.html) that might help.
| 6,174
|
5,342,359
|
Is it possible to insert a Python tuple into a PostgreSQL database?
|
2011/03/17
|
[
"https://Stackoverflow.com/questions/5342359",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/654796/"
] |
Yes it is; PostgreSQL supports arrays as column types.
```
CREATE TABLE tuples_table (
tuple_of_strings text[],
tuple_of_ints integer[]
);
```
Then inserting is done like this:
```
INSERT INTO tuples_table VALUES
    ('{"a","b","c"}', '{1,2}'),
    ('{"e","f"}', '{3,4,5}');
```
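From the Python side, a hedged sketch using psycopg2, which adapts Python *lists* to PostgreSQL arrays (so convert the tuple first; the connection string is an assumption):
```
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed connection parameters
cur = conn.cursor()
t = ('a', 'b', 'c')
# psycopg2 maps Python lists to PostgreSQL arrays, so tuple -> list
cur.execute("INSERT INTO tuples_table (tuple_of_strings) VALUES (%s)", (list(t),))
conn.commit()
```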
|
Really we need more information. What data is **inside** the tuple? Is it just integers? Just strings? Is it megabytes of images?
If you had a Python tuple like `(4,6,2,"Hello",7)` you could insert **the string** `'(4,6,2,"Hello",7)'` into a Postgres database, but that's probably not the answer you're looking for.
You really need to figure out what data you're really trying to store before you can figure out how/where to store it.
---
**EDIT:** So the short answer is "no", you cannot store an ***arbitrary*** Python tuple in a postgres database, but there's probably some way to take whatever is *inside* the tuple and store it somewhere useful.
| 6,175
|
65,764,302
|
I am working through some exercises in Python. My task is to create a program that takes a number and divides it by 2 if it is even, or multiplies it by 3 and adds 1 if it is odd. It should ask for an input and afterwards wait for another input.
My code looks like this, mind you I am new to all this.
```
print('Please enter a number:')

def numberCheck(c):
    if c % 2 == 0:
        print(c // 2)
    elif c % 2 == 1:
        print(3 * c + 1)

c = int(input())
numberCheck(c)
```
I want it to print "Please enter a number" only once, but after running I want it to wait for another number. How would I do this?
I tried a `for` loop, but that uses a range, and I don't want to set a range unless there is a way to make it infinite.
|
2021/01/17
|
[
"https://Stackoverflow.com/questions/65764302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15025453/"
] |
So my understanding is that you want the function to run indefinitely. This is my approach, using a `while` loop:
```
print("Enter a number")
while True:
c = int(input(""))
numberCheck(c)
```
|
You could use a `while` loop with `True`:
```py
while True:
    c = int(input())
    numberCheck(c)
```
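If you also want a way to stop the loop without Ctrl-C, a small sketch with an assumed convention that an empty line quits (`numberCheck` is the function from the question):
```py
while True:
    raw = input()
    if not raw:  # empty input ends the loop (an assumed convention)
        break
    numberCheck(int(raw))
```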
| 6,179
|
2,341,972
|
I'm writing a basic HTML proxy in Python (3), and so far I'm not using prebuilt classes like http.server.
I'm just starting a socket which accepts connections:
```
self.listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.listen_socket.bind((socket.gethostname(), 4321))
self.listen_socket.listen(5)
(a, b) = self.listen_socket.accept()
content = a.recv(100000)
```
Now `content` stores data like:
```
b'GET http://www.google.com/firefox HTTP/1.1\r\nHost: www.google.com\r\nUser-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2) Gecko/20100207 Namoroka/3.6\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 115\r\nProxy-Connection: keep-alive\r\nCookie: PREF=ID=1ac935f4d893f655:U=73a4849dc5fc23a4:TM=1266851688:LM=1267023171:S=Log1PmXRMlNjX3Of; NID=32=EnrZjTqILuW2_aMLtgsJ96FdEMF3s5FoMJSVq9GMr9dhLhTAd3F5RcQ3ImyVBiO2eYNKKMhzlGg7r8zXmeSq50EigS5sdKtCL9BMHpgCxZazA2NiyB0bTRWhp8-0BObn\r\n\r\n'
```
How can I run a regexp over it? Converting it to a string does not work for me.
Ultimately, I need to find out the requested address, like `http://www.google.com/firefox` in this case. Is there a parser that I do not know of? How can I achieve this?
Thanks in advance.
|
2010/02/26
|
[
"https://Stackoverflow.com/questions/2341972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/258267/"
] |
You need to include an encoding when converting to a string, for example use:
```
>>> str(b'GET http://...', 'UTF-8')
'GET http://...'
```
If you don't use an encoding then as you've discovered you get something a little less helpful:
```
>>> str(b'GET http://...')
"b'GET http://...'"
```
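Once decoded, a small sketch of pulling the request target out of the first line (`content` is the bytes object from the question):
```
import re

request = content.decode('ISO-8859-1')  # HTTP headers are byte-oriented; latin-1 never fails
match = re.match(r'(\w+)\s+(\S+)\s+HTTP/', request)
if match:
    method, url = match.group(1), match.group(2)
    print(url)  # http://www.google.com/firefox
```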
|
Also, you might want to check the `*HTTPServer` classes. They wrap the work of being an HTTP server and will also parse headers for you.
If you can't, well, at the very least they will provide source code examples on how to do it!
| 6,182
|
19,806,494
|
I'm attempting to write a cython interface to the complex version of the MUMPS solver (zmumps). I'm running into some problems as I have no previous experience with either C or cython. Following the example of the [pymumps package](https://github.com/bfroehle/pymumps) I was able to get the real version of the code (dmumps) to work.
I believe that my problem is the pointers to the ZMUMPS\_COMPLEX structures.
So far I have the following (lifted heavily from [pymumps](https://github.com/bfroehle/pymumps)):
**zmumps\_c.pxd:**
```
from libc.string cimport strncpy

cdef extern from "mumps_c_types.h":
    ctypedef struct ZMUMPS_COMPLEX "ZMUMPS_COMPLEX":
        double r
        double i

cdef extern from "zmumps_c.h":
    ctypedef int MUMPS_INT
    ctypedef struct c_ZMUMPS_STRUC_C "ZMUMPS_STRUC_C":
        MUMPS_INT sym, par, job
        MUMPS_INT comm_fortran    # Fortran communicator
        MUMPS_INT n
        # Assembled entry
        MUMPS_INT nz
        MUMPS_INT *irn
        MUMPS_INT *jcn
        ZMUMPS_COMPLEX *a
        # RHS and statistics
        ZMUMPS_COMPLEX *rhs
        MUMPS_INT infog[40]
    void c_zmumps_c "zmumps_c" (c_ZMUMPS_STRUC_C *)
```
**zmumps\_c.pyx**
```
cdef class ZMUMPS_STRUC_C:
    cdef c_ZMUMPS_STRUC_C ob

    property sym:
        def __get__(self): return self.ob.sym
        def __set__(self, value): self.ob.sym = value
    property par:
        def __get__(self): return self.ob.par
        def __set__(self, value): self.ob.par = value
    property job:
        def __get__(self): return self.ob.job
        def __set__(self, value): self.ob.job = value
    property comm_fortran:
        def __get__(self): return self.ob.comm_fortran
        def __set__(self, value): self.ob.comm_fortran = value
    property n:
        def __get__(self): return self.ob.n
        def __set__(self, value): self.ob.n = value
    property nz:
        def __get__(self): return self.ob.nz
        def __set__(self, value): self.ob.nz = value
    property irn:
        def __get__(self): return <long> self.ob.irn
        def __set__(self, long value): self.ob.irn = <MUMPS_INT*> value
    property jcn:
        def __get__(self): return <long> self.ob.jcn
        def __set__(self, long value): self.ob.jcn = <MUMPS_INT*> value
    property a:
        def __get__(self): return <long> self.ob.a
        def __set__(self, long value): self.ob.a = <ZMUMPS_COMPLEX*> value
    property rhs:
        def __get__(self): return <long> self.ob.rhs
        def __set__(self, long value): self.ob.rhs = <ZMUMPS_COMPLEX*> value
    property infog:
        def __get__(self):
            cdef MUMPS_INT[:] view = self.ob.infog
            return view

def zmumps_c(ZMUMPS_STRUC_C s not None):
    c_zmumps_c(&s.ob)
```
In my Python code I'm able to set `irn` and `jcn` using:
```
import zmumps_c
import numpy as np
MUMPS_STRUC_C = staticmethod(zmumps_c.ZMUMPS_STRUC_C)
id = MUMPS_STRUC_C()
x = np.r_[1:10]
id.irn = x.__array_interface__['data'][0]
```
However, I have no idea how to set the values of `a` or `rhs`. Any help would be greatly appreciated.
|
2013/11/06
|
[
"https://Stackoverflow.com/questions/19806494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2661958/"
] |
From a user standpoint, hovering 1 cm over the screen is highly inconvenient compared to placing a finger on the screen. Swipes seen by a front-facing camera with a small aperture will be contaminated with motion blur at any reasonable swipe speed.
A few years back I solved this problem by considering how the motion blur of a swipe would eventually occlude the image in the background. In particular, if the background has some gradient, it will be erased by the motion blur of the moving hand. Thus if you histogram the vertical gradient, you will see a hole moving through your histogram when you swipe, and the direction of motion of this hole is your swipe direction.
Two more pointers: if there is not much gradient in the background, one can simply reduce the image resolution and work with the gradient image of the hand, which is less affected by motion blur at lower resolutions. When the camera itself moves, the edge histogram will translate globally, as opposed to a hole moving through the histogram. Finally, extreme phone rotations can be detected by the embedded gyroscope to avoid false positives.
|
You can't detect the motion vector with the proximity sensor. But there usually is a much smarter sensor that allows for much more precision: the front camera. It is more complex to read gestures with a camera, but you can definitely do that with OpenCV, for example.
| 6,185
|
50,242,147
|
It's easy in python to calculate simple permutations using [itertools.permutations()](https://docs.python.org/3/library/itertools.html#itertools.permutations).
You can even find some [possible permutations of multiple lists](https://stackoverflow.com/q/2853212).
```
import itertools
s=[ [ 'a', 'b', 'c'], ['d'], ['e', 'f'] ]
for l in list(itertools.product(*s)):
    print(l)
('a', 'd', 'e')
('a', 'd', 'f')
('b', 'd', 'e')
('b', 'd', 'f')
('c', 'd', 'e')
('c', 'd', 'f')
```
It's also possible to find [permutations of different lengths](https://stackoverflow.com/a/5898031/99923).
```
import itertools
s = [1, 2, 3]
for L in range(0, len(s)+1):
    for subset in itertools.combinations(s, L):
        print(subset)
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 3)
(1, 2, 3)
```
How would you find permutations of all possible 1) **lengths**, 2) **orders**, and 3) from **multiple lists**?
I would assume the first step would be to combine the lists into one. A list will not de-dup items like a set would.
```
s=[ [ 'a', 'b', 'c'], ['d'], ['e', 'f'] ]
('a', 'b')
('a', 'c')
('a', 'd')
('a', 'e')
('a', 'f')
...
('b', 'a')
('c', 'a')
...
('a', 'b', 'c', 'd', 'e')
...
('a', 'b', 'c', 'd', 'e', 'f')
...
('f', 'a', 'b', 'c', 'd', 'e')
```
|
2018/05/08
|
[
"https://Stackoverflow.com/questions/50242147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/99923/"
] |
Like you suggested, do:
```
s = [x for y in s for x in y]
```
and then use your solution for finding permutations of different lengths:
```
for L in range(0, len(s)+1):
    for subset in itertools.combinations(s, L):
        print(subset)
```
would find:
```
()
('a',)
('b',)
('c',)
('d',)
('e',)
('f',)
('a', 'b')
('a', 'c')
('a', 'd')
('a', 'e')
('a', 'f')
('b', 'c')
('b', 'd')
('b', 'e')
('b', 'f')
('c', 'd')
('c', 'e')
('c', 'f')
('d', 'e')
('d', 'f')
('e', 'f')
('a', 'b', 'c')
('a', 'b', 'd')
('a', 'b', 'e')
('a', 'b', 'f')
('a', 'c', 'd')
('a', 'c', 'e')
('a', 'c', 'f')
('a', 'd', 'e')
('a', 'd', 'f')
('a', 'e', 'f')
('b', 'c', 'd')
('b', 'c', 'e')
('b', 'c', 'f')
('b', 'd', 'e')
('b', 'd', 'f')
('b', 'e', 'f')
('c', 'd', 'e')
('c', 'd', 'f')
('c', 'e', 'f')
('d', 'e', 'f')
('a', 'b', 'c', 'd')
('a', 'b', 'c', 'e')
('a', 'b', 'c', 'f')
('a', 'b', 'd', 'e')
('a', 'b', 'd', 'f')
('a', 'b', 'e', 'f')
('a', 'c', 'd', 'e')
('a', 'c', 'd', 'f')
('a', 'c', 'e', 'f')
('a', 'd', 'e', 'f')
('b', 'c', 'd', 'e')
('b', 'c', 'd', 'f')
('b', 'c', 'e', 'f')
('b', 'd', 'e', 'f')
('c', 'd', 'e', 'f')
('a', 'b', 'c', 'd', 'e')
('a', 'b', 'c', 'd', 'f')
('a', 'b', 'c', 'e', 'f')
('a', 'b', 'd', 'e', 'f')
('a', 'c', 'd', 'e', 'f')
('b', 'c', 'd', 'e', 'f')
('a', 'b', 'c', 'd', 'e', 'f')
```
If you want to distinguish e.g. `('d', 'e', 'f')` from `('f', 'e', 'd')` (thanks [@Kefeng91](https://stackoverflow.com/users/9729313/kefeng91) for pointing this out) and others, replace `itertools.combinations` with `itertools.permutations`, like [@YakymPirozhenko](https://stackoverflow.com/users/4585963/yakym-pirozhenko) suggests.
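Putting both steps together, a minimal sketch that flattens the lists and emits every ordering of every non-empty length:
```
import itertools

s = [['a', 'b', 'c'], ['d'], ['e', 'f']]
flat = [x for y in s for x in y]

for L in range(1, len(flat) + 1):
    for p in itertools.permutations(flat, L):
        print(p)
```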
|
Here's a simple one-liner (replace `feature_cols` with your own list, e.g. `s`):
**Combinations:**
```py
[combo for i in range(1, len(feature_cols) + 1) for combo in itertools.combinations(feature_cols, i) ]
```
**Permutations:**
```py
[combo for i in range(1, len(feature_cols) + 1) for combo in itertools.permutations(feature_cols, i) ]
```
See [my answer here](https://stackoverflow.com/a/68694374/8751871) for more details
| 6,186
|
63,096,451
|
I am pretty good at Python but couldn't figure out how to use the Mojang API with it.
I want to do something like `GET https://api.mojang.com/users/profiles/minecraft/<username>?at=<timestamp>` (from the API) but I can't figure out how! Does anyone know how to do this? I'm on Python 3.8.
<https://wiki.vg/Mojang_API#Username_-.3E_UUID_at_time>
|
2020/07/26
|
[
"https://Stackoverflow.com/questions/63096451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13996262/"
] |
It's pretty straightforward: just replace `<username>` with the person's username, and the response will give you their `uuid`.
Here is an example using **[`requests`](https://2.python-requests.org/en/master/)**:
```
import requests
username = 'KrisJelbring'
url = f'https://api.mojang.com/users/profiles/minecraft/{username}?'
response = requests.get(url)
uuid = response.json()['id']
print(uuid) #7125ba8b1c864508b92bb5c042ccfe2b
```
|
The documentation is relatively straightforward.
You want to send a `GET` request with their username:
```py
import requests
username = "Bob"
resp = requests.get(f"https://api.mojang.com/users/profiles/minecraft/{username}")
uuid = resp.json()["id"]
print(f"Bob's current UUID is {uuid}")
```
Optionally, you can include a UNIX timestamp to get the username's UUID at a specific point in time:
```py
username = "Alice"
# UNIX timestamp that equates to 01/01/2018
timestamp = 1514764800
resp = requests.get(f"https://api.mojang.com/users/profiles/minecraft/{username}?at={timestamp}")
uuid = resp.json()["id"]
print(f"Alice's UUID on 01/01/2018 was {uuid}")
```
Other Solution (third party library)
------------------------------------
Alternatively, you can use my newly released [Mojang package](https://github.com/summer/mojang), if you don't want to deal with the HTTP requests, JSON, and other web junk in your code.
Install it with pip by running the following console command:
```
python -m pip install mojang
```
Usage:
```py
from mojang import API
api = API()
uuid = api.get_uuid("Alice")
print(f"Alice's UUID is {uuid}")
# or with a timestamp
uuid = api.get_uuid("Alice", timestamp=1514764800)
print(f"Alice's UUID on 01/01/2018 was {uuid}")
```
| 6,187
|
65,073,434
|
I'm using the Mask\_RCNN package from this repo: `https://github.com/matterport/Mask_RCNN`.
I tried to train my own dataset using this package but it gives me an error at the beginning.
```
2020-11-30 12:13:16.577252: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-11-30 12:13:16.587017: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-11-30 12:13:16.587075: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (7612ade969e5): /proc/driver/nvidia/version does not exist
2020-11-30 12:13:16.587479: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-30 12:13:16.593569: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300000000 Hz
2020-11-30 12:13:16.593811: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1b2aa00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-30 12:13:16.593846: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "machines.py", line 345, in <module>
model_dir=args.logs)
File "/content/Mask_RCNN/mrcnn/model.py", line 1837, in __init__
self.keras_model = self.build(mode=mode, config=config)
File "/content/Mask_RCNN/mrcnn/model.py", line 1934, in build
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 926, in __call__
input_list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1117, in _functional_construction_call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 904, in call
self._check_variables(created_variables, tape.watched_variables())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 931, in _check_variables
raise ValueError(error_str)
ValueError:
The following Variables were created within a Lambda layer (anchors)
but are not tracked by said layer:
<tf.Variable 'anchors/Variable:0' shape=(1, 261888, 4) dtype=float32>
The layer cannot safely ensure proper Variable reuse across multiple
calls, and consquently this behavior is disallowed for safety. Lambda
layers are not well suited to stateful computation; instead, writing a
subclassed Layer is the recommend way to define layers with
Variables.
```
I looked up the part of the code responsible for the problem (located in file `/mrcnn/model.py`, line 1935 in the repo):
`IN[0]: anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)`
If anyone has an idea how to solve it, or has already solved it, please share the solution.
|
2020/11/30
|
[
"https://Stackoverflow.com/questions/65073434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11919189/"
] |
Go to mrcnn/model.py and add:
```
class AnchorsLayer(KL.Layer):
    def __init__(self, anchors, name="anchors", **kwargs):
        super(AnchorsLayer, self).__init__(name=name, **kwargs)
        self.anchors = tf.Variable(anchors)

    def call(self, dummy):
        return self.anchors

    def get_config(self):
        config = super(AnchorsLayer, self).get_config()
        return config
```
Then find the line:
```
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
```
and replace it with:
```
anchors = AnchorsLayer(anchors, name="anchors")(input_image)
```
Works like a charm in TF 2.4!
|
ROOT CAUSE:
The behavior of the Keras Lambda layer changed between TensorFlow 1.X and TensorFlow 2.X.
In Keras under TensorFlow 1.X, all tf.Variable and tf.get\_variable calls were automatically tracked into layer.weights via the variable creator context, so they received gradients and were trainable automatically. That approach caused problems with autograph compilation, which converts Python code into an execution graph in TensorFlow 2.X, so it was removed, and the Lambda layer now contains code that checks for variable creation and raises the error you see. In short, a Lambda layer in TensorFlow 2.X has to be stateless. If you want to create a variable, the correct way in TensorFlow 2.X is to subclass the layer class and add a trainable weight as a class member.
SOLUTIONS:
There are 2 choices:
1. Change to use TensorFlow 1.X. This error will not be raised.
2. Replace the Lambda layer with a subclass of the Keras Layer class:
```
class AnchorsLayer(tensorflow.keras.layers.Layer):
    def __init__(self, anchors):
        super(AnchorsLayer, self).__init__()
        self.anchors_v = tf.Variable(anchors)

    def call(self, inputs):
        return self.anchors_v

# Then replace the Lambda call with this:
anchors_layer = AnchorsLayer(anchors)
anchors = anchors_layer(input_image)
```
| 6,189
|
37,848,815
|
I am developing a web app using Flask under Python 3. I have a problem with the PostgreSQL enum type on db migrate/upgrade.
I added a column "status" to the model:
```
class Banner(db.Model):
    ...
    status = db.Column(db.Enum('active', 'inactive', 'archive', name='banner_status'))
    ...
```
The migration generated by `python manage.py db migrate` is:
```
from alembic import op
import sqlalchemy as sa
def upgrade():
    op.add_column('banner', sa.Column('status', sa.Enum('active', 'inactive', 'archive', name='banner_status'), nullable=True))

def downgrade():
    op.drop_column('banner', 'status')
```
And when I do `python manage.py db upgrade` I get an error:
```
...
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "banner_status" does not exist
LINE 1: ALTER TABLE banner ADD COLUMN status banner_status
[SQL: 'ALTER TABLE banner ADD COLUMN status banner_status']
```
**Why does the migration not create the type "banner\_status"?**
**What am I doing wrong?**
```
$ pip freeze
alembic==0.8.6
Flask==0.10.1
Flask-Fixtures==0.3.3
Flask-Login==0.3.2
Flask-Migrate==1.8.0
Flask-Script==2.0.5
Flask-SQLAlchemy==2.1
itsdangerous==0.24
Jinja2==2.8
Mako==1.0.4
MarkupSafe==0.23
psycopg2==2.6.1
python-editor==1.0
requests==2.10.0
SQLAlchemy==1.0.13
Werkzeug==0.11.9
```
|
2016/06/16
|
[
"https://Stackoverflow.com/questions/37848815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2111562/"
] |
I solved this problem by editing the generated migration by hand.
The migration now looks like this:
```
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
def upgrade():
    banner_status = postgresql.ENUM('active', 'inactive', 'archive', name='banner_status')
    banner_status.create(op.get_bind())
    op.add_column('banner', sa.Column('status', sa.Enum('active', 'inactive', 'archive', name='banner_status'), nullable=True))

def downgrade():
    op.drop_column('banner', 'status')
    banner_status = postgresql.ENUM('active', 'inactive', 'archive', name='banner_status')
    banner_status.drop(op.get_bind())
```
And now `python manage.py db upgrade/downgrade` executes successfully.
|
I think this way is simpler:
```
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
def upgrade():
    # ... other columns ...
    banner_status = postgresql.ENUM('active', 'inactive', 'archive',
                                    name='banner_status', create_type=False)
    banner_status.create(op.get_bind(), checkfirst=True)
    op.add_column('banner', sa.Column('status', banner_status, nullable=False))
```
Also add the corresponding `postgresql.ENUM` handling to your `downgrade()` function if needed.
| 6,190
|
12,576,734
|
I have a list which needs to be alphabetized (ignoring case) and printed with spaces and "+" separating each element in the list. Here's my code:
```
#!/usr/bin/python3.2
fruit = ['A', 'banana', 'Watermelon', 'mango'] #list containing fruits name
for diet in sorted(fruit):
    print(diet)
```
This prints each fruit on its own line. I want my result to be like this:
```
A + banana + mango + Watermelon
```
How do I achieve this result? Thanks!
|
2012/09/25
|
[
"https://Stackoverflow.com/questions/12576734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1687755/"
] |
```
In [10]: fruit = ['A', 'banana', 'Watermelon', 'mango']
In [11]: ' + '.join(sorted(fruit, key=str.lower))
Out[11]: 'A + banana + mango + Watermelon'
```
For more details visit:
<http://docs.python.org/library/stdtypes.html#str.join>
<http://wiki.python.org/moin/HowTo/Sorting/>
|
```
print(" + ".join(sorted(fruit, key=str.lower)))
```
| 6,191
|
39,505,630
|
This is my first question on Stack Overflow, so bear with me, please.
I am trying to automatically download (i.e. scrape) the text of some Italian laws from the website [http://www.normattiva.it/](http://www.normattiva.it)
I am using the code below (and similar permutations):
```
import requests, sys
debug = {'verbose': sys.stderr}
user_agent = {'User-agent': 'Mozilla/5.0', 'Connection':'keep-alive'}
url = 'http://www.normattiva.it/atto/caricaArticolo?art.progressivo=0&art.idArticolo=1&art.versione=1&art.codiceRedazionale=047U0001&art.dataPubblicazioneGazzetta=1947-12-27&atto.tipoProvvedimento=COSTITUZIONE&art.idGruppo=1&art.idSottoArticolo1=10&art.idSottoArticolo=1&art.flagTipoArticolo=0#art'
r = requests.session()
s = r.get(url, headers=user_agent)
#print(s.text)
print(s.url)
print(s.headers)
print(s.request.headers)
```
As you can see I am trying to load the "**caricaArticolo**" query.
However, the output is a page saying that my search is invalid (***"session is not valid or expired"***)
It seems that the page recognizes that I am not using a browser and loads a "breakout" javascript function.
```
<body onload="javascript:breakout();">
```
I tried to use "browser" simulator python scripts such as **selenium** and **robobrowser** but the result is the same.
Is there anyone willing to spend 10 minutes looking at the page output to help?
|
2016/09/15
|
[
"https://Stackoverflow.com/questions/39505630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6696413/"
] |
Once you click any link on the page with dev tools open, under the doc tab under Network:
[](https://i.stack.imgur.com/orZHr.png)
You can see three links; the first is what we clicked on, the second returns the html that allows you to jump to a specific *Article*, and the last contains the article text.
In the source returned from the first link, you can see two *iframe* tags:
```
<div id="alberoTesto">
<iframe
src="/atto/caricaAlberoArticoli?atto.dataPubblicazioneGazzetta=2016-08-31&atto.codiceRedazionale=16G00182&atto.tipoProvvedimento=DECRETO LEGISLATIVO"
name="leftFrame" scrolling="auto" id="leftFrame" title="leftFrame" height="100%" style="width: 285px; float:left;" frameborder="0">
</iframe>
<iframe
src="/atto/caricaArticoloDefault?atto.dataPubblicazioneGazzetta=2016-08-31&atto.codiceRedazionale=16G00182&atto.tipoProvvedimento=DECRETO LEGISLATIVO"
name="mainFrame" id="mainFrame" title="mainFrame" height="100%" style="width: 800px; float:left;" scrolling="auto" frameborder="0">
</iframe>
```
The first is for the Article tree; the latter, with */caricaArticoloDefault* and the *id* *mainFrame*, is what we want.
You need to use the cookies from the initial request, which you can do with the *Session* object, parsing the pages using [bs4](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next):
```
import requests, sys
import os
from urlparse import urljoin
from bs4 import BeautifulSoup
import io

user_agent = {'User-agent': 'Mozilla/5.0', 'Connection': 'keep-alive'}

url = 'http://www.normattiva.it/atto/caricaArticolo?art.progressivo=0&art.idArticolo=1&art.versione=1&art.codiceRedazionale=047U0001&art.dataPubblicazioneGazzetta=1947-12-27&atto.tipoProvvedimento=COSTITUZIONE&art.idGruppo=1&art.idSottoArticolo1=10&art.idSottoArticolo=1&art.flagTipoArticolo=0#art'

with requests.session() as s:
    s.headers.update(user_agent)
    r = s.get("http://www.normattiva.it/")
    soup = BeautifulSoup(r.content, "lxml")
    # get all the links from the initial page
    for a in soup.select("div.testo p a[href^=http]"):
        soup = BeautifulSoup(s.get(a["href"]).content)
        # The link to the text is in an iframe tag returned from the previous get.
        text_src_link = soup.select_one("#mainFrame")["src"]
        # Pick something to make the names unique
        with io.open(os.path.basename(text_src_link), "w", encoding="utf-8") as f:
            # The text is in a pre tag inside the div with the wrapper_pre class
            text = BeautifulSoup(s.get(urljoin("http://www.normattiva.it", text_src_link)).content, "html.parser")\
                .select_one("div.wrapper_pre pre").text
            f.write(text)
```
A snippet of the first text file:
```
IL PRESIDENTE DELLA REPUBBLICA
Visti gli articoli 76, 87 e 117, secondo comma, lettera d), della
Costituzione;
Vistala legge 28 novembre 2005, n. 246 e, in particolare,
l'articolo 14:
comma 14, cosi' come sostituito dall'articolo 4, comma 1, lettera
a), della legge 18 giugno 2009, n. 69, con il quale e' stata
conferita al Governo la delega ad adottare, con le modalita' di cui
all'articolo 20 della legge 15 marzo 1997, n. 59, decreti legislativi
che individuano le disposizioni legislative statali, pubblicate
anteriormente al 1° gennaio 1970, anche se modificate con
provvedimenti successivi, delle quali si ritiene indispensabile la
permanenza in vigore, secondo i principi e criteri direttivi fissati
nello stesso comma 14, dalla lettera a) alla lettera h);
comma 15, con cui si stabilisce che i decreti legislativi di cui
al citato comma 14, provvedono, altresi', alla semplificazione o al
riassetto della materia che ne e' oggetto, nel rispetto dei principi
e criteri direttivi di cui all'articolo 20 della legge 15 marzo 1997,
n. 59, anche al fine di armonizzare le disposizioni mantenute in
vigore con quelle pubblicate successivamente alla data del 1° gennaio
1970;
comma 22, con cui si stabiliscono i termini per l'acquisizione del
prescritto parere da parte della Commissione parlamentare per la
semplificazione;
Visto il decreto legislativo 30 luglio 1999, n. 300, recante
riforma dell'organizzazione del Governo, a norma dell'articolo 11
della legge 15 marzo 1997, n. 59 e, in particolare, gli articoli da
20 a 22;
```
|
Wonderful, wonderful, wonderful, Padraic. It works. I just had to edit it slightly to fix the imports, but it works wonderfully. Thanks very much. I am just discovering Python's potential and you have made my journey much easier with this specific task. I would not have solved it alone.
```
import requests, sys
import os
from urllib.parse import urljoin
from bs4 import BeautifulSoup
import io

user_agent = {'User-agent': 'Mozilla/5.0', 'Connection': 'keep-alive'}

url = 'http://www.normattiva.it/atto/caricaArticolo?art.progressivo=0&art.idArticolo=1&art.versione=1&art.codiceRedazionale=047U0001&art.dataPubblicazioneGazzetta=1947-12-27&atto.tipoProvvedimento=COSTITUZIONE&art.idGruppo=1&art.idSottoArticolo1=10&art.idSottoArticolo=1&art.flagTipoArticolo=0#art'

with requests.session() as s:
    s.headers.update(user_agent)
    r = s.get("http://www.normattiva.it/")
    soup = BeautifulSoup(r.content, "lxml")
    # get all the links from the initial page
    for a in soup.select("div.testo p a[href^=http]"):
        soup = BeautifulSoup(s.get(a["href"]).content)
        # The link to the text is in an iframe tag returned from the previous get.
        text_src_link = soup.select_one("#mainFrame")["src"]
        # Pick something to make the names unique
        with io.open(os.path.basename(text_src_link), "w", encoding="utf-8") as f:
            # The text is in a pre tag inside the div with the wrapper_pre class
            text = BeautifulSoup(s.get(urljoin("http://www.normattiva.it", text_src_link)).content, "html.parser")\
                .select_one("div.wrapper_pre pre").text
            f.write(text)
```
| 6,192
|
61,209,143
|
Python : 3.7.6
rpy2: 3.2.7
R: 3.3.3
I’m using GCE on AI Platform to perform some clustering.
I’ve installed r-base, updated properly, and installed python-rpy2, and I’m getting this error.
```py
import rpy2.robjects as robjects
error: symbol 'R_tryCatchError' not found in library '/usr/lib/R/lib/libR.so': /usr/lib/R/lib/libR.so: undefined symbol: R_tryCatchError
```
Someone could help me?
|
2020/04/14
|
[
"https://Stackoverflow.com/questions/61209143",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13214910/"
] |
Here's a one-liner to update it without mutating the original array:
```js
const updatedPersons = persons.map(p => p.id === id ? {...p, lastname} : p);
```
|
```
persons.forEach(person => {
    if (this.id === person.id) person.lastname = this.lastname
})
```
| 6,193
|
14,693,256
|
I've just finished installing mod\_wsgi but I'm having problems starting my Pyramid application.
I'm using Python 2.7, Apache 2.2.3, and mod\_wsgi 3.4 on CentOS 5.8.
Here is my httpd.conf file:
```
WSGISocketPrefix run/wsgi

<VirtualHost *:80>
    ServerName myapp.domain.com
    ServerAlias myapp

    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIDaemonProcess pyramid user=apache group=apache processes=1 threads=4 \
        python-path=/var/wsgi_sites/site-packages
    WSGIScriptAlias / /var/wsgi_sites/myapp/apache.wsgi

    <Directory /var/wsgi_sites/myapp>
        WSGIProcessGroup pyramid
        Order allow,deny
        Allow from all
    </Directory>

    LogLevel debug
    ErrorLog /var/log/httpd/myapp_error
</VirtualHost>
```
I have given Apache ownership of the site-packages, python-eggs and myapp folders.
The module which creates the WSGI application, apache.wsgi, contains the following code:
```
import os
os.environ['PYTHON_EGG_CACHE'] = '/var/wsgi_sites/python-eggs'
from pyramid.paster import get_app
application = get_app('/var/wsgi_sites/myapp/development.ini','main')
```
When I restart Apache and try to access the application, I get the following error:
```
mod_wsgi (pid=14842, process='pyramid', application=''): Loading WSGI script '/var/wsgi_sites/myapp/apache.wsgi'.
mod_wsgi (pid=14842): Target WSGI script '/var/wsgi_sites/myapp/apache.wsgi' cannot be loaded as Python module.
mod_wsgi (pid=14842): Exception occurred processing WSGI script '/var/wsgi_sites/myapp/apache.wsgi'.
Traceback (most recent call last):
File "/var/wsgi_sites/myapp/apache.wsgi", line 4, in ?
from pyramid.paster import get_app
File "/var/wsgi_sites/site-packages/pyramid-1.3.2-py2.7.egg/pyramid/__init__.py", line 1, in ?
from pyramid.request import Request
File "/var/wsgi_sites/site-packages/pyramid-1.3.2-py2.7.egg/pyramid/request.py", line
class Request(BaseRequest, DeprecatedRequestMethodsMixin, URLMethodsMixin,
^
SyntaxError: invalid syntax
```
I tried looking at the request.py file but there are no syntax errors.
|
2013/02/04
|
[
"https://Stackoverflow.com/questions/14693256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1210711/"
] |
Often when you get syntax errors the preceding line is the culprit. Looking at [the Pyramid source](https://github.com/Pylons/pyramid/blob/master/pyramid/request.py) we see that the preceding line is:
```
@implementer(IRequest)
```
This is a class-decorator. Class-decorators were added to Python in version 2.6. The default version of Python on CentOS 5.8 is 2.4.
Your solution is to either:
1. use an OS with a more recent version of Python, or
2. ensure that your Pyramid application uses version 2.7. This involves installing Python 2.7 **in addition to** the system's default Python installation which is used by other applications and must be left alone.
If you choose to install 2.7 you will do something like the following:
```
$ wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2
$ tar xf Python-2.7.3.tar.bz2
$ cd Python-2.7.3
$ ./configure --prefix=/usr/local
$ make && make altinstall
```
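One hedged caveat worth adding: mod\_wsgi is compiled against a specific Python, so after installing 2.7 alongside the system Python you will also need to rebuild mod\_wsgi against it (its `configure` script accepts `--with-python=/usr/local/bin/python2.7`).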
|
I see two different wsgi files mentioned in the error
/var/wsgi\_sites/project\_name\_api/apache.wsgi
and
/var/wsgi\_sites/myapp/apache.wsgi
I cannot see any project\_name path reference in the httpd.conf that you pasted.
You might want to start by reviewing that. If the problem persists, please post additional information so we can help you further.
| 6,195
|
47,747,532
|
I'm confronted with a problem as below and hoping somebody could give some advice.
I need to convert a lot of Excel tables of different shapes into structured data; the Excel tables look like this:
```
|--------------------|----|----|
|user:Sam | | |
|--------------------|----|----|
|mail:sam@example.com| | |
|-------|----------------|-----|
|user |Jack | |
|-------|----------------|-----|
|mail |jack@example.com| |
|-------|----------------|-----|
|-------|-----|---------------|---------|
|user |May | | |
|-------|-----|---------------|---------|
| |mail |may@example.com| |
|-------|-----|---------------|---------|
|user | Alex |mail |alex@example.com|
```
The target result would be like the following format.
```
|-------|-------------------|
|user | email |
|-------|-------------------|
|Jack | jack@example.com |
|-------|-------------------|
|Sam | sam@example.com |
|-------|-------------------|
|Alex | alex@example.com |
|-------|-------------------|
|May | may@example.com |
|-------|-------------------|
```
My current solution is to define a function for each type of Excel table. But there would be thousands of different Excel files, so I would have to repeatedly write similar code. My question is whether there is a common solution for this.
I found one [similar question](https://stackoverflow.com/questions/32089023/looking-for-a-little-python-machine-learning-advice) about this but there is no more information. I think machine learning may help to solve the problem, but I know little about that. Is there anyone who could share some thoughts?
Thanks very much!
|
2017/12/11
|
[
"https://Stackoverflow.com/questions/47747532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8226471/"
] |
Looking at the patterns you have provided in your question, we see that the data is sometimes in a separate cell and other times encoded in the text with a ':' separator. I'd flatten it out and parse the assembled text for a linear pattern.
I suggest you read the excel file using something like [xlrd](https://pypi.python.org/pypi/xlrd).
Then step through the cells pulling out the text and parse out the fields you are interested in.
```
<cell>'user'<cell|':'>user_name<cell>'mail'<cell|':'>email_address<cell>
```
where `<cell>` is one or more cell boundaries, possibly spread over rows.
Once you have the user email pairs you can write them out using [xlwt](https://pypi.python.org/pypi/xlwt).
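A hedged sketch of that flatten-and-parse idea (assumes xlrd and that each sheet lists `user`/`mail` tokens in reading order; `extract_users` and the token rules are illustrative, not a fixed recipe):
```
import re
import xlrd

def extract_users(path):
    sheet = xlrd.open_workbook(path).sheet_by_index(0)
    tokens = []
    for r in range(sheet.nrows):
        for c in range(sheet.ncols):
            cell = str(sheet.cell_value(r, c)).strip()
            if cell:
                # split "user:Sam" style cells into two tokens
                tokens.extend(t for t in re.split(r':\s*', cell) if t)
    rows, current = [], {}
    for key, value in zip(tokens, tokens[1:]):
        if key in ('user', 'mail'):
            current[key] = value
        if 'user' in current and 'mail' in current:
            rows.append((current.pop('user'), current.pop('mail')))
    return rows
```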
|
You have 4 types of files.
If that is all, you can write one function with 4 if statements:
```
def table_sort(file):
    if file == condition:
        extract_data_this_way()
    elif file == other_condition:
        extract_data_this_way()
    elif file == another_condition:
        extract_data_this_way()
    else:
        extract_data_this_way()
```
If you use pandas to do this it will make it a lot easier to code.
If you have a lot of files, you can pass in a list and use a for loop to iterate, or use glob to load all Excel files in a directory and loop that way.
| 6,196
|
18,314,228
|
I have a list of strings and I would like to split that list into different "sublists" based on the character length of the words in the list, e.g.:
```
List = ['a', 'bb', 'aa', 'ccc', 'dddd']
Sublist1 = ['a']
Sublist2 = ['bb', 'aa']
Sublist3 = ['ccc']
Sublist4 = ['dddd']
```
How can I achieve this in Python?
Thank you
|
2013/08/19
|
[
"https://Stackoverflow.com/questions/18314228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/413734/"
] |
I think you should use dictionaries
```
>>> dict_sublist = {}
>>> for el in List:
...     dict_sublist.setdefault(len(el), []).append(el)
...
>>> dict_sublist
{1: ['a'], 2: ['bb', 'aa'], 3: ['ccc'], 4: ['dddd']}
```
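If you then need the sublists ordered by length, `[dict_sublist[k] for k in sorted(dict_sublist)]` gives you a list of lists.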
|
Assuming you're happy with a list of lists, indexed by length, how about something like
```
by_length = []
for word in List:
    wl = len(word)
    while len(by_length) <= wl:   # note <=, so index wl exists
        by_length.append([])
    by_length[wl].append(word)

print("The words of length 3 are %s" % by_length[3])
```
| 6,197
|
29,997,120
|
I have a problem with a little server-client assignment in python 2.7.
The client can send 5 types of requests to the server:
1. get the server's IP
2. get contents of a directory on the server
3. run cmd command on the server and get the output
4. open a calculator on the server
5. disconnect
This is the error I get:
```
error:
msg_type, data_len = unpack("BH", client_structs[:3])
struct.error: unpack requires a string argument of length 4
```
Code:
```
client_structs = client_soc.recv(1024)
msg_type, data_len = unpack("BH", client_structs[:3])
```
Doesn't the substring contain 4 chars including the null?
I would appreciate an explanation of this error, plus how to solve it.
Entire server code:
```
__author__ = 'eyal'
from struct import pack, unpack, calcsize
import socket
from os import listdir
from subprocess import check_output, call
def server():
    ser_soc = socket.socket()
    ser_soc.bind(("0.0.0.0", 8080))
    ser_soc.listen(1)
    while True:
        accept_flag = raw_input("Would you like to wait for a client? (y/n) ")
        if accept_flag == "y":
            client_soc, client_address = ser_soc.accept()
            while True:
                client_structs = client_soc.recv(1024)
                data_size = calcsize(client_structs) - 3
                data_str = 'c' * data_size
                unpacked_data = unpack("BH" + data_str, client_structs)
                if unpacked_data[0] == 1:
                    ip = socket.gethostbyname(socket.gethostname())
                    ip_data = 'c' * len(ip)
                    to_send = pack("BH" + str(len(ip)) + ip_data, unpacked_data[0], len(ip), ip)
                elif unpacked_data[0] == 2:
                    content = listdir(str(unpacked_data[2]))
                    content_str = "\r\n".join(content)
                    content_data = 'c' * len(content_str)
                    to_send = pack("BH" + str(len(content_str)) + content_data, unpacked_data[0],
                                   len(content_str), content_str)
                elif unpacked_data[0] == 3:
                    command = str(unpacked_data[2:]).split()
                    output = check_output(command)
                    message_data = 'c' * len(output)
                    to_send = pack("BH" + message_data, unpacked_data[0], len(output), output)
                elif unpacked_data[0] == 4:
                    call("gnome-calculator")
                    msg_data = 'c' * len("The calculator is open.")
                    to_send = pack("BH" + msg_data, unpacked_data[0], len("The calculator is open."),
                                   "The calculator is open.")
                elif unpacked_data[0] == 5:
                    client_soc.close()
                    break
                else:
                    to_send = pack("BH" + 'c' * len("invalid message type, try again"),
                                   unpacked_data[0], len("invalid message type, try again"),
                                   "invalid message type, try again")
                if unpacked_data[0] != 5:
                    client_soc.send(to_send)
        else:
            break
    ser_soc.close()

def main():
    server()

if __name__ == "__main__":
    main()
```
Entire client code:
```
__author__ = 'eyal'
from struct import pack, unpack, calcsize
import socket
def client():
    my_soc = socket.socket()
    my_soc.connect(("127.0.0.1", 8080))
    while True:
        send_flag = raw_input("Would you like to send the server a request? (y/n) ")
        if send_flag == "y":
            msg_code = input("What type of request would you like to send?\n"
                             "1. Get the server's IP address.\n"
                             "2. Get content of a directory on the server.\n"
                             "3. Run a terminal command on the server and get the output.\n"
                             "4. Open a calculator on the server.\n"
                             "5. Disconnect from the server.\n"
                             "Your choice: ")
            if msg_code == 1 or msg_code == 4 or msg_code == 5:
                to_send = pack("BH", msg_code, 0)
            elif msg_code == 2:
                path = raw_input("Enter path of wanted directory to get content of: ")
                to_send = pack("BH" + 'c' * len(path), msg_code, len(path), path)
            elif msg_code == 3:
                command = raw_input("Enter the wanted terminal command, including arguments: ")
                to_send = pack("BH" + 'c' * len(command), msg_code, len(command), command)
            else:
                print "Invalid message code, try again\n"
            if 1 <= msg_code <= 5:
                my_soc.send(to_send)
        else:
            break
        data = my_soc.recv(1024)
        unpacked_data = unpack("BH" + 'c' * (calcsize(data) - 3), data)
        print "The server's response to your type-" + str(msg_code) + " request:"
        print unpacked_data[2]
    my_soc.close()

def main():
    client()

if __name__ == "__main__":
    main()
```
|
2015/05/02
|
[
"https://Stackoverflow.com/questions/29997120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3554255/"
] |
In code shown in your question:
```
HashTable::HashTable(int buckets) {
    this->buckets = buckets;
    vector<Entry>* table = new vector<Entry>[buckets];
}
```
you create a local variable `table` which is a pointer to `vector<Entry>` and then leak that memory. Then in `HashTable::insert` you try to access member variable `table` which is uninitialized.
|
```
HashTable::HashTable(int buckets) {
    this->buckets = buckets;
    vector<Entry>* table = new vector<Entry>[buckets]; // this table is local to this function; also a memory leak
}
```
As I can see in your `HashTable` constructor, you are initializing a local `vector<Entry>* table` inside the constructor.
```
Entry HashTable::insert(GameBoard board, int number) {
    int index = compress(board.hashCode());
    Entry entry = Entry(board, number);
    table[index].push_back(entry);
    return entry;
}
```
and I can see you are calling `push_back()` on some other `table` in the `insert` method. The seg fault occurs because you are calling `push_back` on an uninitialized `table`.
Do you have a `vector<Entry>* table` member in your `HashTable` class? If you do, change your `HashTable::HashTable(int buckets)` to initialize that table as shown below.
```
HashTable::HashTable(int buckets) {
    this->buckets = buckets;
    table = new vector<Entry>[buckets]; // Init class attribute `vector<Entry>* table`
}
```
If you don't have a `vector<Entry>* table` member in your `HashTable` class, add it to the class and use the `HashTable::HashTable(int buckets)` above.
This will resolve your issue.
| 6,203
|
54,233,559
|
I am generating a doc using the python-docx module.
I want to bold a specific cell of a row in python-docx.
Here is the code:
```
book_title = '\n-:\n {}\n\n'.format(book_title)
book_desc = '-: {}\n\n:\n{}\n\n :\n{}'.format(book.author,book_description,sales_point)
row1.cells[1].text = (book_title + book_desc)
```
I just want to bold the book\_title.
If I apply a style, it automatically applies to the whole document.
|
2019/01/17
|
[
"https://Stackoverflow.com/questions/54233559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6892109/"
] |
Here is how I understand it:
A paragraph holds the run objects, and styles (bold, italic) are attributes of a run.
So following this logic, here is what might solve your question:
```
row1.cells[0].paragraphs[0].add_run(book_title + book_desc).bold = True
```
This is just an example for the first cell of the table. Please amend it in your code.
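To bold only the title, which is what the question asks for, a small sketch (hedged: it reuses `row1`, `book_title` and `book_desc` from the question) that adds the title and description as separate runs:
```
cell = row1.cells[1]
paragraph = cell.paragraphs[0]
paragraph.add_run(book_title).bold = True  # only the title run is bold
paragraph.add_run(book_desc)               # the description stays regular
```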
|
Since you are using the docx module, you can style your text/paragraph by explicitly defining the style.
In order to apply a style, use the following code snippet referenced from docx documentation [here](https://python-docx.readthedocs.io/en/latest/user/styles-using.html#apply-a-style).
```
>>> from docx import Document
>>> document = Document()
>>> style = document.styles['Normal']
>>> font = style.font
>>> font.bold= True
```
This will change the font to bold for every paragraph that uses the 'Normal' style.
| 6,204
|
10,216,019
|
So I was developing an app in Django and needed a function from version 1.4, so I decided to update.
But then a weird error appeared when I ran `syncdb`.
I am using the new `manage.py`, and as you can see it creates some of the tables but then fails:
```
./manage.py syncdb
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Traceback (most recent call last):
File "./manage.py", line 9, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 371, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/commands/syncdb.py", line 91, in handle_noargs
sql, references = connection.creation.sql_create_model(model, self.style, seen_models)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/db/backends/creation.py", line 44, in sql_create_model
col_type = f.db_type(connection=self.connection)
TypeError: db_type() got an unexpected keyword argument 'connection'
```
|
2012/04/18
|
[
"https://Stackoverflow.com/questions/10216019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1092459/"
] |
I had the same issue; the definition of my custom field was missing the connection parameter:
```
from django.db import models

class BigIntegerField(models.IntegerField):
    def db_type(self, connection):
        return "bigint"
```
|
Although this is an old, answered and accepted question, I am adding my understanding because I was not using a customized type, and my error came from Django Evolution's `evolve --hint --execute` (not from `syncdb`). I think it may be helpful for someone in the future.
I am average in Python and new to Django. I also encountered this issue when I added some new features to my existing project. To add the new features I had to add some new fields of `models.CharField()` type, as follows.
```
included_domains = models.CharField(
    "set of comma(,) seprated list of domains in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
excluded_domains = models.CharField(
    "set of comma(,) seprated list of domains NOT in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
```
The Django version I am using is 1.3.1:
```
$ python -c "import django; print django.get_version()"
1.3.1 <--------# version
$python manage.py syncdb
Project signature has changed - an evolution is required
```
[Django Evolution:](https://code.google.com/p/django-evolution/#Django_Evolution) Django Evolution is an extension to Django that allows you to track changes in your models over time, and to update the database to reflect those changes.
```
$ python manage.py evolve --hint
#----- Evolution for messagingframework
from django_evolution.mutations import AddField
from django.db import models
MUTATIONS = [
    AddField('MessageConfiguration', 'excluded_domains', models.CharField, initial=u'', max_length=300),
    AddField('MessageConfiguration', 'included_domains', models.CharField, initial=u'', max_length=300)
]
#----------------------
Trial evolution successful.
Run './manage.py evolve --hint --execute' to apply evolution.
```
The trial was a success, and when I tried to apply the changes to the DB:
```
$ python manage.py evolve --hint --execute
Traceback (most recent call last):
File "manage.py", line 25, in <module>
execute_manager(settings)
File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 362, in execute_manager
utility.execute()
File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 303, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 195, in run_from_argv
self.execute(*args, **options.__dict__)
File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 222, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 60, in handle
self.evolve(*app_labels, **options)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 140, in evolve
database))
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 426, in mutate
return self.add_column(app_label, proj_sig, database)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 438, in add_column
sql_statements = evolver.add_column(model, field, self.initial)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/db/common.py", line 142, in add_column
f.db_type(connection=self.connection), # <=== here f is field class object
TypeError: db_type() got an unexpected keyword argument 'connection'
```
To understand this exception, note that it is similar to:
```
>>> def f(a):
...     print a
...
>>> f('b', b='a')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() got an unexpected keyword argument 'b'
>>>
```
So the function signature has been changed.
I had not added any *new customized or enum* fields, only two fields similar to ones already in the model, and char-type fields are supported by most databases (I am using PostgreSQL), yet I was still getting this error!
Then I read from @: [Russell Keith-Magee-4 Reply](http://python.6.x6.nabble.com/Django-1-4-TypeError-get-db-prep-value-got-an-unexpected-keyword-argument-connection-td4695088.html).
>
> What you've hit here is the end of the deprecation cycle for code that
> doesn't support multiple databases.
>
>
> In Django 1.2, we introduced multiple database support; in order to
> support this, the prototype for `get_db_preb_lookup()` and
> `get_db_prep_value()` was changed.
>
>
> For backwards compatibility, we added a shim that would transparently
> 'fix' these methods if they hadn't already been fixed by the
> developer.
>
>
> In Django 1.2, the usage of these shims raised a
> PendingDeprecationWarning. In Django 1.3, they raised a
> DeprecationWarning.
>
>
> Under Django 1.4, the shim code was been removed -- so any code that
> wasn't updated will now raise errors like the one you describe.
>
>
>
But I am not getting any DeprecationWarning, presumably because of the newer version of Django Evolution.
From the above quote I could understand that, to support multiple databases, the function signature was changed and an extra argument, `connection`, is needed. I also checked the `db_type()` signature in my installation of Django as follows:
```
/django$ grep --exclude-dir=".svn" -n 'def db_type(' * -R
contrib/localflavor/us/models.py:8: def db_type(self):
contrib/localflavor/us/models.py:24: def db_type(self):
:
:
```
I also referred to the Django documentation:
>
> ### [Field.db\_type(self, connection):](https://docs.djangoproject.com/en/dev/howto/custom-model-fields/#django.db.models.Field.db_type)
>
>
> Returns the *database column data type for the Field*, taking into account the connection
> object, and the settings associated with it.
>
>
>
Then I understood that to resolve this issue I had to inherit from the `models.Field` class and override the `db_type()` function. Because I am using [PostgreSQL, in which to create a 300-character field](http://www.postgresql.org/docs/9.1/static/datatype-character.html) I need to return `'char(300)'`. In my models.py I added:
```
class CharMaxlengthN(models.Field):
def db_type(self, connection):
return 'char(%d)' % self.max_length # because I am using postgresql
```
If you encounter a similar problem, please check your underlying DB's manual for which column type you need to create, and return that as a string.
Then I changed the definition of the new fields that I needed to add (see the comments):
```
included_domains = CharMaxlengthN( # <--Notice change
"set of comma(,) seprated list of domains in target emails",
default="",
max_length=it_len.EMAIL_LEN*5)
excluded_domains = CharMaxlengthN( # <-- Notice change
"set of comma(,) seprated list of domains NOT in target emails",
default="",
max_length=it_len.EMAIL_LEN*5)
```
Then I executed the same command that was failing previously:
```
$ python manage.py evolve --hint --execute
You have requested a database evolution. This will alter tables
and data currently in the None database, and may result in
IRREVERSABLE DATA LOSS. Evolutions should be *thoroughly* reviewed
prior to execution.
Are you sure you want to execute the evolutions?
Type 'yes' to continue, or 'no' to cancel: yes
Evolution successful.
```
I also checked my DB and tested my newly added features; everything now works perfectly, with no DB problems.
If you want to create an ENUM field, read [`Specifying a mySQL ENUM in a Django model`](https://stackoverflow.com/a/19040441/1673391).
**Edit:** I realized that instead of subclassing `models.Field` I should have inherited from the more specific subclass, `models.CharField`.
Similarly, I needed to create decimal DB fields, so I added the following class to my model:
```
class DecimalField(models.DecimalField):
def db_type(self, connection):
d = {
'max_digits': self.max_digits,
'decimal_places': self.decimal_places,
}
return 'numeric(%(max_digits)s, %(decimal_places)s)' % d
```
| 6,208
|
35,689,139
|
I am writing a simple web application where I want to print out a few Korean characters. Although I changed the encoding in the header, the web application, when opened in Chrome, prints out gibberish instead of regular Korean characters. I also changed my Chrome language settings to display Korean as well. Here's my code:
```
#!/usr/bin/env python
#-*- encoding: iso-8859-1 -*-
import cgi
import sys
form = cgi.FieldStorage()
print "Content-type: text/html; charset=iso-8859-1 "
print "Accept-Language: fi, en, ko"
print("Welcome")
print("환영")
print("Tervetuloa")
```
|
2016/02/28
|
[
"https://Stackoverflow.com/questions/35689139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4590749/"
] |
Change your encoding/charset to one that supports all the characters, for example by replacing both occurrences of `iso-8859-1` with `utf-8`. [UTF-8](https://en.wikipedia.org/wiki/UTF-8) can represent Korean characters and basically any writing system that exists.
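For instance, the question's script with both occurrences switched to `utf-8`, plus a blank line after the headers (a minimal sketch, assuming the source file itself is saved as UTF-8; CGI requires a blank line to separate headers from the body):
```
#!/usr/bin/env python
#-*- encoding: utf-8 -*-
import cgi
form = cgi.FieldStorage()
print "Content-type: text/html; charset=utf-8"
print "Accept-Language: fi, en, ko"
print ""  # blank line ends the HTTP headers
print("Welcome")
print("환영")
print("Tervetuloa")
```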
|
You can use the [korean package](https://pypi.python.org/pypi/korean) :
Example :
```
from korean import Noun
fmt = u'{subj:은} {obj:을} 먹었다.'
print fmt.format(subj=Noun(u'나'), obj=Noun(u'밥'))
print fmt.format(subj=Noun(u'학생'), obj=Noun(u'돈까스'))
```
Output :
```
나은 밥을 먹었다.
학생은 돈까스을 먹었다.
```
| 6,209
|
67,663,059
|
I'm testing my incomplete Kivy app to generate a suitable APK of it. Using Buildozer on Ubuntu I generate the APK, but it crashes right after starting on an Android device. Is the buildozer spec file the root cause (should I change something inside it?), or is it an incompatible-version issue?
Please share Kivy, KivyMD, Python and Buildozer versions that are compatible. The .py file runs fine in PyCharm with no errors.
|
2021/05/23
|
[
"https://Stackoverflow.com/questions/67663059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13331490/"
] |
Try using `kivy 2.0.0rc4`. Install it in plugins trough settings in pycharm. And your `buildozer.spec` should be like this:
```
requirements = python3,kivy==2.0.0rc4
```
|
Refer to the requirements specified in the buildozer.spec file for the KivyMD kitchen\_sink app in the repo.
This is the link -> [Kitchen\_Sink\_Repo](https://github.com/kivymd/KivyMD/blob/master/demos/kitchen_sink/buildozer.spec)
**Tip**
If, after changing the `requirements`, you still see your app crashing, run the following commands:
```
buildozer android clean
buildozer android debug deploy run
```
Why?
Because when Buildozer installs the earlier specified requirements, it is quite possible that it installs versions that do not match your app's specifications. So clean it and then run.
You should now be good to go.
| 6,210
|
60,389,566
|
I am new to multiprocessing and Python. From the documentation,
<https://docs.python.org/3/library/multiprocessing.html>
I was able to run the below code.
```
from multiprocessing import Pool
def f(x):
return x*x
if __name__ == '__main__':
with Pool(5) as p:
print(p.map(f, [1, 2, 3]))
```
But if `f(x)` is changed to `f(x,y,z)` such that
```
def f(x, y, z):
return x*y*z
```
What is the syntax to pass the 3 arguments to `f(x, y, z)` from the `p.map` method?
|
2020/02/25
|
[
"https://Stackoverflow.com/questions/60389566",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9066431/"
] |
Maybe you are looking for [`starmap`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap)?
>
> `starmap(func, iterable[, chunksize])`
>
>
> Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments.
>
>
> Hence an iterable of `[(1,2), (3, 4)]` results in `[func(1,2), func(3,4)]`.
>
>
> New in version 3.3.
>
>
>
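Adapting the question's `f(x, y, z)`, a minimal runnable sketch:
```py
from multiprocessing import Pool

def f(x, y, z):
    return x * y * z

if __name__ == '__main__':
    with Pool(5) as p:
        # Each tuple is unpacked into f's three positional arguments.
        print(p.starmap(f, [(1, 2, 3), (4, 5, 6), (7, 8, 9)]))
        # -> [6, 120, 504]
```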
|
Use [`p.starmap()`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap), it's meant for exactly this case.
| 6,211
|
70,074,369
|
I have a list of words like this:
```
word_list=[{"word": "python",
"repeted": 4},
{"word": "awsome",
"repeted": 3},
{"word": "frameworks",
"repeted": 2},
{"word": "programing",
"repeted": 2},
{"word": "stackoverflow",
"repeted": 2},
{"word": "work",
"repeted": 1},
{"word": "error",
"repeted": 1},
{"word": "teach",
"repeted": 1}
]
```
,that comes from another list of notes:
```
note_list = [{"note_id":1,
"note_txt":"A curated list of awesome Python frameworks"},
{"note_id":2,
"note_txt":"what is awesome Python frameworks"},
{"note_id":3,
"note_txt":"awesome Python is good to wok with it"},
{"note_id":4,
"note_txt":"use stackoverflow to lern programing with python is awsome"},
{"note_id":5,
"note_txt":"error in programing is good to learn"},
{"note_id":6,
"note_txt":"stackoverflow is very useful to share our knoloedge"},
{"note_id":7,
"note_txt":"teach, work"},
]
```
I want to know how can I map every word to its own note:
```
maped_list=[{"word": "python",
"notes_ids": [1,2,3,4]},
{"word": "awsome",
"notes_ids": [1,2,3]},
{"word": "frameworks",
"notes_ids": [1,2]},
{"word": "programing",
"notes_ids": [4,5]},
{"word": "stackoverflow",
"notes_ids": [4,6]},
{"word": "work",
"notes_ids": [7]},
{"word": "error",
"notes_ids": [5]},
{"word": "teach",
"notes_ids": [7]}
]
```
my work:
```
# i started by appending all the notes text into one list
notes_test = []
for note in note_list:
notes_test.append(note['note_txt'])
# calculate the reptition of each word
dict = {}
for sentence in notes_test:
for word in re.split('\s', sentence): # split with whitespace
try:
dict[word] += 1
except KeyError:
dict[word] = 1
word_list= []
for key in dict.keys():
word = {}
word['word'] = key
word['repeted'] = dict[key]
word_list.append(word)
```
my question:
1. how can I map the word list and note list to get the mapped list (see the sketch after this list)
2. how do you find the quality of my code, any remarks
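For question 1, a minimal sketch that reuses the same whitespace tokenization as the counting code. Note it matches exact tokens only, so it will not reproduce the expected output for the spelling variants in the sample data ("awsome" vs "awesome") or for punctuation ("teach," vs "teach"); that would need extra normalization:
```
maped_list = []
for entry in word_list:
    word = entry["word"]
    # collect the ids of all notes whose tokens contain this exact word
    notes_ids = [note["note_id"]
                 for note in note_list
                 if word in note["note_txt"].lower().split()]
    maped_list.append({"word": word, "notes_ids": notes_ids})
```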
|
2021/11/23
|
[
"https://Stackoverflow.com/questions/70074369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17483739/"
] |
numpy broadcasting is so useful here:
```py
bm = df_other.values[:, 0] == df.values
```
Output:
```py
>>> bm
array([[ True, True, False, False, False],
[False, False, True, False, False],
[False, False, False, True, False]])
```
If you need it as ints:
```py
>>> bm.astype(int)
array([[1, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0]])
```
|
Another way to do this using pandas methods are as follows:
```
pd.crosstab(df_other['a'], df_other['c']).reindex(df['a']).to_numpy(dtype=int)
```
Output:
```
array([[1, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0]])
```
| 6,212
|
63,759,451
|
I'm trying to check the validity of comma-separated strings in Python. That is, it's possible that the strings contain mistakes where more than one comma is used.
Here is a valid string:
```
foo = "a, b, c, d, e"
```
This is a valid string as it is comma-delimited; only one comma, not several or spaces only.
Here is an invalid string:
```
invalid = "a,, b, c,,,,d, e,,; f, g"
```
The invalid string is invalid because (1) it uses more than one comma and (2) it also uses a semicolon `;`.
What would be the most effective way to check that the strings are valid?
My first attempt was to try something like:
```
def check_valid_string(input_string):
if ",," in input_string or ";" in input_string:
return "Not valid" ## or False
else:
return "Valid" ## or True
```
however, it's not clear that this would catch all possible invalid strings. It's also not clear to me that this approach is the most computationally efficient (i.e. quick).
|
2020/09/05
|
[
"https://Stackoverflow.com/questions/63759451",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5269850/"
] |
It appears the best way to accomplish this is with regex:
Here is a valid string:
```
valid = "a, b, c, foo, bar, dog, cat"
```
Here are various invalid strings:
```
## invalid1 is invalid as it contains multiple , i.e. `,,` and :
invalid1 = "a,, b, c,,,,d, e,,; f, g"
## invalid2 is invalid as it contains `, ,`
invalid2 = "a b, ,c, d, e"
## invalid3 is invalid as it contains spaces between strings
invalid3 = "a, b, d, elephant, f g"
```
Here is the regex to check whether the string is valid:
```
import re
pattern = re.compile(r"^(\w+)(,\s*\w+)*$")
def check_valid(input_string):
    if pattern.match(input_string) is None:
return "Invalid"
else:
return "Valid"
```
Here is the function:
```
>>> check_valid(invalid)
'Invalid'
>>> check_valid(invalid2)
'Invalid'
>>> check_valid(invalid3)
'Invalid'
>>> check_valid(valid)
'Valid'
```
|
Here you have some way to check if it's valid:
```
def is_valid(comma_sep_str):
if ';' in comma_sep_str or ',,' in comma_sep_str:
return 'Not valid'
else:
return 'Valid'
myString1 = "a,, b, c,,,,d, e,,; f, g"
myString2 = "a, b, c, d, e"
print(is_valid(myString1))
print(is_valid(myString2))
```
PS: Maybe it's not the most efficient, but it will check whether it's valid or not. Note that in all invalid cases you will always have at least one of these two: ",," or ";".
| 6,213
|
62,882,592
|
I have a `df` named `data` as follows:
```
id upper_ci lower_ci max_power_contractual
0 12858 60.19878860406808 49.827481214215204 0
1 12858 60.61189293066522 49.298784196530896 0
2 12858 60.34397624424309 49.718421137642885 70
3 12858 59.87472261936114 49.464255779713476 10
4 12858 60.2735279368527 49.41672240525131 0
```
I am trying to create a new column named `up_threshold` as follows:
* If the value of `max_power_contractual` is either zero (`0`) or `NaN`, then the value in the `up_threshold` should be the value in the `upper_ci`
* If the value of `max_power_contractual` is not zero, and the condition: `max_power_contractual > upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `upper_ci`
* If the value of `max_power_contractual < upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `max_power_contractual`
I tried:
```
if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']):
data['up_threshold'] = data['upper_ci']
elif (data['upper_ci'] > data['max_power_contractual'] == 0):
data['up_threshold'] = data['max_power_contractual']
```
But it gives me the following error:
```
Traceback (most recent call last):
  File "/home/cortex/.config/spyder-py3/temp.py", line 179, in <module>
    data = cp_detection(data, threshold)
  File "/home/cortex/.config/spyder-py3/temp.py", line 146, in cp_detection
    if data['max_power_contractual'] == 0:
  File "/home/cortex/.local/lib/python3.7/site-packages/pandas/core/generic.py", line 1479, in __nonzero__
    f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
Can someone please tell me my mistake and how I can solve it?
**Expected output:**
```
id upper_ci lower_ci max_power_contractual up_threshold
0 12858 60.19878860406808 49.827481214215204 0 60.19878860406808 (Since `max_power_contractual` value is 0)
1 12858 60.61189293066522 49.298784196530896 NaN 60.61189293066522 (Since `max_power_contractual` value is NaN)
2 12858 60.34397624424309 49.718421137642885 70 60.34397624424309 (Since `upper_ci < max_power_contractual`)
3 12858 59.87472261936114 49.464255779713476 10 10 (Since `upper_ci > max_power_contractual`)
```
|
2020/07/13
|
[
"https://Stackoverflow.com/questions/62882592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11853632/"
] |
You can use `np.where` to add the new column:
```
df['up_threshold'] = np.where(df['max_power_contractual'].fillna(0) == 0, df['upper_ci'],
np.where(df['max_power_contractual'] > df['upper_ci'], df['upper_ci'], df['max_power_contractual'])
)
print(df)
```
Prints:
```
id upper_ci lower_ci max_power_contractual up_threshold
0 12858 60.198789 49.827481 0.0 60.198789
1 12858 60.611893 49.298784 NaN 60.611893
2 12858 60.343976 49.718421 70.0 60.343976
3 12858 59.874723 49.464256 10.0 10.000000
4 12858 60.273528 49.416722 0.0 60.273528
```
|
Not the most efficient way, but easier to understand:
```
In [17]: def process(data):
...: result = None
...: if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']):
...: result = data['upper_ci']
...: elif (data['upper_ci'] > data['max_power_contractual']):
...: result = data['max_power_contractual']
...:
...: return result
...:
In [18]: df.apply(process, axis=1)
Out[18]:
0 60.198789
1 60.611893
2 60.343976
3 10.000000
4 60.273528
dtype: float64
In [19]: df["up_threshold"] = df.apply(process, axis=1)
In [20]: df
Out[20]:
id upper_ci lower_ci max_power_contractual up_threshold
0 12858 60.198789 49.827481 0 60.198789
1 12858 60.611893 49.298784 0 60.611893
2 12858 60.343976 49.718421 70 60.343976
3 12858 59.874723 49.464256 10 10.000000
4 12858 60.273528 49.416722 0 60.273528
```
| 6,214
|
39,666,183
|
I'm trying to extract all the images from a page. I have used Mechanize, urllib and Selenium to extract the HTML, but the part I want to extract is never there. Also, when I view the page source I am not able to view the part I want to extract. Instead of the description I want to extract, there is this:
```
<div class="loading32"></div>
</div>
</div>
</div>
```
But if I try to view it using the inspect-element option, it's there.
Is there an easy way to figure out what this script does without any JavaScript knowledge, so I can bypass it? Or is there a way to get an equivalent of inspect element using Selenium in Python 2.7? What is the difference between "View page source" and "Inspect element" anyway?
|
2016/09/23
|
[
"https://Stackoverflow.com/questions/39666183",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6736217/"
] |
Possibly you're trying to get elements that are created by a client-side script. I don't think JavaScript runs when you just send a GET/POST request (which is what I'm assuming you mean by "view source").
|
At the time I was not aware how much content is loaded in through JS after the page loads.
Mechanize does not have a JavaScript interpreter.
The way I ended up solving this was extracting the links from the \*.js file and redoing the GET command with urllib, getting the required content that way.
| 6,216
|
32,302,725
|
Hi there, I am new to OOP and Python. I am currently trying to increment a user ID variable from a child class; when I create an instance of the child class, it doesn't seem to recognise the ID variable from its parent class. Example here:
```
class User:
_ID = 0
def __init__(self, name):
self.name = name
self.id = self._ID
self.__class__._ID += 1
class Customer(User):
def __init__(self, name):
def lastname(self):
return "self.name.split()[-1]"
```
If I do this, I am able to access the attribute:
```
>> Chris = User("Christopher Allan")
>> Chris.id
>> 0
```
When I try to run
```
>> Andy = Customer('Andy Smith')
>> Andy.id
>> Traceback (most recent call last):
File "<pyshell#83>", line 1, in <module>
Andy.id
AttributeError: 'Customer' object has no attribute 'id'
```
**Update**
I completed the rest of the Customer class, which was the cause of the code not working for me. Sorry about that, people; I used `pass` before for brevity of the question and didn't test whether it would work with `pass` in the Customer class.
|
2015/08/31
|
[
"https://Stackoverflow.com/questions/32302725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2374021/"
] |
First, the way to get hold of the ListView itself is relatively easy. In an Activity subclass, you would do this:
```
ListView itemList = (ListView) findViewById(R.id.ItemList);
```
In your example above, the ArrayAdapter needs a layout id in its constructor. This layout should contain a single TextView element (or some subclass) which will be used to render the list item.
```
<TextView .... />
```
In many cases, the value
```
android.R.layout.simple_list_item_1
```
is sufficient. If you want different formatting, but still a single TextView, you can supply your own layout file in this constructor.
If you want anything more complex than a straight TextView, then you can create a subclass of ArrayAdapter and override the getView method. In this case, I'd recommend following the ViewHolder pattern as described here:
[How can I make my ArrayAdapter follow the ViewHolder pattern?](https://stackoverflow.com/questions/3832254/how-can-i-make-my-arrayadapter-follow-the-viewholder-pattern)
The reasoning for the ViewHolder pattern can be seen here
<http://developer.android.com/training/improving-layouts/smooth-scrolling.html>
Finally, set the adapter on the ListView when you're all done:
```
itemList.setAdapter(adapter);
```
As for what "R" is, it's a file generated by your IDE (Eclipse, IntelliJ, Android Studio) that lives in the main package (as indicated in your AndroidManifest.xml). Every time you create a new element in a layout file with a new id, an entry gets added to that class under the "R.id" scope. The same happens as you create layout files, drawables, dimension values, string values, etc.
If you're outside your main package, just use the IDE to help you import the class. Just take care to import the one from your package, as Android itself has an "android.R" for its own resources.
|
Writing apps for Android is much more complicated than writing apps for Windows in VB6. You should really study the basics and do some tutorials. Start [here!](http://developer.android.com/training/index.html)
But for your question: to get access to an XML control in your code, first you have to make an object for that control, e.g.
```
private Button button1;
```
then connect it with actual control from XML layout by method [findViewById()](http://developer.android.com/reference/android/app/Activity.html#findViewById(int))
```
button1 = (Button) findViewById(R.id.your_button_id_in_xml_layout);
```
| 6,217
|
51,420,803
|
I have been trying to install `python-poppler-qt4`, but it shows the error `ModuleNotFoundError: No module named sipdistutils`.
When I tried installing `sipdistutils`, it again showed an error.
**Error Message**
[](https://i.stack.imgur.com/m6Pdk.png)
|
2018/07/19
|
[
"https://Stackoverflow.com/questions/51420803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10104837/"
] |
I have found a similar issue here: <https://github.com/wbsoft/python-poppler-qt5/issues/14>
I think that `sipdistutils` should be part of `sip` package. Please verify if you have it installed:
```
$ pip freeze | grep sip
sip==4.19.1
```
If there's no output install it with `pip install sip`.
If this won't work some proposed solutions:
1. >
> It seems like the pip version of sip does not install sipdistutils. To install it from source, you can do that:
> `wget https://sourceforge.net/projects/pyqt/files/sip/sip-4.19.3/sip-4.19.3.tar.gz
> tar zxvf sip-4.19.3.tar.gz
> cd sip-4.19.3
> python configure.py
> make
> make install`
>
>
>
2. >
> You can get the sipdistutils.py from riverbank's mercurial server, i.e. from [here](https://www.riverbankcomputing.com/hg/sip/file/14685a6e736e/sipdistutils.py). It is self-contained. Just place it into your Python site-packages folder
>
>
>
|
It is true that using `sipdistutils` for building Python extensions is no longer the way to do things. So the absolute fix is to modify the build procedure for the package, but since I am not in control of that (though I may try to find time to contribute to the project), I found a work-around.
In our case, on Ubuntu 20.04, we're using `pyenv` with the `virtualenv` plugin to create virtual environments for our applications, and it is within these environments that we run into the issue. I did a little digging and figured out that if you have three source files in your build path, it can be made to work:
* `sipdistutils.py` which is provided by `python3-sip-dev` (apt, v4.19.21)
* `sipconfig.py` which is provided by `python3-sip` (apt, v4.19.21)
* `sipconfig_nd8.py` which is provided by `python3-sip` (apt, v4.19.21)
*note: these are the v4.19 files; they are not present in newer versions of sip*
Now, simply installing those with `apt` would be enough if we were using the apt installed python, since we are not, I simply copied those three files from their default installed location to a custom path that we are using via `PYTHONPATH`.
| 6,221
|
52,750,669
|
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists at once rather than element by element.
For example, I have the lists a, b and c; the result after applying the function should be V. Thanks.
```
def mag(a, b, c):
# something sophisticated
return (a+b)*c
a = [1, 5, 7]
b = [4, 8, 3]
c = [2, 6, 3]
V = [10, 78, 30]
```
|
2018/10/11
|
[
"https://Stackoverflow.com/questions/52750669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8766822/"
] |
You want to first zip the arguments, then map the function on the unpacked tuples:
```
from itertools import starmap
starmap(mag, zip(a,b,c))
```
See [here](https://ideone.com/uTNa5L) for an example.
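Put together with the question's data, a self-contained sketch (note that `itertools.starmap` returns a lazy iterator, so wrap it in `list()` to materialize the results):
```
from itertools import starmap

def mag(a, b, c):
    return (a + b) * c

a = [1, 5, 7]
b = [4, 8, 3]
c = [2, 6, 3]

# zip pairs up the elements; starmap unpacks each tuple into mag's arguments
V = list(starmap(mag, zip(a, b, c)))
print(V)  # [10, 78, 30]
```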
|
What about using only built-in functions? Like [zip](https://docs.python.org/3.3/library/functions.html#zip)
```
>>> [mag(a_, b_, c_) for a_,b_,c_ in zip(a, b, c)]
[10, 78, 30]
```
Plus another python buit-in function, [map](https://docs.python.org/3.3/library/functions.html#map) which returns an
iterator and thus makes things go faster and ends up saving memory:
```
>>> gen = map(lambda uple:mag(*uple), zip(a, b, c))
>>> list(gen)
[10, 78, 30]
```
| 6,222
|
12,246,908
|
What is my requirement?
--> I need an exception notifier which will email specific configured users about any sort of exception occurring in a plain Python app and web.py.
I want something similar to this <http://matharvard.ca/posts/2011/jul/31/exception-notification-for-rails-3/>
Is there anything of that sort available?
Please reply asap.
Thanks.
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12246908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486565/"
] |
You can get what you want by:
* Wrapping your code in `try..except` clause.
* Using `logging` module to log the exceptions with a certain level of severity e.g `ERROR`.
* Setting an `SMTPHandler` for exceptions of and above certain level.
This way is quite flexible. Your messages can be sent to several places (like log files) and you can reconfigure your settings easily.
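A minimal sketch of that combination (the SMTP host and addresses below are placeholders you would replace):
```
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Placeholder SMTP details; substitute your own server and addresses.
smtp_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 25),
    fromaddr="app@example.com",
    toaddrs=["admin@example.com"],
    subject="Application exception",
)
smtp_handler.setLevel(logging.ERROR)  # only ERROR and above get emailed
logger.addHandler(smtp_handler)

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Unhandled error")  # logs at ERROR level with the traceback
```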
|
You can overwrite the `excepthook` function from the [`sys`](http://docs.python.org/library/sys.html#sys.excepthook) module, and handle any uncought exceptions there.
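If you go that route, a minimal sketch (the notification step is left as a placeholder print):
```
import sys

def notifying_excepthook(exc_type, exc_value, exc_tb):
    # Placeholder notification; an email could be sent here instead.
    print("Uncaught exception: %s: %s" % (exc_type.__name__, exc_value))
    # Fall back to the default handler so the traceback still prints.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = notifying_excepthook
```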
| 6,227
|
3,027,394
|
What does the '~' mean in Python?
I found this BF interpreter in Python a while ago.
```
import sys
#c,i,r,p=0,0,[0]*255,raw_input()
c=0
i=0
p=raw_input()
r=[0]*255
while c<len(p):
m,n,u=p[c],0,r[i]
if m==">":i+=1
if m=="<":i-=1
if m=="+":r[i]+=1
if m=="-":r[i]-=1
if m==".":sys.stdout.write(chr(u))
if m=="[":
if ~u:
while 1:
m=p[c]
if m=="]":n-=1
if m=="[":n+=1
if ~n:break
c+=1
if m=="]":
if u:
while 1:
m=p[c]
if m=="]":n-=1
if m=="[":n+=1
if ~n:break
c-=1
c+=1
```
and I want to know what it does because I want to make one on my TI-84 (and a PF one)
BF is <http://en.wikipedia.org/wiki/Brainfuck>
and PF is something similar
|
2010/06/12
|
[
"https://Stackoverflow.com/questions/3027394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/365060/"
] |
Bitwise NOT, just like in C.
In two's complement representation, `~n` is equivalent to `-n - 1`.
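So in the interpreter above, a test like `if ~u:` is truthy for every value except `-1`, since `~(-1) == 0`. For example:
```
>>> ~0
-1
>>> ~5
-6
>>> ~-1
0
```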
|
`~` is bitwise-not.
I can't really think of a good way to illustrate it (unless you know that `-1` is the bitwise negation of `0`), but [the wikipedia entry](http://en.wikipedia.org/wiki/Bitwise_NOT#NOT) is pretty good.
| 6,231
|
67,183,501
|
```
import os
import numpy as np
from scipy.signal import *
import csv
import matplotlib.pyplot as plt
from scipy import signal
from brainflow.board_shim import BoardShim, BrainFlowInputParams, LogLevels, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes, AggOperations, WindowFunctions, DetrendOperations
from sklearn.cluster import KMeans
#Options to read: 'EEG-IO', 'EEG-VV', 'EEG-VR', 'EEG-MB'
data_folder = 'EEG-IO'
# Parameters and bandpass filtering
fs = 250.0
# Reading data files
file_idx = 0
list_of_files = [f for f in os.listdir(data_folder) if os.path.isfile(os.path.join(data_folder, f)) and '_data' in f] #List of all the files, Lists are randomized, its only looking for file with _data in it
print(list_of_files)
file_sig = list_of_files[file_idx] # Data File
file_stim = list_of_files[file_idx].replace('_data','_labels') #Label File, Replacing _data with _labels
print ("Reading: ", file_sig, file_stim)
# Loading data
if data_folder == 'EEG-IO' or data_folder == 'EEG-MB':
data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=";", skiprows=1, usecols=(0,1,2)) #data_sig would be a buffer
elif data_folder == 'EEG-VR' or data_folder == 'EEG-VV':
data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=",", skiprows=5, usecols=(0,1,2))
data_sig = data_sig[0:(int(200*fs)+1),:] # getting data ready -- not needed for previous 2 datasets
data_sig = data_sig[:,0:3] #
data_sig[:,0] = np.array(range(0,len(data_sig)))/fs
############ Calculating PSD ############
index, ch = data_sig.shape[0], data_sig.shape[1]
# print(index)
feature_vectors = [[], []]
feature_vectorsa = [[], []]
feature_vectorsb = [[], []]
feature_vectorsc = [[], []]
#for x in range(ch):
#for x in range(1,3):
#while x <
#while x>0:
x=1
while x>0 and x<3:
if x==1:
data_sig[:,1] = lowpass(data_sig[:,1], 10, fs, 4)
elif x==2:
data_sig[:,2] = lowpass(data_sig[:,2], 10, fs, 4)
for y in range(500, 19328 ,500):
#print(ch)
if x==1:
DataFilter.detrend(data_sig[y-500:y, 1], DetrendOperations.LINEAR.value)
psd = DataFilter.get_psd_welch(data_sig[y-500:y, 1], nfft, nfft//2, 250,
WindowFunctions.BLACKMAN_HARRIS.value)
band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0)
# Theta 4-8
band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0)
#Alpha 8-12
band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0)
#Beta 12-30
band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0)
# print(feature_vectors.shape)
feature_vectors[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta])
feature_vectorsa[x].insert(y, [band_power_delta, band_power_theta])
elif x==2:
DataFilter.detrend(data_sig[y-500:y, 2], DetrendOperations.LINEAR.value)
psd = DataFilter.get_psd_welch(data_sig[y-500:y, 2], nfft, nfft//2, 250,
WindowFunctions.BLACKMAN_HARRIS.value)
band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0)
# Theta 4-8
band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0)
#Alpha 8-12
band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0)
#Beta 12-30
band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0)
# print(feature_vectors.shape)
# feature_vectorsc[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta])
# feature_vectorsd[x].insert(y, [band_power_delta, band_power_theta])
x = x+1
print(feature_vectorsa)
powers = np.log10(np.asarray(feature_vectors, dtype=float))
powers1 = np.log10(np.asarray(feature_vectorsa, dtype=float))
# powers2 = np.log10(np.asarray(feature_vectorsb))
# powers3 = np.log10(np.asarray(feature_vectorsc))
print(powers.shape)
print(powers1.shape)
```
Super confused. When I run my code, I keep on getting this error:
>
> ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
>
>
>
Traceback:
```
Traceback (most recent call last):
  File "/Users/mikaelhaji/Downloads/EEG-EyeBlinks/read_data.py", line 170, in <module>
    powers = np.log10(np.asarray(feature_vectors, dtype=float))
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/_asarray.py", line 102, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
```
If you have any thoughts/ answers as to why this may be occurring, please let me know.
Thanks in advance for the responses.
|
2021/04/20
|
[
"https://Stackoverflow.com/questions/67183501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15708550/"
] |
Here's a simple case that produces your error message:
```
In [19]: np.asarray([[1,2,3],[4,5]],float)
Traceback (most recent call last):
File "<ipython-input-19-72fd80bc7856>", line 1, in <module>
np.asarray([[1,2,3],[4,5]],float)
File "/usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py", line 102, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
```
If I omit the `float`, it makes an object dtype array - with warning.
```
In [20]: np.asarray([[1,2,3],[4,5]])
/usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return array(a, dtype, copy=False, order=order)
Out[20]: array([list([1, 2, 3]), list([4, 5])], dtype=object)
```
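In other words, either make the inner lists equal length before converting, or explicitly pass `dtype=object` to opt in to a ragged array. A sketch of both options:
```
import numpy as np

# Option 1: pad the short row so every row has the same length
arr = np.asarray([[1, 2, 3], [4, 5, 0]], dtype=float)

# Option 2: explicitly request a ragged object array (no float math on it, though)
ragged = np.array([[1, 2, 3], [4, 5]], dtype=object)
```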
|
I was getting the same error.
I was opening a txt file that contains a table of values and saving it into a NumPy array, defining the dtype as float since otherwise the numbers would be strings.
```
with open(dirfile) as fh:
next(fh)
header = next(fh)[2:]
next(fh)
data = np.array([line.strip().split() for line in fh], float)
```
For the previous files it worked perfectly; however, for the last file it did not:
**The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (35351,) + inhomogeneous part.**
However, when I ran `data = np.loadtxt(fh)` a new error appeared: **Wrong number of columns at line 35351**
So, my problem was that the last line of the file was missing the values of the two last columns. I corrected it in the txt file, since I wanted to keep the same structure of a numpy array with dtype=float, and everything worked fine.
| 6,235
|
68,654,663
|
I'm trying to aggregate my data by getting the sum every 30 seconds. I would like to know if the result of this aggregation is zero, this will happen if there are no rows in that 30s region.
Here's a minimal working example illustrating the result I would like with pandas, and where it falls short with pyspark.
Input data
==========
```py
import pandas as pd
from pyspark.sql import functions as F
df = pd.DataFrame(
[
(17, "2017-03-10T15:27:18+00:00"),
(13, "2017-03-10T15:27:29+00:00"),
(25, "2017-03-10T15:27:30+00:00"),
(101, "2017-03-10T15:29:00+00:00"),
(99, "2017-03-10T15:29:29+00:00")
],
columns=["dollars", "timestamp"],
)
df["timestamp"] = pd.to_datetime(df["timestamp"])
print(df)
```
```
dollars timestamp
0 17 2017-03-10 15:27:18+00:00
1 13 2017-03-10 15:27:29+00:00
2 25 2017-03-10 15:27:30+00:00
3 101 2017-03-10 15:29:00+00:00
4 99 2017-03-10 15:29:29+00:00
```
Pandas solution
===============
With pandas, we can use resample to aggregate every 30 second window, and then apply the sum function over these windows (note the results for `2017-03-10 15:28:00+00:00`, and `2017-03-10 15:28:30+00:00`):
```py
desired_result = df.set_index("timestamp").resample("30S").sum()
desired_result
```
```
dollars
timestamp
2017-03-10 15:27:00+00:00 30
2017-03-10 15:27:30+00:00 25
2017-03-10 15:28:00+00:00 0
2017-03-10 15:28:30+00:00 0
2017-03-10 15:29:00+00:00 200
```
PySpark near solution
=====================
In pyspark, we can use [`pyspark.sql.functions.window`](http://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.window.html) to window over every 30 seconds (adapted, with thanks from [this stack answer](https://stackoverflow.com/a/65839905/2550114)), but this will miss out the window where there are no rows:
```py
spark: pyspark.sql.session.SparkSession # I expect you to have set up your session...
sdf = spark.createDataFrame(df)
sdf.groupby(
F.window("timestamp", windowDuration="30 seconds", slideDuration="30 seconds")
).agg(F.sum("dollars")).display()
```
```
window,sum(dollars)
"{""start"":""2017-03-10T15:27:30.000+0000"",""end"":""2017-03-10T15:28:00.000+0000""}",25
"{""start"":""2017-03-10T15:27:00.000+0000"",""end"":""2017-03-10T15:27:30.000+0000""}",30
"{""start"":""2017-03-10T15:29:00.000+0000"",""end"":""2017-03-10T15:29:30.000+0000""}",200
```
Question
========
How do I get pyspark to return window results for time window where there are no rows (like pandas)?
|
2021/08/04
|
[
"https://Stackoverflow.com/questions/68654663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2550114/"
] |
The HTML5 specification now has a whole section on [Button Layout](https://html.spec.whatwg.org/multipage/rendering.html#button-layout)
Sometimes it's treated like a replaced element, and sometimes like an inline-block element. But it's never treated as a non-replaced inline element.
In detail, it says that:
>
> Button layout is as follows:
>
>
> The 'display' property is expected to act as follows:
>
>
> If the computed value of 'display' is 'inline-grid', 'grid',
> 'inline-flex', or 'flex', then behave as the computed value.
>
>
> Otherwise, if the computed value of 'display' is a value such that
> the outer display type is 'inline', then behave as 'inline-block'.
>
>
> Otherwise, behave as 'flow-root'.
>
>
> ...
>
>
> If the element is absolutely-positioned, then for the purpose of the CSS
> visual formatting model, act as if the element is a replaced element. [CSS]
>
>
> If the computed value of 'inline-size' is 'auto', then the used value is
> the fit-content inline size.
>
>
> For the purpose of the 'normal' keyword of the 'align-self' property, act as if the element is a replaced element.
>
>
> ...
>
>
>
|
If you want more clarification... it seems that the **button element is a replaced element** in most modern browsers, today and in the past, which means that no matter how you style it, even after clearing the default UA browser styles, it still retains width and height characteristics regardless of display properties. It therefore has design characteristics tied to the browser and OS that override both the browser's default UA style sheet and the author's styles, UNLIKE non-replaced elements, which can be changed.
Take the following test that demonstrates that:
```
<style type="text/css">
button,
p,
div {
all:revert;
all:unset;
all:initial;
display:initial;
width:initial;
height:initial;
display:inline !important;
width:100px;
height:100px;
background:green;
color:white;
text-align:center;
}
</style>
<button>button</button>
<br />
<p>paragraph</p>
<br />
<div>div</div>
```
When the `<button>`, `<p>`, and `<div>` elements are completely cleared of their CSS properties (`all:revert` and `display:initial`), then `display:inline` set with width and height, only `<p>` and `<div>` lose dimension. But the button element in modern browsers (Chrome and Firefox) still retains its "special" replaced ability to regain dimensions, regardless. Therefore, yes its "replaced" status affects its width and height characteristics.
Additional: If you set the dimensions above to "0px", the button element's background collapses but the "clickable" interface dimensions on the button element do not. The text area on the button is still clickable in most modern browsers. In Safari and Internet Explorer, the button becomes tiny but still exists with dimensions and is clickable.
The point is, yes these replaced elements have dimensions you can control but not entirely erase.
| 6,238
|
43,934,830
|
According to [PythonCentral](http://pythoncentral.io/pyside-pyqt-tutorial-qwebview/) :
>
> QWebView ... allows you to display web pages from URLs, arbitrary HTML, *XML with XSLT stylesheets*, web pages constructed as QWebPages, and other data whose MIME types it knows how to interpret
>
>
>
However, the XML contents are displayed as if interpreted as HTML; that is, the tags are filtered away and the text nodes shown without line breaks.
**Question is: how do I show XML in QWebView with the XSL stylesheet applied?**
The same XML file opened in any stand-alone web browser displays fine. The HTML file resulting from the transformed XML (via lxml.etree) also displays well in QWebView.
Here is my (abbreviated) xml file:
```
<?xml version='1.0' encoding='UTF-8'?>
<?xml-stylesheet type="text/xsl" href="../../page.xsl"?>
<specimen>
...
</specimen>
```
|
2017/05/12
|
[
"https://Stackoverflow.com/questions/43934830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/508402/"
] |
Try converting to NSData, then storing it to NSUserDefaults as below:
```
func saveListIdArray(_ params: NSMutableArray = []) {
let data = NSKeyedArchiver.archivedData(withRootObject: params)
UserDefaults.standard.set(data, forKey: "test")
UserDefaults.standard.synchronize()
}
```
For retrieving the data use
```
if let data = UserDefaults.standard.object(forKey: "test") as? Data {
if let storedData = NSKeyedUnarchiver.unarchiveObject(with: data) as? NSMutableArray
{
// In here you can access your array
}
}
```
|
You are force unwrapping the NSMutableArray for a key. Don't force unwrap when you try to get the value from a dictionary or UserDefault for a key because there may be a chance that the value does not exist for that key and force unwrapping will crash your app.
Do this as:
```
//to get array from user default
if let array = UserDefaults.standard.object(forKey: "ArrayKey") as? NSMutableArray {
    print(array)
}
```
| 6,239
|
39,010,366
|
While executing the code below, I'm getting `AttributeError: attribute '__doc__' of 'type' objects is not writable`.
```
from functools import wraps
def memoize(f):
""" Memoization decorator for functions taking one or more arguments.
Saves repeated api calls for a given value, by caching it.
"""
@wraps(f)
class memodict(dict):
"""memodict"""
def __init__(self, f):
self.f = f
def __call__(self, *args):
return self[args]
def __missing__(self, key):
ret = self[key] = self.f(*key)
return ret
return memodict(f)
@memoize
def a():
"""blah"""
pass
```
Traceback:
```none
AttributeError Traceback (most recent call last)
<ipython-input-37-2afb130b1dd6> in <module>()
17 return ret
18 return memodict(f)
---> 19 @memoize
20 def a():
21 """blah"""
<ipython-input-37-2afb130b1dd6> in memoize(f)
7 """
8 @wraps(f)
----> 9 class memodict(dict):
10 """memodict"""
11 def __init__(self, f):
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.pyc in update_wrapper(wrapper, wrapped, assigned, updated)
31 """
32 for attr in assigned:
---> 33 setattr(wrapper, attr, getattr(wrapped, attr))
34 for attr in updated:
35 getattr(wrapper, attr).update(getattr(wrapped, attr, {}))
AttributeError: attribute '__doc__' of 'type' objects is not writable
```
Even though the doc string is provided, I don't know what's wrong with this.
It works fine if not wrapped, but I need to do this.
|
2016/08/18
|
[
"https://Stackoverflow.com/questions/39010366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2264738/"
] |
`@wraps(f)` is primarily designed to be used as a *function* decorator, rather than as a class decorator, so using it as the latter may lead to the occasional odd quirk.
The specific error message you're receiving relates to a limitation of builtin types on Python 2:
```
>>> class C(object): pass
...
>>> C.__doc__ = "Not allowed"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute '__doc__' of 'type' objects is not writable
```
If you use Python 3, switch to a classic class in Python 2 (by inheriting from `UserDict.UserDict` rather than the `dict` builtin), or use a closure to manage the result cache rather than a class instance, the decorator will be able to copy the docstring over from the underlying function.
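For the last option, a minimal sketch of a closure-based memoizer; `@wraps` works here because it decorates a plain function rather than a class:
```
from functools import wraps

def memoize(f):
    """Memoization decorator backed by a dict held in a closure."""
    cache = {}

    @wraps(f)  # copies __doc__, __name__, etc. without error
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]

    return wrapper
```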
|
The `wraps` decorator you're trying to apply to your class doesn't work because you can't modify the docstring of a class after it has been created. You can recreate the error with this code:
```
class Foo(object):
"""inital docstring"""
Foo.__doc__ = """new docstring""" # raises an exception in Python 2
```
The exception doesn't occur in Python 3 (I'm not exactly sure why it's changed).
A workaround might be to assign the class variable `__doc__` in your class, rather than using `wraps` to set the docstring after the class exists:
```
def memoize(f):
""" Memoization decorator for functions taking one or more arguments.
Saves repeated api calls for a given value, by caching it.
"""
class memodict(dict):
__doc__ = f.__doc__ # copy docstring to class variable
def __init__(self, f):
self.f = f
def __call__(self, *args):
return self[args]
def __missing__(self, key):
ret = self[key] = self.f(*key)
return ret
return memodict(f)
```
This won't copy any of the other attributes that `wraps` tries to copy (like `__name__`, etc.). You may want to fix those up yourself if they're important to you. The `__name__` attribute however needs to be set after the class is created (you can't assign it in the class definition):
```
class Foo(object):
__name__ = "Bar" # this has no effect
Foo.__name__ = "Bar" # this works
```
| 6,241
|
65,460,702
|
I am having an issue with my personal project. My Python skills are pretty basic, but any help would be greatly appreciated.
Question:
TASK 1
To simulate the monitoring required, write a routine that allows entry of the baby’s temperature in
degrees Celsius. The routine should check whether the temperature is within the acceptable range, too
high or too low and output a suitable message in each case.
TASK 2
Write another routine that stores the temperatures taken over a three hour period in an array. This
routine should output the highest and lowest temperatures and calculate the difference between these
temperatures.
NOTE: MORE emphasis on task 2
my failed attempt:
```
from array import array
print("BABY TEMPERATURE CHECKER")
MinBbyTemp = float(36.0)
MaxBbyTemp = float(37.5)
routTemp = array("i", [])
BabyTemp = float(input("What is the temperature of the baby?"))
if BabyTemp < MinBbyTemp:
print("The temperature of the baby is low/unusual and needs to be worked on")
elif BabyTemp > MaxBbyTemp:
print("The temperature of the baby is too high and above the average")
else:
print("The temperature inputted is out of range")
```
|
2020/12/26
|
[
"https://Stackoverflow.com/questions/65460702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14861692/"
] |
According to the discord.py docs, [bot.run()](https://discordpy.readthedocs.io/en/latest/api.html#discord.Client.run) is "A blocking call that abstracts away the event loop initialisation from you." They further say that if we want more control over the loop we can use the start() coroutine instead of run(). So we should create a task for calling this coroutine. We also know that [discord.py](https://discordpy.readthedocs.io/en/latest/) and [FastAPI](https://fastapi.tiangolo.com/) are both [asynchronous](https://docs.python.org/3/library/asyncio.html) applications. To start a FastAPI app you need an ASGI server to handle it; in this case, we're using [Uvicorn](https://www.uvicorn.org/). So far we have the FastAPI app running; now we need to start our Discord bot. According to the FastAPI docs, we can use a [startup/shutdown event](https://fastapi.tiangolo.com/advanced/events/) to call the bot.start() coroutine before the main API starts.
Here is an example of an app which has an API endpoint for sending a message to a discord's user:
```
import asyncio
import discord
import uvicorn
from config import TOKEN, USER_ID
from fastapi import FastAPI
app = FastAPI()
bot = discord.Client()
@app.on_event("startup")
async def startup_event(): # this function will run before the main API starts
asyncio.create_task(bot.start(TOKEN))
await asyncio.sleep(4) #optional sleep for established connection with discord
print(f"{bot.user} has connected to Discord!")
@app.get("/")
async def root(msg: str): #API endpoint for sending a message to a discord's user
user = await send_message(msg)
return {"Message": f"'{msg}' sent to {user}"}
async def send_message(message):
user = await bot.fetch_user(USER_ID)
await user.send(message)
return user #for optional log in the response of endpoint
if __name__ == "__main__":
uvicorn.run(app, host="localhost", port=5000)
```
Tested with Python 3.7.4
|
You are not returning anything from your `send_message` function. Something like this should do:
```py
@app.post("/items/")
async def create_item(item: Item):
msg = await send_message()
return msg
async def send_message():
user = await bot.fetch_user(USER_ID)
return await user.send('')
```
| 6,243
|
57,445,907
|
I want to detect malicious sites using Python.
I've tried using the `requests` module to get the contents of a website and then searching for `malicious words` in it, but I didn't get it to work.
[](https://i.stack.imgur.com/f3vOj.png)
This is my full code: [link code](https://pastebin.com/t43WqW8U)
```
req_check = requests.get(url)
if 'malicious words' in req_check.content:
print ('[Your Site Detect Red Page] ===> '+url)
else:
print ('[Your Site Not Detect Red Page] ===> '+url)
```
|
2019/08/10
|
[
"https://Stackoverflow.com/questions/57445907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10675882/"
] |
It doesn't work because you're using the `requests` library wrong.
In your code, you essentially only get the HTML of the virus site (lines of code: `req_check = requests.get(url, verify=False)` and `if 'example for detect ' in req_check.content:`; source: <https://pastebin.com/6x24SN6v>)
In Chrome, the browser runs through a database of known virus links (it's more complicated than this) and sees if the link is safe. However, the `requests` library **does not** do this. Instead, you're better off using their API. If you want to see how the API can be used in conjunction with `requests`, you can see my answer on another question: [Is there a way to extract information from shadow-root on a Website?](https://stackoverflow.com/questions/57281787/is-there-a-way-to-extract-information-from-shadow-root-on-a-website/57299979#57299979)
Side note: `redage()` is never called?
|
Tell the user to enter a website, then use Selenium or something to upload the URL to virustotal.com
| 6,245
|
54,806,005
|
I have a list of dictionaries that looks something like this:
```
list = [{"id":1,"path":"a/b", ........},
{"id":2,"path":"a/b/c", ........},
{"id":3,"path":"a/b/c/d", ........}]
```
Now I want to create a dict of path-to-id mappings.
That should look something like this:
```
d=dict()
d["a/b"] = 1
d["a/b/c"] = 2
d["a/b/c/d"] = 3
```
How do I create it in a Pythonic way?
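For reference, a dict comprehension is one minimal way (a sketch; `records` stands in for the list above, renamed so it does not shadow the `list` builtin):
```
records = [{"id": 1, "path": "a/b"},
           {"id": 2, "path": "a/b/c"},
           {"id": 3, "path": "a/b/c/d"}]

d = {rec["path"]: rec["id"] for rec in records}
print(d)  # {'a/b': 1, 'a/b/c': 2, 'a/b/c/d': 3}
```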
|
2019/02/21
|
[
"https://Stackoverflow.com/questions/54806005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9593015/"
] |
Make outer div `fixed` then inner 4 div element will show next to each other.
```css
.groups {
display: flex;
display: -webkit-flex;
position: fixed;
margin: 0 auto;
left: 0;
right: 0;
justify-content: space-around;
max-width: 500px;
}
.g{
height: 70px;
width: 70px;
background-color: black;
margin: 5px;
}
```
```html
<div class="groups">
<div class="g g1"></div>
<div class="g g2"></div>
<div class="g g3"></div>
<div class="g g4"></div>
</div>
```
|
Otherwise you can do it with flex:
```css
.g{
height: 70px;
width: 70px;
background-color: black;
margin: 5px;
}
.groups {
display: flex;
justify-content: space-between;
width: 400px
}
```
```html
<div class="groups">
<div class="g g1"></div>
<div class="g g2"></div>
<div class="g g3"></div>
<div class="g g4"></div>
</div>
```
| 6,247
|
29,704,139
|
I am trying to apply `_pickle` to save data onto disk. But when calling `_pickle.dump`, I got an error
```
OverflowError: cannot serialize a bytes object larger than 4 GiB
```
Is this a hard limitation of `_pickle`? (`cPickle` for Python 2)
|
2015/04/17
|
[
"https://Stackoverflow.com/questions/29704139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2880978/"
] |
Not anymore in Python 3.4 which has PEP 3154 and Pickle 4.0
<https://www.python.org/dev/peps/pep-3154/>
But you need to say you want to use version 4 of the protocol:
<https://docs.python.org/3/library/pickle.html>
```
pickle.dump(d, open("file", 'wb'), protocol=4)  # note: pickle files must be opened in binary mode
```
|
Yes, this is a hard-coded limit; from [`save_bytes` function](https://hg.python.org/cpython/file/2d8e4047c270/Modules/_pickle.c#l1958):
```c
else if (size <= 0xffffffffL) {
// ...
}
else {
PyErr_SetString(PyExc_OverflowError,
"cannot serialize a bytes object larger than 4 GiB");
return -1; /* string too large */
}
```
The protocol uses 4 bytes to write the size of the object to disk, which means you can only track sizes of up to 2**32 == 4 GiB.
If you can break up the `bytes` object into multiple objects, each smaller than 4GB, you can still save the data to a pickle, of course.
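A minimal sketch of that chunking idea (hypothetical helpers; `f` must be a file opened in binary mode):
```
import pickle

CHUNK = 2 ** 31  # comfortably below the 4 GiB per-object limit

def dump_big_bytes(data, f):
    # Split one huge bytes object into separately pickled chunks.
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    pickle.dump(chunks, f)

def load_big_bytes(f):
    return b"".join(pickle.load(f))
```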
| 6,251
|
53,012,388
|
When I do `python -mzeep https://testingapi.ercot.com/2007-08/Nodal/eEDS/EWS/?WSDL`
the operations are blank. When I pull that up in a browser I can find many things under an `<operation>` tag. What am I missing?
I'm not sure if this is relevant but I hate to exclude this info if it is. The site has a zip file of XSDs and WSDL files that I don't know what to do with [here](http://www.ercot.com/content/wcm/lists/89535/External_Web_Services_XSD_V1.20K.zip).
|
2018/10/26
|
[
"https://Stackoverflow.com/questions/53012388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1818713/"
] |
>
> I actually don't need those serialized into the JSON file
>
>
>
In JSON.NET there is a `[JsonIgnore]` attribute that you can decorate properties with.
Related: [Newtonsoft ignore attributes?](https://stackoverflow.com/questions/6309725/newtonsoft-ignore-attributes)
|
To serialize and write it to a file, just do this:
```
string json = JsonConvert.SerializeObject(theme);
System.IO.File.WriteAllText("yourfile.json", json);
```
| 6,254
|
53,356,449
|
I'm having trouble carrying out what I think should be a pretty straightforward task on a NIDAQ usb6002: I have a low frequency sine wave that I'm measuring at an analog input channel, and when it crosses zero I would like to light an LED for 1 second. I'm trying to use the nidaqmx Python API, but haven't been able to clear up some of my basic questions with the documentation. <https://nidaqmx-python.readthedocs.io/en/latest/>
If anyone can offer any thoughts about the code or the basic logic of my setup, that would be hugely helpful.
Here's what I have tried so far. I start with some imports and the definition of my channels:
```
import matplotlib.pyplot as plt
from math import *
import nidaqmx
from nidaqmx import *
from nidaqmx.constants import *
import time
V_PIN = "Dev1/ai6"
LED_PIN = "Dev1/ao0"
```
I understand how tasks and things work generally- I can read and plot a signal of a given sampling rate and number of samples using task.ai\_channels methods without any trouble. But here's my best guess at how to carry out "detect zero and trigger output":
```
writeLED = nidaqmx.Task('LED')
writeLED.ao_channels.add_ao_voltage_chan(LED_PIN)
writeLED.timing.cfg_samp_clk_timing(1)
writeLED.triggers.start_trigger.cfg_anlg_edge_start_trig(V_PIN,trigger_level = 0)
writeLED.write([5], auto_start=True)
```
This gives me the error below at the cfg\_anlg\_edge line
```
DaqError: Requested value is not a supported value for this property. The property value may be invalid because it conflicts with another property.
Property: DAQmx_StartTrig_Type
Requested Value: DAQmx_Val_AnlgEdge
Possible Values: DAQmx_Val_DigEdge, DAQmx_Val_None
```
I don't know why an analog input channel wouldn't be supported here. Page 245 of this document makes it sound like it should be: <https://media.readthedocs.org/pdf/nidaqmx-python/latest/nidaqmx-python.pdf>
I'm sure there are other problems with the code, too. For example, it seems like the sample clock manipulations are quite a bit more complicated than what I've written above, but I haven't been able to find anything that explains how it would work in this situation.
Thanks in advance for any help!
|
2018/11/17
|
[
"https://Stackoverflow.com/questions/53356449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10397841/"
] |
In short: Web worksers do not ignore messages even if the web worker thread is blocked.
All browsers events, including web worker `postMessage()`/`onmessage()` events are queued. This is the fundamental philosophy of JavaScript (`onmessage()` is done in JS even if you use WebAssembly). Have a look at ["Concurrency model and Event Loop" from MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop) for further detail.
So what is going to happen in your case: while `onmessage()` is blocked, the events from the main thread's `postMessage()` calls are queued automatically. When a single `onmessage()` job finishes, the worker thread checks its event queue, sees whether `postMessage()` was called in the meantime, and picks up the message if there is one. So you don't need to worry about that case, unless a single `onmessage()` job takes something like 10 seconds and hundreds of events pile up in the queue.
This is how asynchronous execution is done everywhere in the browser.
|
Considering you are targeting recent browsers (WebAssembly), you can most likely rely on SharedArrayBuffer and Atomics. Have a look at these solutions [Is it possible to pause/resume a web worker externally?](https://stackoverflow.com/questions/57701464/is-it-possible-to-pause-resume-a-web-worker-externally/71888014#71888014) , which in your case will need to be handled inside WebAssembly (`Atomics.wait` part)
| 6,255
|
30,019,283
|
I was wondering how to parse the CURL JSON output from the server into variables.
Currently, I have -
```
curl -X POST -H "Content: agent-type: application/x-www-form-urlencoded" https://www.toontownrewritten.com/api/login?format=json -d username="$USERNAME" -d password="$PASSWORD" | python -m json.tool
```
But it only outputs the JSON from the server and then have it parsed, like so:
```
{
    "eta": "0",
    "position": "0",
    "queueToken": "6bee9e85-343f-41c7-a4d3-156f901da615",
    "success": "delayed"
}
```
But how do I put, for example, the success value returned from the server into a variable $SUCCESS (with the value delayed), and likewise put the queueToken value 6bee9e85-343f-41c7-a4d3-156f901da615 into a variable $queueToken?
Then when I use-
```
echo "$SUCCESS"
```
it shows this as the output -
```
delayed
```
And when I use
```
echo "$queueToken"
```
and the output as
```
6bee9e85-343f-41c7-a4d3-156f901da615
```
Thanks!
|
2015/05/03
|
[
"https://Stackoverflow.com/questions/30019283",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3152204/"
] |
Find and install `jq` (<https://stedolan.github.io/jq/>). `jq` is a JSON parser. JSON is not reliably parsed by line-oriented tools like `sed` because, like XML, JSON is not a line-oriented data format.
In terms of your question:
```
source <(
curl -X POST -H "$content_type" "$url" -d username="$USERNAME" -d password="$PASSWORD" |
jq -r '. as $h | keys | map(. + "=\"" + $h[.] + "\"") | .[]'
)
```
The `jq` syntax is a bit weird, I'm still working on it. It's basically a series of filters, each pipe taking the previous input and transforming it. In this case, the end result is some lines that look like `variable="value"`
This answer uses bash's "process substitution" to take the results of the `jq` command, treat it like a file, and `source` it into the current shell. The variables will then be available to use.
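If you'd rather keep the parsing in Python (which the question already uses for pretty-printing), a rough equivalent is a one-liner that emits the assignments for `eval`; this is a sketch assuming the response always contains the `success` and `queueToken` keys shown above:
```
eval "$(
curl -s -X POST "https://www.toontownrewritten.com/api/login?format=json" \
     -d username="$USERNAME" -d password="$PASSWORD" |
python -c 'import sys, json
d = json.load(sys.stdin)
print("SUCCESS=\"%s\"" % d["success"])
print("queueToken=\"%s\"" % d["queueToken"])'
)"
echo "$SUCCESS"     # delayed
echo "$queueToken"  # 6bee9e85-343f-41c7-a4d3-156f901da615
```
As with the `source` approach above, only do this with a server you trust, since the output is executed as shell code.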
|
Here's an example of [Extract a JSON value from a BASH script](https://gist.github.com/cjus/1047794)
```
#!/bin/bash

function jsonval {
    temp=`echo $json | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w $prop`
    echo ${temp##*|}
}

json=`curl -s -X GET http://twitter.com/users/show/$1.json`
prop='profile_image_url'
picurl=`jsonval`
`curl -s -X GET $picurl -o $1.png`
```
>
> A bash script which demonstrates parsing a JSON string to extract a
> property value. The script contains a jsonval function which operates
> on two variables, json and prop. When the script is passed the name of
> a twitter user it attempts to download the user's profile picture.
>
>
>
| 6,256
|
13,774,443
|
I'm making a request in python to a web service which returns AMF. I don't know if it's AMF0 or AMF3 yet.
```
r = requests.post(url, data=data)
>>> r.text
u'\x00\x03...'
```
([Full data here](http://pastebin.com/sdZnU8Ds))
How can I take `r.text` and convert it to a python object or similar? I found [amfast](http://code.google.com/p/amfast/) but its `Decoder` class returns a `3.131513074181806e-294` assuming AMF0 and `None` for AMF3. (Both incorrect)
```
from amfast.decoder import Decoder
decoder = Decoder(amf3=False)
obj = decoder.decode(StringIO.StringIO(r.text))
```
|
2012/12/08
|
[
"https://Stackoverflow.com/questions/13774443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/246265/"
] |
**EDIT:** I had calculated some of my probabilities wrongly. Also I've now mentioned that we need to randomly pick 2 distinct inputs for the function f in order to guarantee that, if f is balanced, then we know the probabilities of seeing the various possible outcomes.
The fact that the prior probability of the function being constant is not known makes this question harder, because it means we can't directly calculate the probability of success for any algorithm. We will, however, be able to calculate *bounds on* this probability.
I propose the following probabilistic algorithm:
* Pick two distinct 4-bit values at random, and supply each to the function f.
* If 0,0 or 1,1 is seen, output "constant" with probability 2/3 and "balanced" with probability 1/3.
* Otherwise (if 0,1 or 1,0 is seen), always report "balanced".
Let's start by looking at something we can actually calculate: conditional probabilities.
1. **"What is P(correct|constant), namely the probability that our algorithm gives the correct answer given that f is constant?"** When f is constant, our algorithm reports the right answer 2/3 of the time.
2. **"What is P(correct|balanced), namely the probability that our algorithm gives the correct answer given that f is balanced?"** When f is balanced, the probability of seeing 0,1 or 1,0 is 2\*(8/16 \* 8/15) = 8/15, in which case the correct answer will definitely be output. In the remaining 7/15 of cases -- i.e. those in which 0,0 or 1,1 is seen -- the correct answer will be output 1/3 of the time, so the total proportion of correct outputs will be 8/15 \* 1 + 7/15 \* 1/3 = 31/45 = 2/3 + 1/45 ≈ 0.6889.
Now suppose that the prior probability of the function being constant is p. Then the probability that the algorithm gives the correct answer is
pCorrect(p) = p\*P(correct|constant) + (1-p)\*P(correct|balanced).
Given that 0 <= p <= 1, pCorrect(p) must be at least min(P(correct|constant), P(correct|balanced)), and at most max(P(correct|constant), P(correct|balanced)). The minimum of 2/3 and 31/45 is 2/3, **thus pCorrect is bounded from below at 2/3, for any prior probability of the function being constant.** (It might help to think of p as a "mixing lever" that controls how much of each term to include. If p = 0 or p = 1, then we effectively just have P(correct|balanced) or P(correct|constant), respectively, and for any in-between value of p, we will have an in-between total.)
|
Look at the probabilities for the different types of functions to return different results for two given values:
```
constant 0,0 50%
constant 1,1 50%
balanced 0,0 4/8 * 3/7 = 21.4%
balanced 0,1 4/8 * 4/7 = 28.6%
balanced 1,0 4/8 * 4/7 = 28.6%
balanced 1,1 4/8 * 3/7 = 21.4%
```
If the results are 0,0 or 1,1 there is a 70% chance that the function is constant, while for the results 0,1 and 1,0 there is a 100% chance that the function is balanced. So, for the cases that occur 71.4% of the time we are 70% certain, and for the cases that occur 28.6% of the time we are 100% certain. On average we are 78.6% certain.
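These percentages implicitly assume a 50/50 prior on constant vs. balanced (which the question leaves unspecified). Under that assumption, a quick check with exact fractions reproduces them:
```
from fractions import Fraction

p_const = Fraction(1, 2)                              # assumed 50/50 prior
p_same_const = Fraction(1)                            # constant: always 0,0 or 1,1
p_same_bal = 2 * Fraction(4, 8) * Fraction(3, 7)      # balanced: 0,0 or 1,1

p_same = p_const * p_same_const + (1 - p_const) * p_same_bal
p_const_given_same = p_const * p_same_const / p_same  # Bayes' rule

print(float(p_same))                                      # 0.714... -> "same" results
print(float(p_const_given_same))                          # 0.7      -> certainty when same
print(float(p_same * p_const_given_same + (1 - p_same)))  # 0.785... -> average certainty
```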
| 6,258
|
17,081,363
|
I'm trying to convert C++ code to python but I'm stuck
original C++ code
```
#include <cmath>
#include <iostream>

int main(void)
{
    int levels = 40;
    int xp_for_first_level = 1000;
    int xp_for_last_level = 1000000;
    double B = log((double)xp_for_last_level / xp_for_first_level) / (levels - 1);
    double A = (double)xp_for_first_level / (exp(B) - 1.0);
    for (int i = 1; i <= levels; i++)
    {
        int old_xp = round(A * exp(B * (i - 1)));
        int new_xp = round(A * exp(B * i));
        std::cout << i << " " << (new_xp - old_xp) << std::endl;
    }
}
}
```
python code
```
import math
from math import log
from math import exp
levels = 40
xp_for_first_level = 1000
xp_for_last_level = 1000000
B = log(xp_for_last_level / xp_for_first_level) / (levels - 1)
A = xp_for_first_level / (exp(B) - 1.0)
for i in range(1, levels):
    old_xp = round(A * exp(B * (i - 1)))
    new_xp = round(A * exp(B * i))
    print(i + " " + (new_xp - old_xp))
```
Any help is appreciated. I can't seem to get it to work completely; when I fix one bug I create another one.
|
2013/06/13
|
[
"https://Stackoverflow.com/questions/17081363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1934748/"
] |
Change the `print` line to:
```
print("%i %i" % (i, new_xp - old_xp))
```
Refer to this [list of allowed type conversion specifiers](http://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting) for more informations.
Or use the new [format](http://docs.python.org/3/library/functions.html#format) method.
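For example, the same line using `format` would look something like this:
```
print("{} {}".format(i, new_xp - old_xp))
```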
|
Depending on the version of python you are using, the cast to double in the C++ code
```
(double)xp_for_last_level / xp_for_first_level
```
might need to be taken into account in the python code. In python 3 you will get a float, in older python you can do
```
from __future__ import division
```
then `xp_for_last_level / xp_for_first_level` will give you a float.
See the [discussion here](https://stackoverflow.com/questions/1267869/how-can-i-force-division-to-be-floating-point-in-python)
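A small illustration of the difference (values match the question's constants):
```
from __future__ import division   # Python 2 only; a harmless no-op on Python 3

print(1000000 / 1000)    # 1000.0 -> true division, like the C++ cast to double
print(1000000 // 1000)   # 1000   -> floor division in both Python versions
```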
| 6,259
|
57,169,697
|
When I use the PIL.ImageTk library to load a png to my GUI and use logging to log some events, it creates some unwanted logs in DEBUG mode.
I have tried changing the `level` of `logging` to `INFO` or `WARNING` (or higher). But that does not help:
```
logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO)
```
For example, the following code will create a log file with some unwanted lines:
```
from PIL import ImageTk, Image
import logging
try:
    import tkinter as tk  # Python 3.x
except ImportError:
    import Tkinter as tk  # Python 2.x

class Example(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        for i in range(2):
            self.grid_rowconfigure(i, weight=1)
        self.grid_columnconfigure(0, weight=1)

        self.img = ImageTk.PhotoImage(Image.open('test.png'))
        logo = tk.Label(self, image = self.img)
        logo.grid(row=0, column=0, columnspan=2, sticky="nw", pady=5, padx=10)

        testLabel = tk.Label(self, width=8, text="This is a test")
        testLabel.grid(row=1, column=0, sticky='ew', padx=5, pady=5)

        logging.info("This is a test log...")

if __name__ == "__main__":
    logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.DEBUG)
    root = tk.Tk()
    Example(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
```
Here is a sample image `test.png`

This will create a log file with some unwanted lines like this:
```
07/23/2019 01:34:23 PM DEBUG: STREAM b'IHDR' 16 13
07/23/2019 01:34:23 PM DEBUG: STREAM b'IDAT' 41 6744
07/23/2019 01:34:23 PM INFO: This is a test log...
```
It should have been only:
```
07/23/2019 01:34:23 PM INFO: This is a test log...
```
If you remove the image from the GUI, the problem goes away. Is there any workaround for this?
EDIT: I apologize for not going through the [docs](https://docs.python.org/3/library/logging.html#logging.basicConfig) carefully. This was happening because the root module was created with DEBUG level when I first ran the script in Spyder with `level=DEBUG` and it was never changed by `basicConfig` subsequently when I changed the level to INFO. If I reload all the modules and libs (only by restarting the kernel in Spyder), the problem goes away, which means `level=INFO` would work perfectly as I want.
|
2019/07/23
|
[
"https://Stackoverflow.com/questions/57169697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2365866/"
] |
The problem is that a module has already created the root logger, and `basicConfig` now uses that logger but cannot change the level of an existing logger.
Doc: [basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig)
>
> This function does nothing if the root logger already has handlers configured for it.
>
>
>
You have to create your own logger (you can use `__name__` to make it unique) and then you can set the root level and the levels for the file and console handlers. In your own logger you will not see messages from other loggers.
```
if __name__ == "__main__":
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)  # root level

    # console
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)  # level only for console (if not collide with root level)
    logger.addHandler(ch)

    # file
    fh = logging.FileHandler('mylog.log')
    fh.setLevel(logging.DEBUG)  # level only for file (if not collide with root level)
    logger.addHandler(fh)

    root = tk.Tk()
    Example(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
```
Doc: [logging-advanced-tutorial](https://docs.python.org/3/howto/logging.html#logging-advanced-tutorial)
|
Instead Of Using:
```
...
level=logging.DEBUG
...
```
Use:
```
...
level=logging.INFO
...
```
And Your File Will Be:
>
> DD/MM/YYYY HH:MM:SS PM INFO: This is a test log...
>
>
>
| 6,261
|
60,414,356
|
I have a GUI application made using PySide2 and it some major modules it uses are OpenVino(2019), dlib, OpenCV-contrib(4.2.x) and Postgres(psycopg2) and I am trying to freeze the application using PyInstaller (--debug is True).
The program gets frozen without errors but during execution, I get the following error:
```
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
```
after which the application exits.
I have tried many suggestions provided in other stackoverflow questions/github issues but none of them have worked.
I have python version 3.7.6 but I have also tried with 3.6.8 (both a local installation and after creating a new venv in PyCharm). I have tried different versions of PyInstaller as well (it shows some other errors below 3.5). I have tried PyInstaller 3.6, both the develop branch and the master branch.
I have checked my PYTHONPATH and PYTHONHOME in env variables, they are pointing to python's location.
I have modified my specfile to include the necessary binaries, files, imports and folders. I would share it if needed. Also any other logs during build or execution.
I would like to know what I should do to solve this, wheather this issue is because of some component or is this a PyInstaller issue, and if so, should I raise it on github.
My os is windows 10.
|
2020/02/26
|
[
"https://Stackoverflow.com/questions/60414356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8243797/"
] |
You changed the Python version, so you have to give a new path according to that version.
Just remove all older versions as well as the current one, and reinstall a new Python, v3.8.1.
|
You need to include base\_library.zip in your application folder
| 6,264
|
41,923,890
|
I'm attempting to create a simple selection sort program in python without using any built in functions. My problem right now is my code is only sorting the first digit of the list. What's wrong?
Here's my sort
```
def selectionsort(list1):
    for x in range(len(list1)):
        tiniest = minimum(list1)
        swap(tiniest, x, list1)
    return(list1)
```
Here's the minimum and swap functions I'm using
```
def swap(index1, index2, list1):
    TheList = list1
    temp = TheList[index1]
    TheList[index1] = TheList[index2]
    TheList[index2] = temp
    return(TheList)

def minimum(list1):
    small = list1[0]
    for i in list1:
        if i < small:
            small = i
    return small
```
An example of output
List = [3,2,1,0]
Output = [0,2,1,3]
|
2017/01/29
|
[
"https://Stackoverflow.com/questions/41923890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6843896/"
] |
Some simplification will make it more readable/comprehensible:
```
def swap(lst, i1, i2):
    lst[i1], lst[i2] = lst[i2], lst[i1]  # easy-swapping by multi assignment

def minimum(lst, s):  # s: start index
    min_val, min_index = lst[s], s
    for i in range(s+1, len(lst)):
        if lst[i] < min_val:
            min_val, min_index = lst[i], i
    return min_index  # return index of minimum, not minimum itself

def selection_sort(lst):
    for i in range(len(lst)):
        # find min index starting from current and swap with current
        swap(lst, i, minimum(lst, i))
```
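Usage (the sort happens in place):
```
lst = [3, 2, 1, 0]
selection_sort(lst)
print(lst)   # [0, 1, 2, 3]
```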
|
It seems `minimum` returns the value of the smallest element in `list1`, but your `swap` expects an index instead. Try making `minimum` return the index instead of the value of the smallest element.
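A minimal sketch of that fix, which also restricts the search to the unsorted tail so already-placed elements are not picked again (reusing the question's `swap`):
```
def minimum(list1, start=0):
    small = start                          # track the index, not the value
    for i in range(start + 1, len(list1)):
        if list1[i] < list1[small]:
            small = i
    return small

def selectionsort(list1):
    for x in range(len(list1)):
        swap(minimum(list1, x), x, list1)
    return list1
```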
| 6,265
|
74,101,582
|
I am calculating concentrations from a file that has separate Date and Time columns. I set the date and time columns as indexes as they are not supposed to change. However when I print the new dataframe it only prints the "date" one time like this:
```
55...
Date Time
2020-12-30 10:37:04 0.000000 ...
10:37:07 0.000000 ...
10:37:10 0.000000 ...
10:37:13 0.000000 ...
10:37:16 0.000000 ...
```
What I need is for it to print the date for each row just like it is in the original dataframe. It should remain the same as the original dataframe date and time values. The original dataframe is:
```
Date Time Accum. Scatter...
2020-12-30 10:37:04 3 0.789...
2020-12-30 10:37:07 3 0.814...
2020-12-30 10:37:10 3 0.787...
2020-12-30 10:37:13 3 0.803...
2020-12-30 10:37:16 3 0.798...
2020-12-30 10:37:19 3 0.818...
2020-12-30 10:37:22 3 0.809...
```
The code I have is :
```
df = pd.read_csv('UHSAS 20201230c.txt',delimiter=r'\s+',engine = 'python')
df.set_index(['Date', 'Time'],inplace=True)
concentration = df.iloc[:,13:].div(df.Sample,axis=0)
print(concentration.to_string())
```
I know it seems simple but I am new to pandas.
Thank you in advance.
|
2022/10/17
|
[
"https://Stackoverflow.com/questions/74101582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19096358/"
] |
From the Pandas [to\_string() docs](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_string.html#pandas-dataframe-to-string).
>
> sparsify : bool, optional, default True
>
> Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.
>
>
>
---
```
print(concentration.to_string(sparsify=False))
```
---
```
import pandas as pd
d = {'a':[1,1,1,3,5,6],
'b':[1,2,3,4,5,6],
'c':[9,8,7,6,5,4]}
df = pd.DataFrame(d)
df = df.set_index(['a','b'])
>>> print(df.to_string())
c
a b
1 1 9
2 8
3 7
3 4 6
5 5 5
6 6 4
>>> print(df.to_string(sparsify=False))
c
a b
1 1 9
1 2 8
1 3 7
3 4 6
5 5 5
6 6 4
```
|
Remove `inplace=True` and instead call `.reset_index()` at the end.
Here is the code (first two lines):
```
df = pd.read_csv('csv2.txt',delimiter=r'\s+',engine = 'python')
df = df.set_index(['Date', 'Time']).reset_index()
# cannot do calculation step, as sample column is not present in the sample data
```
```
Date Time Accum. Scatter...
0 2020-12-30 10:37:04 3 0.789...
1 2020-12-30 10:37:07 3 0.814...
2 2020-12-30 10:37:10 3 0.787...
3 2020-12-30 10:37:13 3 0.803...
4 2020-12-30 10:37:16 3 0.798...
5 2020-12-30 10:37:19 3 0.818...
6 2020-12-30 10:37:22 3 0.809...
```
| 6,266
|
65,463,877
|
I've installed Spark and components locally and I'm able to execute PySpark code in Jupyter, iPython and via spark-submit - however receiving the following WARNING's:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/12/27 07:54:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
```
The .py file executes but should I be worried about these warnings? Don't want to start writing some code to later find that it doesn't execute down the line. FYI installed PySpark locally. Here's the code:
`test.txt`:
```
This is a test file
This is the second line - TEST
This is the third line
this IS THE fourth LINE - tEsT
```
`test.py`:
```py
import pyspark
sc = pyspark.SparkContext.getOrCreate()
# sc = pyspark.SparkContext(master='local[*]') # or 'local[2]' ?
lines = sc.textFile("test.txt")
llist = lines.collect()
for line in llist:
    print(line)
print("SparkContext version:\t", sc.version) # return SparkContext version
print("python version:\t", sc.pythonVer) # return python version
print("master URL:\t", sc.master) # master URL to connect to
print("path where spark is installed on worker nodes:\t", sc.sparkHome) # path where spark is installed on worker nodes
print("name of spark user running SparkContext:\t", sc.sparkUser()) # name of spark user running SparkContext
```
PATHs:
```sh
export SPARK_HOME=/Users/ayubk/spark-3.0.1-bin-hadoop3.2
export PATH=$SPARK_HOME:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_PYTHON=python3
```
bash terminal:
```sh
$ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py
```
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/12/27 08:00:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/12/27 08:00:01 INFO SparkContext: Running Spark version 3.0.1
20/12/27 08:00:01 INFO ResourceUtils: ==============================================================
20/12/27 08:00:01 INFO ResourceUtils: Resources for spark.driver:
20/12/27 08:00:01 INFO ResourceUtils: ==============================================================
20/12/27 08:00:01 INFO SparkContext: Submitted application: test.py
20/12/27 08:00:01 INFO SecurityManager: Changing view acls to: ayubk
20/12/27 08:00:01 INFO SecurityManager: Changing modify acls to: ayubk
20/12/27 08:00:01 INFO SecurityManager: Changing view acls groups to:
20/12/27 08:00:01 INFO SecurityManager: Changing modify acls groups to:
20/12/27 08:00:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ayubk); groups with view permissions: Set(); users with modify permissions: Set(ayubk); groups with modify permissions: Set()
20/12/27 08:00:02 INFO Utils: Successfully started service 'sparkDriver' on port 51254.
20/12/27 08:00:02 INFO SparkEnv: Registering MapOutputTracker
20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMaster
20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/12/27 08:00:02 INFO DiskBlockManager: Created local directory at /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/blockmgr-a99e3df1-6d15-4158-8e09-568910c2b045
20/12/27 08:00:02 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
20/12/27 08:00:02 INFO SparkEnv: Registering OutputCommitCoordinator
20/12/27 08:00:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/12/27 08:00:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.101:4040
20/12/27 08:00:02 INFO Executor: Starting executor ID driver on host 192.168.1.101
20/12/27 08:00:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51255.
20/12/27 08:00:02 INFO NettyBlockTransferService: Server created on 192.168.1.101:51255
20/12/27 08:00:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/12/27 08:00:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None)
20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.101:51255 with 434.4 MiB RAM, BlockManagerId(driver, 192.168.1.101, 51255, None)
20/12/27 08:00:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None)
20/12/27 08:00:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.101, 51255, None)
20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 175.8 KiB, free 434.2 MiB)
20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.1 KiB, free 434.2 MiB)
20/12/27 08:00:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.101:51255 (size: 27.1 KiB, free: 434.4 MiB)
20/12/27 08:00:03 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:0
20/12/27 08:00:04 INFO FileInputFormat: Total input files to process : 1
20/12/27 08:00:04 INFO SparkContext: Starting job: collect at /Users/ayubk/test.py:9
20/12/27 08:00:04 INFO DAGScheduler: Got job 0 (collect at /Users/ayubk/test.py:9) with 2 output partitions
20/12/27 08:00:04 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /Users/ayubk/test.py:9)
20/12/27 08:00:04 INFO DAGScheduler: Parents of final stage: List()
20/12/27 08:00:04 INFO DAGScheduler: Missing parents: List()
20/12/27 08:00:04 INFO DAGScheduler: Submitting ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0), which has no missing parents
20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KiB, free 434.2 MiB)
20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 434.2 MiB)
20/12/27 08:00:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.101:51255 (size: 2.3 KiB, free: 434.4 MiB)
20/12/27 08:00:04 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223
20/12/27 08:00:04 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0, 1))
20/12/27 08:00:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
20/12/27 08:00:04 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.101, executor driver, partition 0, PROCESS_LOCAL, 7367 bytes)
20/12/27 08:00:04 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.101, executor driver, partition 1, PROCESS_LOCAL, 7367 bytes)
20/12/27 08:00:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/12/27 08:00:04 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:52+52
20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:0+52
20/12/27 08:00:04 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 956 bytes result sent to driver
20/12/27 08:00:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1003 bytes result sent to driver
20/12/27 08:00:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 156 ms on 192.168.1.101 (executor driver) (1/2)
20/12/27 08:00:04 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 142 ms on 192.168.1.101 (executor driver) (2/2)
20/12/27 08:00:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/12/27 08:00:04 INFO DAGScheduler: ResultStage 0 (collect at /Users/ayubk/test.py:9) finished in 0.241 s
20/12/27 08:00:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
20/12/27 08:00:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
20/12/27 08:00:04 INFO DAGScheduler: Job 0 finished: collect at /Users/ayubk/test.py:9, took 0.296115 s
This is a test file
This is the second line - TEST
This is the third line
this IS THE fourth LINE - tEsT
SparkContext version: 3.0.1
python version: 3.7
master URL: local[*]
path where spark is installed on worker nodes: None
name of spark user running SparkContext: ayubk
20/12/27 08:00:04 INFO SparkContext: Invoking stop() from shutdown hook
20/12/27 08:00:04 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040
20/12/27 08:00:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/12/27 08:00:04 INFO MemoryStore: MemoryStore cleared
20/12/27 08:00:04 INFO BlockManager: BlockManager stopped
20/12/27 08:00:04 INFO BlockManagerMaster: BlockManagerMaster stopped
20/12/27 08:00:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/12/27 08:00:04 INFO SparkContext: Successfully stopped SparkContext
20/12/27 08:00:04 INFO ShutdownHookManager: Shutdown hook called
20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c
20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-76d186fb-cf42-4898-92db-050a73f9fcb7
20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c/pyspark-ee1fe6ab-a27f-4be6-b8d8-06594704da12
```
**Edit:**
Tried to install Java8:
```sh
brew update
brew tap adoptopenjdk/openjdk
brew search jdk
brew install --cask adoptopenjdk8
```
Although when typing this `java -version`, I'm getting this:
```
openjdk version "13" 2019-09-17
OpenJDK Runtime Environment (build 13+33)
OpenJDK 64-Bit Server VM (build 13+33, mixed mode, sharing)
```
|
2020/12/27
|
[
"https://Stackoverflow.com/questions/65463877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9372996/"
] |
Install Java 8 instead of Java 11, which is known to give this sort of warnings with Spark.
|
If you run your PySpark code often and you are tired (like me) of looking through all these warnings again and again before you see **your** output, bash/zsh process substitution comes to the rescue:
```
$ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py 2> >(tail -n +8 >&2) | cat
```
Here we redirect STDERR of our process to `tail`, it skips everything before line #8 and redirects the result back to STDERR. For more details see [this Stack Overflow answer](https://stackoverflow.com/a/52575087/3206908). The output is then piped to `cat` that helps to wait for both our process and `tail` to finish, see [this Stack Exchange answer](https://unix.stackexchange.com/a/458218/417629) for more details.
You can put it into shell script to avoid typing it each time you run `spark-submit`.
| 6,267
|
68,959,506
|
Lets say I have a CSV file that looks like this:
```
name,country,email
john,US,john@fake.com
brad,UK,brad@fake.com
James,US,james@fake.com
```
I want to search for any country that equals US and if it exists, then print the email address. How would I do this in python without using pandas?
|
2021/08/27
|
[
"https://Stackoverflow.com/questions/68959506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12260632/"
] |
You can do something like:
```
with open('file.csv', 'r') as f:
    f.readline()  # skip the header line
    for line in f:
        data = line.strip().split(',')
```
Then you can access the stuff inside of `data` to get what you need.
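For instance, to print the email of every US row (assuming the three-column layout from the question):
```
with open('file.csv', 'r') as f:
    f.readline()                      # skip the header row
    for line in f:
        name, country, email = line.strip().split(',')
        if country == 'US':
            print(email)
```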
|
So to read a CSV as a dataframe you need:
```
import pandas as pd
df = pd.read_csv("filename.csv")
```
Next, I will generate a dummy df and show you how to get the US users and then print their emails in a list:
```
df = pd.DataFrame({"Name":['jhon','brad','james'], "Country":['US','UK','US'],
                   "email":['john@fake.com','brad@fake.com','james@fake.com']})
US_USERS = df.loc[df['Country']=='US']
US_emails = US_USERS['email'].tolist()  # filter first, then take the emails
print(US_USERS)
print(US_emails)
```
you should get:
```
Name Country email
0 jhon US john@fake.com
2 james US james@fake.com
['john@fake.com', 'james@fake.com']
```
| 6,270
|
64,167,192
|
I am a beginner programmer in python and when I run this code
```
from PIL import Image
im = Image.open(r'C:\\images\\imagetest.png')
width, height = im.size
print(width, height)
im.show()
```
I get this error:
```
im = Image.open(r'C:\\images\\imagetest.png')
File "C:\Users\danie\AppData\Local\Programs\Python\Python38\lib\site-packages\PIL\Image.py", line 2878, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\images\\\\imagetest.png'
```
PS C:\Users\danie\vscode projects>
|
2020/10/02
|
[
"https://Stackoverflow.com/questions/64167192",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14378032/"
] |
Raw strings (`r'...'`) don't need their backslashes escaped.
Remove the `r` before the string literal, or keep it and remove the doubled backslashes.
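For example, both of these produce the same valid path:
```
path = r'C:\images\imagetest.png'     # raw string, single backslashes
path = 'C:\\images\\imagetest.png'    # plain string, escaped backslashes
# r'C:\\images\\imagetest.png' keeps BOTH backslashes literally,
# which is why the file is not found.
```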
|
The path of the image is not correct. You have to write it like this:
```
im = Image.open(r'C:\images\imagetest.png')
```
| 6,271
|
44,517,641
|
I want to connect to my database from Python shell
```
import MySQLdb
db = MySQLdb.connect(host="localhost",user="milenko",passwd="********",db="classicmodels")
```
But
```
File "/home/milenko/anaconda3/lib/python3.6/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'milenko'@'localhost' (using password: YES)")
```
I have created user
```
CREATE USER 'milenko'@'localhost' IDENTIFIED BY '8888888';
```
but the problem is still there.
Databases
```
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| ap1 |
| classicmodels |
| mysql |
| performance_schema |
| sys |
+--------------------+
6 rows in set (0.00 sec)
```
What does this mean? How can I resolve this problem?
|
2017/06/13
|
[
"https://Stackoverflow.com/questions/44517641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8006605/"
] |
This generally means you do not have permission to access the server from that particular machine. To fix it, either create the user 'milenko'@'localhost' or 'milenko'@'%' using your root server user,
OR
grant your user privileges on that particular db.
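For example, the grant could be issued from a connection with GRANT rights (the root password here is a placeholder):
```
import MySQLdb

# Connect as a user that can grant privileges, e.g. root.
admin = MySQLdb.connect(host="localhost", user="root", passwd="root_password")
cur = admin.cursor()
cur.execute("GRANT ALL PRIVILEGES ON classicmodels.* TO 'milenko'@'localhost'")
cur.execute("FLUSH PRIVILEGES")
admin.close()
```
After that, the original `MySQLdb.connect(...)` call in the question should succeed.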
|
Make sure your local server (e.g WampServer) is turned on.
| 6,273
|
56,576,470
|
I am confused about the runtime of the binary operator of the set in Python.
e.g. -
`set1 | set2`: does it take linear time, like `set1 - set2`, or quadratic time because each element in set1 has to be checked against each element of set2 (or vice-versa)?
I went through some websites but I am not able to find any clear view on this.
ref: <https://www.geeksforgeeks.org/sets-in-python/>
|
2019/06/13
|
[
"https://Stackoverflow.com/questions/56576470",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9072927/"
] |
I had the same problem. The following commands worked for me:
After the `./gradlew bundleRelease` command we get an *.aab* version of our app. To get an APK, you should run the release variant of the app on a device with the command below.
* Make sure you have connected an android device
* For the production ready app, firstly you have to remove the previous app from the device
Run this command in `your-project/`:
```
react-native run-android --variant=release
```
Then APK can be found in `android/app/build/outputs/apk/release`
Hope this helps
|
step - 1) ./gradlew bundleRelease
step - 2) react-native run-android --variant=release
Make sure you have connected an android device
For the production-ready app, firstly you have to remove the previous app from the device
| 6,274
|
31,977,245
|
Let's say I have a web bot written in python that sends data via POST request to a web site. The data is pulled from a text file line by line and passed into an array. Currently, I'm testing each element in the array through a simple for-loop. How can I effectively implement multi-threading to iterate through the data more quickly? Let's say the text file is fairly large. Would attaching a thread to each request be smart? What do you think the best approach to this would be?
```
with open("c:\file.txt") as file:
dataArr = file.read().splitlines()
dataLen = len(open("c:\file.txt").readlines())-1
def test(data):
#This next part is pseudo code
result = testData('www.example.com', data)
if result == 'whatever':
print 'success'
for i in range(0, dataLen):
test(dataArr[i])
```
I was thinking of something along the lines of this, but I feel it would cause issues depending on the size of the text file. I know there is software that allows the end-user to specify the number of threads when working with large amounts of data. I'm not entirely sure how that works, but that's something I'd like to implement.
```
import threading
with open("c:\file.txt") as file:
dataArr = file.read().splitlines()
dataLen = len(open("c:\file.txt").readlines())-1
def test(data):
#This next part is pseudo code
result = testData('www.example.com', data)
if result == 'whatever':
print 'success'
jobs = []
for x in range(0, dataLen):
thread = threading.Thread(target=test, args=(dataArr[x]))
jobs.append(thread)
for j in jobs:
j.start()
for j in jobs:
j.join()
```
|
2015/08/12
|
[
"https://Stackoverflow.com/questions/31977245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2313602/"
] |
This sounds like a recipe for `multiprocessing.Pool`
See here: <https://docs.python.org/2/library/multiprocessing.html#introduction>
```
from multiprocessing import Pool

def test(num):
    if num%2 == 0:
        return True
    else:
        return False

if __name__ == "__main__":
    list_of_datas_to_test = [0, 1, 2, 3, 4, 5, 6, 7, 8]
    p = Pool(4)  # create 4 processes to do our work
    print(p.map(test, list_of_datas_to_test))  # distribute our work
```
Output looks like:
```
[True, False, True, False, True, False, True, False, True]
```
|
Threads are slow in python because of the [Global Interpreter Lock](https://wiki.python.org/moin/GlobalInterpreterLock). You should consider using multiple processes with the Python `multiprocessing` module instead of threads. Using multiple processes can increase the "ramp up" time of your code, as spawning a real process takes more time than a light thread, but due to the GIL, `threading` won't do what you're after.
[Here](http://pymotw.com/2/multiprocessing/basics.html) and [here](http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html) are a couple of basic resources on using the `multiprocessing` module. Here's an example from the second link:
```
import multiprocessing as mp
import random
import string

# Define an output queue
output = mp.Queue()

# define an example function
def rand_string(length, output):
    """ Generates a random string of numbers, lower- and uppercase chars. """
    rand_str = ''.join(random.choice(
                           string.ascii_lowercase
                           + string.ascii_uppercase
                           + string.digits)
                       for i in range(length))
    output.put(rand_str)

# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, output)) for x in range(4)]

# Run processes
for p in processes:
    p.start()

# Exit the completed processes
for p in processes:
    p.join()

# Get process results from the output queue
results = [output.get() for p in processes]

print(results)
```
| 6,284
|
31,458,813
|
I get a TemplateNotFound after I installed django-postman and django-messages. I obviously installed them separately - first django-postman, and then django-messages. This is so simple and yet I've spent hours trying to resolve this.
I'm using Django 1.8, a fresh base install using pip. I then installed the two above packages. The TEMPLATES portion of my settings.py file is as follows:
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
os.path.join(BASE_DIR, 'templates'),
#os.path.join(BASE_DIR, 'templates/django_messages'),
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```
Within my INSTALLED\_APPS tuple, I've also installed the above packages as well.
Here's my addition to urls.py:
```
url(r'^messages/', include('django_messages.urls')),
```
No other changes were made to the system and yet when I go to /messages I get the following error message:
```
TemplateDoesNotExist at /messages/inbox/
django_messages/inbox.html
Request Method: GET
Request URL: http://localhost:8000/messages/inbox/
Django Version: 1.8.3
Exception Type: TemplateDoesNotExist
Exception Value:
django_messages/inbox.html
Exception Location: /projects/.virtualenvs/blatter/lib/python2.7/site-packages/django/template/loader.py in render_to_string, line 138
Python Executable: /projects/.virtualenvs/blatter/bin/python
Python Version: 2.7.6
```
|
2015/07/16
|
[
"https://Stackoverflow.com/questions/31458813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4431105/"
] |
The issue is because it extends from the site's base.html. It is also mentioned in postman documentation :- <https://django-postman.readthedocs.org/en/latest/quickstart.html#templates>
```
The postman/base.html template extends a base.html site template, in which some blocks are expected:
title: in <html><head><title>, at least for a part of the entire title string
extrahead: in <html><head>, to put some <script> and <link> elements
content: in <html><body>, to put the page contents
postman_menu: in <html><body>, to put a navigation menu
```
A possible solution can be found here :- [django-postman extends a base.html that does not exist](https://stackoverflow.com/questions/12832891/django-postman-extends-a-base-html-that-does-not-exist)
|
The problem was resolved for django-messages after reviewing a called template and changing the extends/inheritance parameter.
The file that was being called, inbox.html, extended "django_messages/base.html", which worked fine. That "base.html" then extended "base.html", a template with the same name as itself, so there was circular inheritance causing the error. This was the default and wasn't added by me. When I removed the extends declaration from "base.html" so that it no longer inherited from itself, django-messages worked.
Perhaps Django 1.8 changed some logic w/templates? Either way, issue resolved.
| 6,285
|
44,112,399
|
I run a Python Discord bot. I import some modules and have some events. Now and then, it seems like the script gets killed for some unknown reason. Maybe because of an error/exception or some connection issue maybe? I'm no Python expert but I managed to get my bot working pretty well, I just don't exactly understand how it works under the hood (since the program does nothing besides waiting for events). Either way, I'd like it to restart automatically after it stops.
I use Windows 10 and just start my program either by double-clicking on it or through pythonw.exe if I don't want the window. What would be the best approach to verify if my program is still running (it doesn't have to be instant, the verification could be done every X minutes)? I thought of using a batch file or another Python script but I have no idea how to do such thing.
Thanks for your help.
|
2017/05/22
|
[
"https://Stackoverflow.com/questions/44112399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1689179/"
] |
You can write another `python code (B)` to call your original `python code (A)` using `Popen` from `subprocess`. In `python code (B)`, make the program `wait` for `python code (A)` to finish. If `'A'` exits with an `error code`, rerun it from `B`.
I provide an example for python\_code\_B.py
```
import subprocess

filename = 'my_python_code_A.py'
while True:
    """However, you should be careful with the '.wait()'"""
    p = subprocess.Popen('python ' + filename, shell=True).wait()
    """If there is an error from running 'my_python_code_A.py',
    the while loop will be repeated,
    otherwise the program will break from the loop"""
    if p != 0:
        continue
    else:
        break
```
This will generally work well on Unix / Windows systems. Tested on Win7/10 with latest code update.
Also, please run `python_code_B.py` from a 'real terminal' which means running from a command prompt or terminal, and not in IDLE.
|
For the problem you stated, I prefer to use a python [subprocess](https://docs.python.org/3.4/library/subprocess.html) call to rerun the python script, or to use [try blocks](https://docs.python.org/3.4/tutorial/errors.html).
This might be helpful to you.
check this sample try block code:
```
try:
    import xyz  # consider it does not exist or raises an error
except:
    pass  # go to next line of code to execute
```
| 6,286
|
55,290,527
|
How do I write a python script to solve this?
```
l=[1,2,3] Length A
X=[one,two,three,.... ] length A
```
How do I print/write this to a file?
output should be
```
1=one 2=two 3=three ....
```
I'm trying to use something like the line below, but since length A is variable this won't work:
```
logfile.write('%d=%s %d=%s %d=%s %d=%s \n' % (l[1], X[1],l[2,X[3],l[4],X[4]))
```
|
2019/03/21
|
[
"https://Stackoverflow.com/questions/55290527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7038853/"
] |
Use `zip`:
```
l = [1, 2, 3]
X = ['one', 'two', 'three']
' '.join('{}={}'.format(first, second) for first, second in zip(l, X))
```
Output:
```
'1=one 2=two 3=three'
```
|
You could also use an fString to make this more concise:
```
numbers = [1, 2, 3]
strings = ['one', 'two', 'three']
print(' '.join(f'{n}={s}' for n,s in zip(numbers,strings)))
```
| 6,287
|
3,929,096
|
A python program I created is I/O bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100 KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program.
Due to this being the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):
* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s
It would appear that the RAMdisk is slower than my HDD.
What could be causing this?
Are there any alternatives to using a RAMdisk in order to get faster file IO?
|
2010/10/14
|
[
"https://Stackoverflow.com/questions/3929096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/227567/"
] |
Your operating system is almost certainly buffering/caching disk writes already. It's not surprising the RAM disk is so close in performance.
Without knowing exactly what you're writing or how, we can only offer general suggestions. Some ideas:
* If you have 2 GB RAM you probably have a decent processor, so you could write this data to a filesystem that has compression. That would trade I/O operations for CPU time, assuming your data is amenable to that.
* If you're doing many small writes, combine them to write larger pieces at once; see the sketch after this list. (Can we see the source code?)
* Are you removing the 100 KB file after use? If you don't need it, then delete it. Otherwise the OS may be forced to flush it to disk.
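A sketch of the combined-write suggestion, with `records` standing in for the ~100 KB the loop generates:
```
records = ['%d\n' % i for i in range(10000)]   # hypothetical generated data

# One large write instead of thousands of small ones:
with open('tempfile.dat', 'w') as f:
    f.write(''.join(records))
```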
|
I know that Windows is very aggressive about caching disk data in RAM, and 100K would fit easily. The writes are going directly to cache and then perhaps being written to disk via a non-blocking write, which allows the program to continue. The RAM disk probably wouldn't support non-blocking operations because it expects those operations to be quick and not worth the bother.
By reducing the amount of memory available to programs and caching, you're going to increase the amount of disk I/O for paging even if only slightly.
This is all speculation on my part, since I'm not familiar with the kernel or drivers. I also speculate that Linux would operate similarly.
| 6,289
|
4,149,274
|
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes.
Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments.
Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion.
```
C:\Users\Chevee>pydoc file
pydoc - the Python documentation tool
pydoc.py <name> ...
Show text documentation on something. <name> may be the name of a
Python keyword, topic, function, module, or package, or a dotted
reference to a class or function within a module or module in a
package. If <name> contains a '\', it is used as the path to a
Python source file to document. If name is 'keywords', 'topics',
or 'modules', a listing of these things is displayed.
pydoc.py -k <keyword>
Search for a keyword in the synopsis lines of all available modules.
pydoc.py -p <port>
Start an HTTP server on the given port on the local machine.
pydoc.py -g
Pop up a graphical interface for finding and serving documentation.
pydoc.py -w <name> ...
Write out the HTML documentation for a module to a file in the current
directory. If <name> contains a '\', it is treated as a filename; if
it names a directory, documentation is written for all the contents.
C:\Users\Chevee>
```
EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd.
EDIT 2: More new information. In cmd...
```
c:\>python c:/python26/lib/pydoc.py file
```
...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
|
2010/11/10
|
[
"https://Stackoverflow.com/questions/4149274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/502486/"
] |
When you type the name of a file at the windows command prompt, cmd can check the windows registry for the default file association, and use that program to open it. So if the Inkscape installer associated .py files with its own version of python, cmd might preferentially run that and ignore the PATH entirely. See [this question](https://stackoverflow.com/questions/2199739/pydoc-fails-under-windows-and-python-2-6-4).
|
Based on your second edit, you may have more than one copy of pydoc.py in your path, with the 'wrong' one first such that when it starts up it doesn't have the correct environment in which to execute.
| 6,299
|
68,272,509
|
I'm a Python beginner, trying to improve my skills.
Recently I read the source code of some python packages, and found this code.
```
while True:
    x = string_variable != -1
    if x:
        time.sleep(1)
    else:
        break
```
So, what does the second line mean?
You can find the original code on line 149 of `__init__.py` from [here](https://github.com/linouk23/youtube_uploader_selenium).
The original code is below.
```
status_container = self.browser.find(By.XPATH,
                                     Constant.STATUS_CONTAINER)
while True:
    in_process = status_container.text.find(Constant.UPLOADED) != -1
    if in_process:
        time.sleep(Constant.USER_WAITING_TIME)
    else:
        break
```
|
2021/07/06
|
[
"https://Stackoverflow.com/questions/68272509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15340967/"
] |
Just to add here, the actual code you are looking at is not a simple string variable test like your example:
```
in_process = status_container.text.find(Constant.UPLOADED) != -1
```
is effectively doing this:
```
string_var.find("foo") != -1
```
which is looking for an occurrence of "foo" inside the string\_var. The string.find() function returns the index of the substring inside the string. See <https://www.w3schools.com/python/ref_string_find.asp>
So `if string_var.find("foo") != -1:` is basically saying "if foo is in string\_var".
A more common (and readable!) way to do this in python is simply:
```
if "foo" in string_var:
```
|
This line uses two operators:
```
x = string_variable != -1
```
`=` is the assignment operator and `!=` is a comparison operator meaning `NOT EQUAL`,
so `x` will hold a boolean value based on the evaluation of the comparison `string_variable != -1`.
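A quick demonstration:
```
x = 'hello'.find('z') != -1
print(x)   # False: 'z' is not in the string, so find() returned -1
```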
| 6,306
|
33,107,224
|
I have 5 astronomy images in python, each for a different wavelength, so they are of different angular resolutions and grid sizes, and in order to compare them so that I can create temperature maps I need them to be the same angular resolution and grid size.
I have managed to Gaussian convolve each image to the same angular resolution as the worst one, however I am having trouble finding a method to re-grid each image in python and wondered if anyone knew how to go about doing this?
I wish to re-grid the images to the same grid size as the worst quality image, so I can use that as a reference image if required. Thank you
|
2015/10/13
|
[
"https://Stackoverflow.com/questions/33107224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4596311/"
] |
If the image headers have the correct World Coordinate System data, you can use the reproject package to resample the images:
<http://reproject.readthedocs.org/en/stable/>
|
You can use FITS\_tools (<https://pypi.python.org/pypi/FITS_tools>; it can also be installed via `$ pip install FITS_tools` in the anaconda distribution of python).
Both images must have WCS in the header information.
```
import FITS_tools
to_be_projected = 'fits_file_to_be_projected.fits'
reference_fits = 'fits_file_serving_as_reference.fits'
im1,im2 = FITS_tools.match_fits(to_be_projected,reference_fits)
```
It returns: Two images projected into the same space, and optionally the header used to project them. Once installed, you can do help(FITS\_tools.match\_fits) for more information.
| 6,307
|
9,577,252
|
In ipython, if I press 'esc' followed by 'enter' (and possibly other characters?), readline breaks. I can no longer search through command history using the 'up' key, and some commands (e.g., control-K) fail.
Is there a way to reset readline within an ipython session? What is going on when I press these keys?
|
2012/03/06
|
[
"https://Stackoverflow.com/questions/9577252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/814354/"
] |
The poster's suggested answer doesn't seem to work for me in iPython 0.12+. I can run:
```
get_ipython().init_readline()
```
but that doesn't seem to help.
However I noticed that I sometimes see similar problems in my iPython sessions. It appears that I had inadvertently switched from the default Emacs readline editing mode to vi-mode (vim-mode). According to the [readline docs](https://www.gnu.org/software/bash/manual/html_node/Readline-vi-Mode.html), to switch between them you are supposed to be able to use the M-C-j key combination, but that only seems to allow me to switch to vi-mode. To switch back to Emacs mode one can use C-e, but that didn't appear to work for me; I had to instead do M-C-e. On my Mac (where `ESC` is used as the 'Meta' key) it is: `ESC`+`CTRL`+`e`
The contents of my ~/.inputrc is as follows:
```
set meta-flag on
set input-meta on
set convert-meta off
set output-meta on
```
|
Got impatient. Solution is:
```
IPython.InteractiveShell.init_readline(get_ipython())
```
Looks like this might be a known bug too: <http://www.catonmat.net/blog/bash-vi-editing-mode-cheat-sheet/>
| 6,308
|
40,327,136
|
[This is how I setted up my python3 envirnoment on Ubuntu 16.04.](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04)
And I installed TensorFlow 0.8 with Virtualenv installation.
[I wanted to start the TensorFlow tutorial MNIST For ML Beginners, at the "The MNIST Data" part.](https://www.tensorflow.org/versions/r0.8/tutorials/mnist/beginners/index.html#the-mnist-data)
This is the way I did it
```
$ cd environments
~/environments$ pyvenv my_env
~/environments$ ls my_env
bin include lib lib64 pyvenv.cfg share
~/environments$ source my_env/bin/activate
(my_env) :~/environments$ nano input_data.py
(my_env) :~/environments$ python input_data.py
Traceback (most recent call last):
File "input_data.py", line 10, in <module>
import numpy
ImportError: No module named 'numpy'
```
The input\_data.py is from Github tensorflow/tensorflow/examples/tutorials/mnist/input\_data.py.
So I installed numpy with
```
$ sudo apt-get install python3-numpy
```
But I still got the same output.
Maybe there's something wrong with my installation, or I'm using Python the wrong way.
I have been stuck for days and need your help.
I have since upgraded TensorFlow to version 0.11.
I will try again later.
|
2016/10/30
|
[
"https://Stackoverflow.com/questions/40327136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7090901/"
] |
Sorry for the very late update; I have upgraded TensorFlow to the latest version.
I think I made some mistakes back then.
Since I installed TensorFlow via the Virtualenv installation, I should load TensorFlow and other packages inside the Virtualenv environment:
```
source ~/tensorflow/bin/activate
```
Then run an existing Python script with
```
python3 filename.py
```
or directly type
```
python3
```
even
```
ipython3
```
to write your own code, and test with
```
import tensorflow as tf
```
It shouldn't give any error messages, unless you are using the GPU-supported version; in that case add the CUDA paths by typing
```
sudo nano ~/.bash_profile
```
open the bash profile and add
```
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda-8.0/bin:$PATH
```
inside the file (be aware these are the default paths for your CUDA install), then run the last step:
```
source ~/.bash_profile
```
Ubuntu 16.04 is really annoying sometimes.
And thanks for your responses.
|
Try installing it with pip using
```
sudo apt-get install python-pip
sudo pip install numpy==1.11.1
```
or, in your case, use pip3 instead of pip for Python 3:
```
sudo apt-get install python3-pip
sudo pip3 install numpy==1.11.1
```
This should help.
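To verify that numpy is importable by the interpreter you actually run, a quick sanity check is:
```
python3 -c "import numpy; print(numpy.__version__)"
```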
| 6,309
|
12,197,806
|
I have this script which runs well in Python 2.7 but not in 2.6:
```
import json
import urllib

def main():
tempfile = '/tmp/tempfile'
stats_URI="http://x.x.x.x/stats.json"
hits_ = 0
advances_ = 0
requests = 0
failed = 0
as_data = urllib.urlopen(stats_URI).read()
data = json.loads(as_data)
for x, y in data['hits-seen'].iteritems():
hits_ += y
# Total of failed vtop requests
for x, y in data['vals-failed'].iteritems():
failed += y
requests = data['requests']
advances_ = requests - failed
f = open(tempfile,'w')
line1 = "hits: " + str(hits_) + "\n"
line2 = "advances: " + str(advances_) + "\n"
f.write(line1)
f.write(line2)
f.close()
return 0
```
The error message that I am getting says:
```
Traceback (most recent call last): File "./json.test.py", line 14, in <module>
main() File "./json.test.py", line 8, in main
as_data = urllib.urlopen(stats_URI).read() File "/usr/lib/python2.6/urllib.py", line 86, in urlopen
return opener.open(url) File "/usr/lib/python2.6/urllib.py", line 207, in open
return getattr(self, name)(url) File "/usr/lib/python2.6/urllib.py", line 346, in open_http
h.endheaders() File "/usr/lib/python2.6/httplib.py", line 908, in endheaders
self._send_output() File "/usr/lib/python2.6/httplib.py", line 780, in _send_output
self.send(msg) File "/usr/lib/python2.6/httplib.py", line 739, in send
self.connect() File "/usr/lib/python2.6/httplib.py", line 720, in connect
self.timeout) File "/usr/lib/python2.6/socket.py", line 561, in create_connection
raise error, msg IOError: [Errno socket error] [Errno 110] Connection timed out
```
What am I missing here? Searches on the internet are not helping much :-(
|
2012/08/30
|
[
"https://Stackoverflow.com/questions/12197806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1579819/"
] |
When creating a `typedef` alias for a function pointer, the alias is in the function *name* position, so use:
```
typedef unsigned (__stdcall *task )(void *);
```
`task` is now a type alias for: *pointer to a function taking a `void` pointer and returning `unsigned`*.
|
Since hmjd's answer has been deleted...
In C++11 a whole new *alias* syntax has been developed, to make such things much easier:
```
using task = unsigned (__stdcall*)(void*);
```
is equivalent to the `typedef unsigned (__stdcall* task)(void*);` (note the position of the alias in the middle of the function signature...).
It can also be used for templates:
```
template <typename T>
using Vec = std::vector<T>;
int main() {
Vec<int> x;
}
```
This syntax is much nicer than the old one (and for templates, it actually makes template aliasing possible), but it does require a fairly recent compiler.
| 6,311
|
55,249,689
|
I am using this Docker image (FROM lambci/lambda:python3.6) and I need to install a package from a private repository. The problem is that the image does not have git, and I cannot install git using apt-get or apk because the image is not Debian- or Alpine-based.
Is there any possible way to get git installed? Or is there any other, better method I could use to install this private repository package?
|
2019/03/19
|
[
"https://Stackoverflow.com/questions/55249689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6763224/"
] |
add this to makefile:
```
# makefile
git clone REPO
cd REPO_DIR; python setup.py bdist_wheel
cp REPO_DIR/dist/* .
rm -rf REPO_DIR/
```
add this to dockerfile:
```
# dockerfile
RUN pip install REPO*.whl
```
and then the package is successfully installed within docker
|
Can you `pip install` the git repo next to your source code and mount it together with your code into the container?
```
cd WORKING_DIRECTORY
pip install --target ./ GIT_URL
```
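For example (a sketch assuming the usual lambci convention of mounting your code at `/var/task`; `handler.my_handler` is a placeholder for your actual module and function):
```
cd WORKING_DIRECTORY
pip install --target ./ GIT_URL
# Mount the directory (your code plus the vendored package) into the container
docker run --rm -v "$PWD":/var/task lambci/lambda:python3.6 handler.my_handler
```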
| 6,314
|
36,712,967
|
I want to add search box to a single select drop down option.
**Code:**
```
<select id="widget_for" name="{{widget_for}}">
<option value="">select</option>
{% for key, value in dr.items %}
<input placeholder="This ">
<option value="{% firstof value.id key %}" {% if key in selected_value %}selected{% endif %}>{% firstof value.name value %}</option>
{% endfor %}
</select>
```
Adding input tags as above does not work.
I have tried using the HTML5 datalist and it works, but I want some other option because the HTML5 datalist does not support a scrollable list in Chrome.
Can anyone suggest any other search-box options?
They should be compatible with the Django/Python framework.
|
2016/04/19
|
[
"https://Stackoverflow.com/questions/36712967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5324868/"
] |
Simply use the select2 plugin to implement this feature.
Plugin link: [Select2](https://select2.github.io/examples.html)
[](https://i.stack.imgur.com/6ora6.png)
|
You can use **Semantic UI** to implement this feature:
<https://semantic-ui.com/introduction/new.html>
| 6,322
|
53,174,590
|
I'm currently making my own model, and everything works fine with the tensorflow-for-poets-2 demo. I trained multiple pictures in different folders, and the app recognized them.
Now I want to display a bounding box around the object. I found an example [here](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/examples/android).
My problem is that my app returns the following error when I add my own tflite model:
```
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 3495
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 6] and a Java object with shape [1, 10, 4].
at org.tensorflow.lite.Tensor.throwExceptionIfTypeIsIncompatible(Tensor.java:240)
at org.tensorflow.lite.Tensor.copyTo(Tensor.java:116)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:157)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:229)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:194)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:247)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
```
How I trained them:
```
python3 scripts/retrain.py \
--bottleneck_dir=bottlenecks \
--how_many_training_steps=500 \
--model_dir=inception \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--image_dir=tf_files/ \
--architecture mobilenet_1.0_224
```
Generating:
```
toco \
--input_format=TENSORFLOW_GRAPHDEF \
--input_file=tf_files/retrained_graph.pb \
--output_format=TFLITE \
--output_file=tf_files/optimized_graph.lite \
--inference_type=FLOAT \
--inference_input_type=FLOAT \
--input_arrays=input \
--output_arrays=final_result \
--input_shapes=1,224,224,3\
--mean_values=128 \
--std_values=128 \
--default_ranges_min=0 \
--default_ranges_max=6
```
DetectorActivity.java
```
// Configuration values for the prepackaged SSD model.
private static final int TF_OD_API_INPUT_SIZE = 224; // 300
private static final boolean TF_OD_API_IS_QUANTIZED = false; // true
private static final String TF_OD_API_MODEL_FILE = "optimized_graph.lite"; //detect.tflite
private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/retrained_labels.txt";
```
|
2018/11/06
|
[
"https://Stackoverflow.com/questions/53174590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8531947/"
] |
You need to set **tinymce.suffix = '.min';** before you init TinyMCE
```
tinymce.suffix = '.min';
tinymce.baseURL = '/js/tinymce';
tinymce.init({
selector: '#editor',
menubar: false,
plugins: 'code'
});
```
|
TinyMCE should work out whether to load minified (or non-minified) files based on which TinyMCE file you load (`tinymce.js` or `tinymce.min.js`).
I'm not sure what's happening in your case, but that logic appears to be failing.
If you grab the DEV package from <https://www.tiny.cloud/get-tiny/self-hosted/> it would come with both the minified and non-minified versions of each file so the editor would find what it needs at runtime.
***Note***: The code in the minified and non-minified files functions just the same, so from a functionality perspective it does not really matter whether the `theme.min.js` or `theme.js` code is loaded. It only adds ~200K to the file size, so even that is immaterial.
| 6,332
|
23,007,203
|
Write a program that prompts the user for the name of a file, opens the file for reading,
and then outputs how many times each character of the alphabet appears in the file.
```
#!/usr/local/bin/python
name=raw_input("Enter file name: ")
input_file=open(name,"r")
list=input_file.readlines()
count = 0
counter = 0
for i in range(65,91): #from A to Z
for j in range(0,len(list)):
if(i == ord(j)): #problem: ord() takes j as an int here, I want it to get the char at j
count = count + 1
print i, count
count = 0
for k in range(97,123): #from a to z
for l in range(0,len(list)):
if(k == ord(l)): #problem: ord() takes l as an int here, I want it to get the char at l
counter = counter + 1
print k, counter
count = 0
```
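For reference, a minimal sketch of one way to do the counting without indexing into `ord()`, matching the Python 2 style of the snippet above (this uses `collections.Counter` instead of the manual loops):
```
from collections import Counter
import string

name = raw_input("Enter file name: ")
with open(name, "r") as input_file:
    counts = Counter(input_file.read())  # counts every character in the file

for letter in string.ascii_uppercase + string.ascii_lowercase:  # A-Z then a-z
    print letter, counts[letter]  # Counter returns 0 for missing characters
```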
|
2014/04/11
|
[
"https://Stackoverflow.com/questions/23007203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1836292/"
] |
Create style like this-
```
<style name="Theme.MyAppTheme" parent="Theme.Sherlock.Light">
<item name="android:actionBarStyle">@style/Theme.MyAppTheme.ActionBar</item>
<item name="actionBarStyle">@style/Theme.MyAppTheme.ActionBar</item>
</style>
<style name="Theme.MyAppTheme.ActionBar" parent="Widget.Sherlock.ActionBar">
<item name="android:background">#222222</item>
<item name = "background">#222222</item>
<item name="android:height">64dip</item>
<item name="height">64dip</item>
<item name="android:titleTextStyle">@style/Theme.MyAppTheme.ActionBar.TitleTextStyle</item>
<item name="titleTextStyle">@style/Theme.MyAppTheme.ActionBar.TitleTextStyle</item>
</style>
<style name="Theme.MyAppTheme.ActionBar.TitleTextStyle" parent="TextAppearance.Sherlock.Widget.ActionBar.Title">
<item name="android:textColor">#fff</item>
<item name="textColor">#fff</item>
<item name="android:textStyle">bold</item>
<item name="textStyle">bold</item>
<item name="android:textSize">32sp</item>
<item name="textSize">32sp</item>
</style>
```
Change height according to your requirement.
And In manifest use `android:theme="@style/Theme.MyAppTheme"`
|
Try to use this one:
```
<style name="Theme.white_style" parent="@android:style/Theme.Holo.Light.DarkActionBar">
<item name="android:actionBarSize">55dp</item>
<item name="actionBarSize">55dp</item>
</style>
```
| 6,334
|
12,735,852
|
The question is an attempt to get exact instructions on how to do that. There were a few attempts before, which don't seem to be full solutions:
[solution to move the file inside the package](https://stackoverflow.com/questions/3071327/problem-accessing-config-files-within-a-python-egg)
[solution to read as zip](https://stackoverflow.com/questions/3655352/how-to-access-files-inside-a-python-egg-file)
[accessing meta info via get\_distribution](https://stackoverflow.com/questions/177910/accessing-python-eggs-own-metadata)
The task at hand is to read the information about the egg the program is running from.
There are a few ways, as I understand it:
1. hard-code the location of the egg and treat it as a zip archive - this will work, but is not flexible enough, because it will need to be edited and recompiled if the file is moved to another location
2. use `ResourceManager().resource_filename(__name__, filename)` - this seems to be limited by the fact that I cannot access a file that is inside the egg but not inside the package. Notation like "../../EGG-INFO/PKG-INFO" in the filename doesn't work, giving a KeyError. So no good either.
3. use `dist = pkg_resources.get_distribution("dist_name")` and then use the dist object to get information, but I cannot understand from the docs how I should specify my distribution name - it can't find it.
So, I'm looking for the correct solution for using `pkg_resources.get_distribution`, plus it would be nice to finally have a full solution to read any file from inside the egg.
Thanks!
|
2012/10/04
|
[
"https://Stackoverflow.com/questions/12735852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/673423/"
] |
Setuptools/distribute/pkg\_resources is designed to be a sort of transparent overlay to standard Python distutils, which are pretty limited and don't allow a good way of distributing code.
Eggs are just a way of putting together a bunch of Python files, data files, and metadata, somewhat similar to Java JARs - but Python packages can be installed from source even without an egg (a concept which does not exist in the standard distribution).
So there are two scenarios here. Either you're a programmer who is trying to use some file inside a library - in that case, in order to read any file from your distribution, you don't need its full path; you just need an open file object with its content, right? So you should do something like this:
```
from pkg_resources import resource_stream, Requirement
resource_stream(Requirement.parse("restez==0.3.2"), "restez/httpconn.py")
```
That will return an open, readable file object for the file you requested from your package distribution. If it's a zipped egg, it will automatically be extracted.
Please note that you should specify the package name inside (restez), because the distribution name may differ from the package name (e.g. the Twisted distribution uses the twisted package name). Requirements parsing uses this syntax: <http://setuptools.readthedocs.io/en/latest/pkg_resources.html#requirements-parsing>
This should suffice - you shouldn't need to know the path of the egg once you know how to fetch files from inside the egg.
If you really want the full path and you're sure your egg is uncompressed, use resource\_filename instead of resource\_stream.
Otherwise, if you're building a "packaging tool" and need to access the contents of your package, be it an egg or anything else, you'll have to do that yourself by hand, just like pkg\_resources does [(pkg\_resources source)](https://bitbucket.org/tarek/distribute/src/tip/pkg_resources.py?at=default). There's no precise API for "querying an egg's contents" because there's no use case for that. If you're a programmer just using a library, use pkg\_resources as I suggested. If you're building a packaging tool, you should know where to put your hands, and that's it.
|
The [`zipimporter`](http://docs.python.org/library/zipimport.html#zipimporter-objects) used to load a module can be accessed using the [`__loader__`](http://www.python.org/dev/peps/pep-0302/#specification-part-1-the-importer-protocol) attribute on the module, so accessing a file within the egg should be as simple as:
```
__loader__.get_data('path/within/the/egg')
```
| 6,335
|
59,821,535
|
I have a JSON file name.json with below contents,
```
{
"Name": [{
"firstName": "John",
"lastName": "Stark"
}]
}
```
Using Python, how do I add the members of the Stark family to the JSON file using the following list?
`firstNameList=['Sansa','Arya','Brandon']`
Expected Output
```
{
"Name": [{
"firstName": "John",
"lastName": "Stark"
},
{
"firstName": "Sansa",
"lastName": "Stark"
},
{
"firstName": "Arya",
"lastName": "Stark"
},
{
"firstName": "Brandon",
"lastName": "Stark"
}
]
}
```
I tried:
```
firstNameList=['arya', 'sansa','brandon']
import json
with open('/name.json', 'r+') as f:
data = json.load(f)
for item in firstNameList:
Name['firstname']=item
f.seek(0) #reset file position to the beginning.
json.dump(data, f, indent=4)
f.truncate()
```
|
2020/01/20
|
[
"https://Stackoverflow.com/questions/59821535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10522495/"
] |
The following should do the trick:
```
import json

with open('name.json', 'r+') as f:
    data = json.load(f)
    for name in firstNameList:
        data["Name"].append({"firstName": name, "lastName": "Stark"})
    # Write the updated structure back to the file, as in the original attempt
    f.seek(0)
    json.dump(data, f, indent=4)
    f.truncate()
```
|
```
dicts = {
"Name": [{
"firstName": "John",
"lastName": "Stark"
}
]
}
firstNameList = ['Sansa','Arya','Brandon']
for j in firstNameList:
dicts["Name"].append({'firstName': j, 'lastName': 'Stark'})
print(dicts)
```
And you will get
```
{'Name': [{'firstName': 'John', 'lastName': 'Stark'}, {'firstName': 'Sansa', 'lastName': 'Stark'}, {'firstName': 'Arya', 'lastName': 'Stark'}, {'firstName': 'Brandon', 'lastName': 'Stark'}]}
```
| 6,336
|
7,610,001
|
Could you explain to me what the difference is between calling
```
python -m mymod1 mymod2.py args
```
and
```
python mymod1.py mymod2.py args
```
It seems in both cases `mymod1.py` is called and `sys.argv` is
```
['mymod1.py', 'mymod2.py', 'args']
```
So what is the `-m` switch for?
|
2011/09/30
|
[
"https://Stackoverflow.com/questions/7610001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/621944/"
] |
The first line of the `Rationale` section of [PEP 338](http://www.python.org/dev/peps/pep-0338/) says:
>
> Python 2.4 adds the command line switch -m to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as pdb and profile, and the Python 2.4 implementation is fine for this limited purpose.
>
>
>
So you can specify any module in Python's search path this way, not just files in the current directory. You're correct that `python mymod1.py mymod2.py args` has exactly the same effect. The first line of the `Scope of this proposal` section states:
>
> In Python 2.4, a module located using -m is executed just as if its filename had been provided on the command line.
>
>
>
With `-m` more is possible, like working with modules which are part of a package, etc. That's what the rest of PEP 338 is about. Read it for more info.
|
It's worth mentioning that **this only works if the package has a `__main__.py` file.** Otherwise, the package cannot be executed directly.
```
python -m some_package some_arguments
```
The Python interpreter will look for a `__main__.py` file in the package path to execute. It's equivalent to:
```
python path_to_package/__main__.py somearguments
```
It will execute the content after:
```
if __name__ == "__main__":
```
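As a concrete illustration, a minimal package that can be run this way might look like this (names are hypothetical):
```
some_package/
    __init__.py
    __main__.py   # executed by `python -m some_package`
```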
| 6,338
|
11,021,405
|
I am designing a text processing program that will generate a list of keywords from a long itemized text document, and combine entries for words that are similar in meaning. There are metrics out there, however I have a new issue of dealing with words that are not in the dictionary that I am using.
I am currently using nltk and Python, but my issues here are of a much more abstract nature. Given a word that is not in a dictionary, what would be an efficient way of resolving it to a word that is within your dictionary? My only current solution involves running through the words in the dictionary and picking the word with the shortest Levenshtein distance (edit distance) from the inputted word.
Obviously this is a very slow and impractical method, and I don't actually need the absolute best match from within the dictionary, just so long as it is a contained word and it is pretty close. Efficiency is more important for me in the solution, but a basic level of accuracy would also be needed.
Any ideas on how to generally resolve some unknown word to a known one in a dictionary?
|
2012/06/13
|
[
"https://Stackoverflow.com/questions/11021405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1287834/"
] |
Looks like you need a spelling corrector to match words in your dictionary.
The code below works and is taken directly from this blog <http://norvig.com/spell-correct.html>, written by Peter Norvig:
```
import re, collections
def words(text): return re.findall('[a-z]+', text.lower())
def train(features):
model = collections.defaultdict(lambda: 1)
for f in features:
model[f] += 1
return model
NWORDS = train(words(file('big.txt').read()))
alphabet = 'abcdefghijklmnopqrstuvwxyz'
def edits1(word):
splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
deletes = [a + b[1:] for a, b in splits if b]
transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b)>1]
replaces = [a + c + b[1:] for a, b in splits for c in alphabet if b]
inserts = [a + c + b for a, b in splits for c in alphabet]
return set(deletes + transposes + replaces + inserts)
def known_edits2(word):
return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)
def known(words): return set(w for w in words if w in NWORDS)
def correct(word):
candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
return max(candidates, key=NWORDS.get)
```
big.txt is your dictionary containing known words.
|
Your task sounds like it's really just non-word spelling correction, so a relatively straight-forward solution would be to use an existing spell checker like aspell with a custom dictionary.
A quick and dirty approach would be to just use a phonetic mapping like metaphone (which is one of the algorithms used by aspell). For each possible code derived from your dictionary, choose a representative word (e.g., the most frequent word in the group) to suggest as the correction, and pick a default correction for the case where no matches are found. But you'd probably get better results using aspell.
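A rough sketch of that phonetic-mapping idea (this assumes the `jellyfish` library for a metaphone implementation; `dictionary_words` is a placeholder word list):
```
import jellyfish

dictionary_words = ["example", "words", "go", "here"]

# Map each phonetic code to a representative dictionary word
# (here simply the first word seen; picking the most frequent would be better)
phonetic_index = {}
for word in dictionary_words:
    code = jellyfish.metaphone(word)
    phonetic_index.setdefault(code, word)

def resolve(unknown_word, default=None):
    """Return a dictionary word that sounds like unknown_word, or default."""
    return phonetic_index.get(jellyfish.metaphone(unknown_word), default)
```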
If you do want to calculate edit distances, you can do it relatively quickly by storing the dictionary and possible edit operations in tries, see [Brill and Moore (2000)](http://www.aclweb.org/anthology/P/P00/P00-1037.pdf). If you have a decent-sized corpus of spelling errors and their corrections and can implement Brill and Moore's whole approach, you would probably beat aspell by quite a bit, but it sounds like aspell (or any spell checker that lets you create your own dictionary) is sufficient for your task.
| 6,348
|
73,247,351
|
I found a way to query all global addresses in Outlook with Python:
```
import win32com.client
import csv
from datetime import datetime
# Outlook
outApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
outGAL = outApp.Session.GetGlobalAddressList()
entries = outGAL.AddressEntries
# Create a dateID
date_id = (datetime.today()).strftime('%Y%m%d')
# Create empty list to store results
data_set = list()
# Iterate through Outlook address entries
for entry in entries:
if entry.Type == "EX":
user = entry.GetExchangeUser()
if user is not None:
if len(user.FirstName) > 0 and len(user.LastName) > 0:
row = list()
row.append(date_id)
row.append(user.Name)
row.append(user.FirstName)
row.append(user.LastName)
row.append(user.JobTitle)
row.append(user.City)
row.append(user.PrimarySmtpAddress)
try:
row.append(entry.PropertyAccessor.GetProperty("http://schemas.microsoft.com/mapi/proptag/0x3a26001e"))
except:
row.append('None')
# Store the user details in data_set
data_set.append(row)
# Print out the result to a csv with headers
with open(date_id + 'outlookGALresults.csv', 'w', newline='', encoding='utf-8') as csv_file:
headers = ['DateID', 'DisplayName', 'FirstName', 'LastName', 'JobTitle', 'City', 'PrimarySmtp', 'Country']
wr = csv.writer(csv_file, delimiter=',')
wr.writerow(headers)
for line in data_set:
wr.writerow(line)
```
But it queries users one by one, and it's very slow. I only need to query users from the IT department out of 100,000 users. How can I write a filter to avoid querying all users?
|
2022/08/05
|
[
"https://Stackoverflow.com/questions/73247351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9821792/"
] |
Actually, your program works fine except for one problem with removing an element from a list; look:
```
l = [1,2,3]
print(l.remove(1)) # None
print(l) # [2,3]
```
So `list.remove()` modifies the **mutable** list in place and doesn't return a new one, unlike methods on e.g. **immutable** strings:
```
s = "abc"
print(s.upper()) # ABC
print(s) # abc
```
That said, move removing the element from the list onto a separate line:
```
three.remove(smallest_1)
smallest_2 = min(three)
```
As a bonus - I am pretty sure you know it can be done better in many ways, but as a simple code improvement - **why not delete the `max` from `three` instead?**
|
The only problem was the `three.remove(smallest_1)`.
Change this and it becomes:
```py
def sum_two_smallest_numbers(numbers, index=0, two=[]):
if index == 0:
two = [numbers[0], numbers[1]]
index += 2
if index < len(numbers) and index >= 2:
three = two + [numbers[index]]
smallest_1 = min(three)
three.remove(smallest_1)
smallest_2 = min(three)
two = [smallest_1, smallest_2]
return sum_two_smallest_numbers(numbers, index+1, two)
return sum(two)
print(sum_two_smallest_numbers([7, 15, 12, 18, 22]))
```
Output: `19`
---
Some improvements:
```py
def sum_two_smallest_numbers(nums, index=0, two=[]):
if not index: # equivalent to if index == 0
two = nums[:2] # rather than [nums[0], nums[1]]
index = 2
if index < len(nums):
three = two + [nums[index]]
three.remove(max(three)) # Remove max rather than add two min
return sum_two_smallest_numbers(nums, index+1, three)
return sum(two)
```
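For comparison, a non-recursive version of the same idea is just a one-liner:
```py
def sum_two_smallest_numbers(nums):
    # Sort a copy of the list and sum its first two elements
    return sum(sorted(nums)[:2])
```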
| 6,351
|
28,296,476
|
I finished installing pip on Linux; the `pip list` command works. But when I use the `pip install` command, I get the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/basecommand.py", line 232, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/commands/install.py", line 339, in run
requirement_set.prepare_files(finder)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/req/req_set.py", line 333, in prepare_files
upgrade=self.upgrade,
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 305, in find_requirement
page = self._get_page(main_index_url, req)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 783, in _get_page
return HTMLPage.get_page(link, req, session=self.session)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/index.py", line 872, in get_page
"Cache-Control": "max-age=600",
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 473, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/download.py", line 365, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 461, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/cachecontrol/adapter.py", line 43, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/adapters.py", line 370, in send
timeout=timeout
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 518, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 322, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connectionpool.py", line 727, in _validate_conn
conn.connect()
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/connection.py", line 238, in connect
ssl_version=resolved_ssl_version)
File "/usr/local/lib/python2.7/site-packages/pip-6.0.7-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py", line 254, in ssl_wrap_socket
return context.wrap_socket(sock)
File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket
_context=self)
File "/usr/local/lib/python2.7/ssl.py", line 537, in __init__
raise ValueError("check_hostname requires server_hostname")
ValueError: check_hostname requires server_hostname
```
How can I fix this?
|
2015/02/03
|
[
"https://Stackoverflow.com/questions/28296476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1305814/"
] |
[pip 6.1.0](https://pypi.python.org/pypi/pip/6.1.0) has been released, fixing this issue. You can upgrade with:
```
pip --trusted-host pypi.python.org install -U pip
```
to self-upgrade.
---
*Original answer*:
This is caused by a change in Python 2.7.9, which `urllib3` needs to account for. See [issue #543](https://github.com/shazow/urllib3/issues/543) for that project. Your OpenSSL libraries do not support SNI, which means `urllib3` won't pass in the host name to the SSL socket wrapper, but Python 2.7.9 expects the hostname to be passed in anyway for different purposes.
`urllib3` is indirectly used by `requests` (see [`requests` issue 2435](https://github.com/kennethreitz/requests/issues/2435)), which in turn is being used by `pip`.
I've opened a [ticket to track this from `pip`'s perspective](https://github.com/pypa/pip/issues/2395).
The underlying issues have been fixed by the project maintainers, and awaiting a new release. You could install the current development version of `pip` if you are impatient:
```
pip install --trusted-host=github.com -U https://github.com/pypa/pip/archive/develop.zip
```
This'll install pip-6.1.0.dev0, when 6.1.0 is fully released you can upgrade again with `pip install -U pip` to get the final release from PyPI.
|
This is related to urllib3; you can resolve it with urllib3 version 1.25.8.
Download that version of urllib3 manually and install it.
Even though you install this version, pip will still use its own vendored copy, so you have to remove that and replace it.
The installed module is usually under PythonXX/Lib/site-packages:
1. Delete urllib3 in PythonXX/Lib/site-packages/pip/\_vendor
2. Move "PythonXX/Lib/site-packages/urllib3" to "PythonXX/Lib/site-packages/pip/\_vendor".
| 6,352
|
63,019,336
|
I have yet to learn the 'lambda' concept in Python; I tried to look for answers, and every answer includes lambda in it. This is my code - can you please suggest a way to sort it by values?
```html
sorted_dict = {'sir': '113', 'to': '146', 'my': '9', 'jesus': '4', 'saving': '275', 'changing': '72', 'apologize': '285', 'pain': '308', 'sisters': '27', 'forgiving': '36', 'can': '62', 'family': '77', 'sorry': '8', 'is': '360', 'too': '15', 'her': '37', 'wanted': '18', 'being': '44', 'into': '208', 'are': '17', 'just': '97', 'so': '148', 'now': '112', 'be': '19', 'right': '189', 'been': '105', 'no': '56', 'because': '74', 'forgive': '52', 'keep': '88', 'wish': '12', "i'm": '67', 'always': '53', 'ask': '29'}
new_list = list()
for key,value in sorted_dict.items():
new_tup = (key, value)
new_list.append(new_tup)
new_list = sorted(new_list)
```
How do I proceed further?
|
2020/07/21
|
[
"https://Stackoverflow.com/questions/63019336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13967182/"
] |
`lambda` is often used as the key to sort each element of an iterable.
The same step of turning a dictionary into a list of tuples can be done using the dict method `dict.items()`.
I used lambda as the key in sorting, to tell the sorted function that I want to sort based on the value at index 1 of each tuple.
```
sorted_dict = {'sir': '113', 'to': '146', 'my': '9', 'jesus': '4', 'saving': '275', 'changing': '72', 'apologize': '285', 'pain': '308', 'sisters': '27', 'forgiving': '36', 'can': '62', 'family': '77', 'sorry': '8', 'is': '360', 'too': '15', 'her': '37', 'wanted': '18', 'being': '44', 'into': '208', 'are': '17', 'just': '97', 'so': '148', 'now': '112', 'be': '19', 'right': '189', 'been': '105', 'no': '56', 'because': '74', 'forgive': '52', 'keep': '88', 'wish': '12', "i'm": '67', 'always': '53', 'ask': '29'}
new_list = sorted_dict.items()
new_list = sorted(new_list, key=lambda x: int(x[1]))
print(new_list)
```
|
If you are familiar with other programming concepts, you may have heard of what is called an "inline function".
Lambda is the "inline function" equivalent in Python:
it's a function which doesn't have a function name, and is restricted to a single expression.
Now, coming to the problem of sorting, the sort function in Python accepts two things:
1. the list to be sorted
2. a function where you can define how to sort the list
If it's a list of numbers, you don't need the 2nd argument at all.
But if, as in your case, it's a list of tuples, or say a list of dictionaries, you need to tell Python how to sort that list.
That is accomplished with the help of the '`key`' argument of the sort function.
The code below is an illustration of that:
```
In [1]: l1 = [('a',1), ('b', 3), ('c', 2)]
In [2]: def sortHelper(x):
...: return x[1]
...:
In [3]: l1.sort(key=sortHelper)
In [4]: l1
Out[4]: [('a', 1), ('c', 2), ('b', 3)]
In [5]:
```
Now as you see, the `sortHelper` method is just a single line function, which can very well be written with a `lambda` function.
```
lambda x: x[1]
```
So it's common to use lambda functions, but it's not compulsory; you can accomplish the same functionality with normal Python functions as well.
| 6,357
|
3,351,218
|
If a string contains `*SUBJECT123`, how do I determine that the string has `subject` in it in python?
|
2010/07/28
|
[
"https://Stackoverflow.com/questions/3351218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277603/"
] |
```
if "subject" in mystring.lower():
# do something
```
|
If you want to have `subject` match `SUBJECT`, you could use [`re`](http://docs.python.org/library/re.html)
```
import re
if re.search('subject', your_string, re.IGNORECASE)
```
Or you could transform the string to lower case first and simply use:
```
if "subject" in your_string.lower()
```
| 6,358
|
29,276,668
|
I have created a new project in Django. When I run **python manage.py runserver**, I get the message below in the command prompt:
```
/var/www/samplepro/myapp$ python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
March 26, 2015 - 10:52:31
Django version 1.7.7, using settings 'myapp.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
When I test it in a browser, I get a "page can't be displayed" error. Can anyone help me get the Django page to show?
|
2015/03/26
|
[
"https://Stackoverflow.com/questions/29276668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2522688/"
] |
You stated that you are running the server on Ubuntu while you are trying to connect to it from another PC running Windows. In that case, you should replace the `127.0.0.1` with the actual IP address of the Ubuntu server (the one you use for PuTTY).
|
I am also developing in a virtual machine.
Use `ifconfig` to figure out the IP address that has been assigned to your virtual machine.
Then start the server with:
```
python manage.py runserver <IP>:8000
```
Of course you could also use a different port instead of `8000`. Then use that address and port in your host browser to access your server.
Note: At least in my VM, the virtual machine gets a new address from time to time, so check that in case you have trouble starting your server with the same command.
If this does not help, check your hosts file as mentioned in one of the comments.
| 6,364
|
27,595,162
|
I am looking for an algorithm, or a modification of one, for an efficient way to get the sums along every root-to-leaf path of a tree, for example:
```
Z
/ \
/ \
/ \
/ \
X Y
/ \ / \
/ \ / \
A B C D
```
The number of final leaves is four, so we have four final sums, which are, respectively:
[Z+X+A] [Z+X+B] [Z+Y+C] [Z+Y+D]
If someone could guide me in the right direction for getting the sums over all such paths, that would be great.
This will be done in Python with fairly large trees.
|
2014/12/22
|
[
"https://Stackoverflow.com/questions/27595162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3947332/"
] |
You can recurse over the nodes of the tree, keeping the running sum from the root down to the current node. When you reach a leaf node, you return the current sum as a single-element list. In the internal nodes, you concatenate the lists returned from the children.
Sample Code:
```
class Node:
def __init__(self, value, children):
self.value = value
self.children = children
def tree_sums(root, current_sum):
current_sum += root.value
if len(root.children) == 0:
return [current_sum]
subtree_sums = []
for child in root.children:
subtree_sums += tree_sums(child, current_sum)
return subtree_sums
tree = Node(1, [Node(2, []), Node(3, [])])
assert tree_sums(tree, 0) == [3, 4]
```
|
Here is what you are looking for. In this example, trees are stored as dicts with "value" and "children" keys, where "children" maps to a list.
```
def triesum(t):
if not t['children']:
return [t['value']]
return [t['value'] + n for c in t['children'] for n in triesum(c)]
```
Example
```
trie = {'value': 5, 'children': [
{'value': 7, 'children': [
{'value': 8, 'children': []},
{'value': 2, 'children': []}
]},
{'value': 4, 'children': [
{'value': 3, 'children': []},
{'value': 6, 'children': []}
]}
]}
print sorted(triesum(trie)) == sorted([5 + 7 + 8, 5 + 7 + 2, 5 + 4 + 3, 5 + 4 + 6])
# prints True
```
| 6,365
|
24,828,771
|
I am in the process of writing a python script that will (1) obtain a list of y-values for each subplot to plot against a common set of x-values, (2) make each of these subplots a scatter-plot and put it in the appropriate location in the subplot grid, and (3) complete these tasks for different sizes of subplot grids. What I mean by the third statement is this: the test case I'm using results in an array of 64 plots, 8 rows and 8 columns. I would like for the code to be able to handle any size array (roughly between 50 and 80 plots) for various grid dimensions without me having to go back in each time I run the code and say "Okay, here's the number of rows and columns I need."
Right now, I'm using an exec command to obtain the y-values, and that's working fine. I'm able to make each of the subplots and get it to populate the grid, but only if I type everything in by hand (64 times of doing the same thing is just dumb, so I know there's got to be a way to automate this).
Could anyone suggest a way in which this might be accomplished? I cannot provide data or my code, as this is research material and is not mine to release. Please excuse me if this question is very basic or is something that I should be able to determine from existing documentation. I am very new to programming, and could use a little guidance!
|
2014/07/18
|
[
"https://Stackoverflow.com/questions/24828771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3421870/"
] |
A useful function for things like this is `plt.subplots(nrows, ncols)` which will return an array (a numpy object array) of subplots on a regular grid.
As an example:
```
import matplotlib.pyplot as plt
import numpy as np
fig, axes = plt.subplots(nrows=4, ncols=4, sharex=True, sharey=True)
# "axes" is a 2D array of axes objects. You can index it as axes[i,j]
# or iterate over all items with axes.flat
# Plot on all axes
for ax in axes.flat:
x, y = 10 * np.random.random((2, 20))
colors = np.random.random((20, 3))
ax.scatter(x, y, s=80, facecolors=colors, edgecolors='')
ax.set(xticks=np.linspace(0, 10, 6), yticks=np.linspace(0, 10, 6))
# Operate on just the top row of axes:
for ax, label in zip(axes[0, :], ['A', 'B', 'C', 'D']):
ax.set_title(label, size=20)
# Operate on just the first column of axes:
for ax, label in zip(axes[:, 0], ['E', 'F', 'G', 'H']):
ax.set_ylabel(label, size=20)
plt.show()
```

|
If you are going to use Python, you will want to take a stroll through the [Python Tutorial](https://docs.python.org/2.7/tutorial/index.html) to learn the language and the data structures available to you. There is good instructional material online; you might want to consider some of the *computer science 101* courses that use Python - MIT OCW, edX, [How To Think Like A Computer Scientist](http://www.greenteapress.com/thinkpython/html/index.html)...
Without knowing the full details, I imagine it could be something like this, which is a mixture of *real* code and pseudocode:
```
yvalues = list()
while yvalues_exist:
yvalues.append(get_yvalue)
#limit plot to eight columns
columns = 8
quotient, remainder = divmod(len(yvalues), columns)
rows = quotient
if remainder:
rows = rows + 1
for n, yvalue in enumerate(yvalues, start=1):  # subplot indices start at 1
    plt.subplot(rows, columns, n)
    plt.plot(yvalue)
```
list(), append(), while, divmod(), len(), if, for, enumerate() are all Python statements, methods, or built-in functions.
| 6,366
|
43,159,565
|
I've been using the script below to download technical videos for later analysis. The script has worked well for me and retrieves the highest resolution version available for the videos that I have needed.
Now I've come across a [4K YouTube video](https://youtu.be/vzS1Vkpsi5k), and my script only saves an mp4 with 1280x720.
I'd like to know if there is a way to adjust my current script to download higher-resolution versions of this video. I understand there are Python packages that might address this, but right now I would like to stick to this step-by-step method if possible.
[](https://i.stack.imgur.com/0XvsY.png) [](https://i.stack.imgur.com/Ihmly.png)
**above:** info from Quicktime and OSX
```
"""
length: 175 seconds
quality: hd720
type: video/mp4; codecs="avc1.64001F, mp4a.40.2"
Last-Modified: Sun, 21 Aug 2016 10:41:48 GMT
Content-Type: video/mp4
Date: Sat, 01 Apr 2017 16:50:16 GMT
Expires: Sat, 01 Apr 2017 16:50:16 GMT
Cache-Control: private, max-age=21294
Accept-Ranges: bytes
Content-Length: 35933033
Connection: close
Alt-Svc: quic=":443"; ma=2592000
X-Content-Type-Options: nosniff
Server: gvs 1.
"""
import urlparse, urllib2
vid = "vzS1Vkpsi5k"
save_title = "YouTube SpaceX - Booster Number 4 - Thaicom 8 06-06-2016"
url_init = "https://www.youtube.com/get_video_info?video_id=" + vid
resp = urllib2.urlopen(url_init, timeout=10)
data = resp.read()
info = urlparse.parse_qs(data)
title = info['title']
print "length: ", info['length_seconds'][0] + " seconds"
stream_map = info['url_encoded_fmt_stream_map'][0]
vid_info = stream_map.split(",")
mp4_filename = save_title + ".mp4"
for video in vid_info:
item = urlparse.parse_qs(video)
print 'quality: ', item['quality'][0]
print 'type: ', item['type'][0]
url_download = item['url'][0]
resp = urllib2.urlopen(url_download)
print resp.headers
length = int(resp.headers['Content-Length'])
my_file = open(mp4_filename, "w+")
done, i = 0, 0
buff = resp.read(1024)
while buff:
my_file.write(buff)
done += 1024
percent = done * 100.0 / length
buff = resp.read(1024)
if not i%1000:
percent = done * 100.0 / length
print str(percent) + "%"
i += 1
break
```
|
2017/04/01
|
[
"https://Stackoverflow.com/questions/43159565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3904031/"
] |
On Windows you will need to right-click a .py file and press Edit to edit the file using IDLE, since the default action of double-clicking a .py file is executing it with Python in a shell prompt.
To open just IDLE, click on `C:\Python36\Lib\idlelib\idle.bat`.
|
If you're using Windows 10, just type in `idle` where it says: "Type here for search"
| 6,367
|
49,327,630
|
I am new to Python 3 and I'm working on sentiment analysis of tweets. My code begins with a for loop that takes in 50 tweets, which I clean and pre-process. After this (still inside the for loop) I want to save each tweet to a text file (every tweet on a new line).
Here's how the code goes:
```
for loop:
..
print statments
..
if loop:
filename=open("withnouns.txt","a")
sys.stdout = filename
print(new_words)#tokenised tweet that i want to save in txt file
print("\n")
sys.stdout.close()#i close it because i dont want to save print statements OUTSIDE if loop to be saved in txt file
..
..
print statements
```
After running this, it shows the error "I/O operation on closed file" on line 71 (the first print statement after the if block).
My question is: is there any way I can temporarily redirect and then restore `sys.stdout`, so that it is only redirected inside the if block?
|
2018/03/16
|
[
"https://Stackoverflow.com/questions/49327630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9504637/"
] |
I'm not sure if this is exactly what you want to do but you can change this
```
filename=open("withnouns.txt","a")
sys.stdout = filename
print(new_words)
print("\n")
sys.stdout.close()
```
to
```
filename=open("withnouns.txt","a")
filename.write(new_words + "\n")
filename.write("\n\n")
filename.close()
```
alternatively, you can get the original value of `sys.stdout` from `sys.__stdout__`, so your code becomes
```
filename=open("withnouns.txt","a")
sys.stdout = filename
print(new_words)
print("\n")
filename.close()
sys.stdout = sys.__stdout__
```
|
You're confusing two different kinds of writing to file.
`sys.stdout` pipes your output to the console/terminal. This can be written to file but it's very roundabout.
Writing to file is different. In python you should look at the [`csv` module](https://docs.python.org/3/library/csv.html) if you're writing lists of values of all the same length (and maybe even if you're not, it's very easy to use).
Open your file outside the loop. In the loop, write to file line by line. Close the file outside the loop. This is automatically done for you if you use the following "with" syntax:
```
import csv
with open('file.csv', 'w', newline='') as f:  # open for writing; newline='' per the csv docs
writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
for loop:
# tokenize tweet
writer.writerow(tweet)
```
Alternatively, loop through and save tokenized-tweets to a list-of-lists. Then, outside and after the loop, write the entire thing to a file:
```
import csv
tweets = []
for loop:
# tokenize tweet
tweets.append(tweet)
with open('file.csv', 'w', newline='') as f:  # open for writing; newline='' per the csv docs
writer = csv.writer(f, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
writer.writerows(tweets)
```
| 6,376
|
67,293,952
|
In short, I have to scrape Flipkart and store the data in MongoDB.
Firstly, use [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/lp/try2-in) to get yourself a free managed Mongodb server. Test if you are able to connect to it using python's library pymongo.
Secondly, install [Scrapy](https://docs.scrapy.org/en/latest/) and use its documentation to get yourself friendly with scraping using the Scrapy Framework.
Then, go to the following 2 urls
Men's Topwear <https://www.flipkart.com/clothing-and-accessories/topwear/pr?sid=clo%2Cash&otracker=categorytree&p%5B%5D=facets.ideal_for%255B%255D%3DMen>
Women's Footwear <https://www.flipkart.com/womens-footwear/pr?sid=osp,iko&otracker=nmenu_sub_Women_0_Footwear>
Each page has 40 products, and you have to scrape up to 25 pages from each starting URL (approx. 2000 products) and store the data in MongoDB (database: , collection: flipkart). The data should get inserted into MongoDB directly from the Scrapy framework using Scrapy MongoDB pipelines.
Each product you scrape should have the following data:
* `name` [store as string]
* `brand` [store as string]
* `original_price` [store as float]
* `sale_price` [store as float]
* `image_url` [store as string]
* `product_page_url` [store as string]
* `product_category` [store as string] [ it can contain 2 values "women footwear" or "men topwear" ]
But I am able to scrape only the brand, title, sale price, and product URL; the original price comes back as two strings per product and gets mismatched, and I am not able to save the data in MongoDB. Can anyone help me with this?
```
from ..items import FlipkartItem
import json
import scrapy
import re
class FlipkartscrapySpider(scrapy.Spider):
name = 'flipkartscrapy'
def start_requests(self):
urls = ['https://www.flipkart.com/clothing-and-accessories/topwear/pr?sid=clo%2Cash&otracker=categorytree&p%5B%5D=facets.ideal_for%255B%255D%3DMen&page={}',
'https://www.flipkart.com/womens-footwear/pr?sid=osp%2Ciko&otracker=nmenu_sub_Women_0_Footwear&page={}']
for url in urls:
for i in range(1,25):
x = url.format(i)
yield scrapy.Request(url=x, callback=self.parse)
def parse(self, response):
items = FlipkartItem()
name = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "IRpwTa", " " ))]').xpath('text()').getall()
brand = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_2WkVRV", " " ))]').xpath('text()').getall()
original_price = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_3I9_wc", " " ))]').xpath('text()').getall()
sale_price = response.xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "_30jeq3", " " ))]').xpath('text()').getall()
image_url = response.css('._1a8UBa').css('::attr(src)').getall()
product_page_url =response.css('._13oc-S > div').css('::attr(href)').getall()
items['name'] = name
items['brand'] = brand
items['original_price'] = original_price
items['sale_price'] = sale_price
items['image_url'] = image_url
items['product_page_url'] = 'https://www.flipkart.com' + str(product_page_url)
yield items
```
The `original_price` output comes out like this:
```
['₹', '999', '₹', '1,499', '₹', '1,888', '₹', '2,199', '₹', '1,499', '₹', '1,069', '₹', '1,099', '₹', '1,999', '₹', '2,598', '₹', '1,299', '₹', '1,999', '₹', '899', '₹', '1,099', '₹', '1,699', '₹', '1,399', '₹', '999', '₹', '999', '₹', '1,999', '₹', '1,099', '₹', '1,199', '₹', '999', '₹', '999', '₹', '1,999', '₹', '1,287', '₹', '999', '₹', '1,199', '₹', '899', '₹', '999', '₹', '1,849', '₹', '1,499', '₹', '999', '₹', '999', '₹', '899', '₹', '1,999', '₹', '1,849', '₹', '3,499', '₹', '2,397', '₹', '899', '₹', '1,999']
```
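For what it's worth, a minimal sketch of one way to clean such a list (dropping the standalone currency symbols and the thousands separators) would be:
```
# Drop the '₹' entries and convert the remaining strings to floats
original_price = [float(p.replace(',', '')) for p in original_price if p != '₹']
```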
|
2021/04/28
|
[
"https://Stackoverflow.com/questions/67293952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15781486/"
] |
I had the same issue, and here's what I figured out:
You need to include `Store` from redux, and use it as your type definition for your own `store` return value. Short answer:
```
import {combineReducers, Store} from 'redux';
[...]
const store:Store = configureStore([...])
[...]
export default store;
```
Longer answer:
As I understand it, what was happening is that the `Store` type uses `$CombinedState` as part of its definition. When `configureStore()` returns, it inherits the `State` type. However, since `State` is not explicitly used in your code, your code now includes a value that references `$CombinedState`, despite it not existing anywhere in your code either. When you then try to export it out of your module, you're exporting a value with a type that doesn't exist within your module, and so you get an error.
You can import `State` from redux (which will in turn explicitly cause your code to bring in `$CombinedState`), then use it to explicitly define the `store` that gets assigned the return of `configureStore()`. You should then no longer be exporting unknown named types.
You can also be more specific with your `Store` type, as it is a generic:
```
const store:Store<RootState>
```
Although I'm not entirely sure if that would be a circular dependency, since your RootState depends on store.
|
A potential solution would be to extend the RootState type to include $CombinedState as follows:
```
import { $CombinedState } from '@reduxjs/toolkit';
export type RootState = ReturnType<typeof store.getState> & {
readonly [$CombinedState]?: undefined;
};
```
| 6,378
|
50,847,000
|
I have a csv file which has contents like the following:
**stores.csv**
```
Site, Score, Rank
roolee.com,100,125225
piperandscoot.com,29.3,222166
calledtosurf.com,23.8,361542
cladandcloth.com,17.9,208670
neeseesdresses.com,9.6,251016
...
```
Here's my model.
**models.py**
```
class SimilarStore(models.Model):
store = models.ForeignKey(Store)
csv = FileField()
domain = models.CharField(max_length=100, blank=True)
score = models.IntegerField(blank=True)
rank = models.IntegerField(blank=True)
```
So, I wanna upload the `stores.csv` file into `csv` field so that each column data goes into each `domain`, `score`, and `rank`.
I found some resources that create a Python file to parse the data and run it through a command like `python parse.py`. However, I need it to be done by uploading the csv file in the Django admin. Can anyone explain how I can do that?
|
2018/06/13
|
[
"https://Stackoverflow.com/questions/50847000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9393766/"
] |
You can read the csv file through pandas and then convert it to your desired object by using the pandas function `infer_objects()`. I hope the code below helps you in this regard:
```
import pandas as pd
SimilarStore_df = pd.read_csv('./store.csv') #importing csv file as pandas DataFrame
SimilarStore_df.columns = ['Domain', 'Score','Rank']
SimilarStore=SimilarStore_df.infer_objects() #converting DataFrame to object
print(SimilarStore)
print(SimilarStore.columns)
```
Output:
```
Domain Score Rank
0 roolee.com 100.0 125225
1 piperandscoot.com 29.3 222166
2 calledtosurf.com 23.8 361542
3 cladandcloth.com 17.9 208670
4 neeseesdresses.com 9.6 251016
Index(['Domain', 'Score', 'Rank'], dtype='object')
```
|
Just find the table name in your Database also find the corresponding column names with the table and use the to\_sql command in pandas
| 6,379
|
6,906,515
|
Okay, so I'm writing a very simplistic password cracker in Python that brute-forces a password made of alphanumeric characters. Currently this code only supports 1-character passwords and a password file with an md5-hashed password inside. It will eventually include the option to specify your own character limits (how many characters the cracker tries until it fails). Right now I cannot kill this code when I want it to die. I have included a try and except snippet, however it's not working. What did I do wrong?
Code: <http://pastebin.com/MkJGmmDU>
```
import linecache, hashlib
alphaNumeric = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z",1,2,3,4,5,6,7,8,9,0]
class main:
def checker():
try:
while 1:
if hashlib.md5(alphaNumeric[num1]) == passwordHash:
print "Success! Your password is: " + str(alphaNumeric[num1])
break
except KeyboardInterrupt:
print "Keyboard Interrupt."
global num1, passwordHash, fileToCrack, numOfChars
print "What file do you want to crack?"
fileToCrack = raw_input("> ")
print "How many characters do you want to try?"
numOfChars = raw_input("> ")
print "Scanning file..."
passwordHash = linecache.getline(fileToCrack, 1)[0:32]
num1 = 0
checker()
main
```
|
2011/08/02
|
[
"https://Stackoverflow.com/questions/6906515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/856255/"
] |
The way to allow a `KeyboardInterrupt` to end your program is to **do nothing**. They work by depending on nothing catching them in an `except` block; when an exception bubbles all the way out of a program (or thread), it terminates.
What you have done is to trap the `KeyboardInterrupt`s and handle them by printing a message and then continuing.
As for why the program gets stuck, there is nothing that ever causes `num1` to change, so the md5 calculation is the same calculation every time. If you wanted to iterate over the symbols in `alphaNumeric`, then do that: `for symbol in alphaNumeric: # do something with 'symbol'`.
Of course, that will still only consider every possible one-character password. You're going to have to try harder than that... :)
I think you're also confused about the use of classes. Python **does not** require you to wrap everything inside a class. The `main` at the end of your program does nothing useful; your code runs because it is evaluated when the compiler tries to figure out what a `main` class is. This is an abuse of syntax. What you want to do is put this code in a main **function**, and **call** the function (the same way you call `checker` currently).
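For illustration, a minimal sketch of the corrected loop (note it compares against `hexdigest()`, since `hashlib.md5(...)` returns a hash object rather than the hex string stored in your file):

```
import hashlib

ALPHANUMERIC = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def checker(password_hash):
    for symbol in ALPHANUMERIC:
        if hashlib.md5(symbol).hexdigest() == password_hash:
            print "Success! Your password is: " + symbol
            return symbol
    return None  # no one-character password matched
```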
|
This is what worked for me...
```
import sys
try:
....code that hangs....
except KeyboardInterrupt:
print "interupt"
sys.exit()
```
| 6,380
|
28,168,732
|
Here is my JSON string which I need to parse.
```
{
"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1" : {
"@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway",
"id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1",
"uri" : {
"high" : "rtp://239.1.1.2:5006",
"low" : "rtp://239.1.1.1:5006"
},
"owner" : {
"@class" : "com.barco.compose.media.transcode.Acma",
"source" : "dvi1-1-mna-1890322558",
"stream" : "udp://239.1.1.7:5004",
"type" : "video",
"resolution" : [ 1, 1 ],
"framerateDivider" : [ 1, 1 ],
"profile" : [ "baseline" ],
"output" : [ "high", "low" ],
"destination" : [ "", "" ],
"ipvsProfile" : "high",
"ipvsSDP" : [ "" ],
"ipvsTitle" : "DiORStream",
"ipvsTagName" : [ "" ],
"ipvsTagValue" : [ "" ],
"ipvsDescription" : "NMSDesc",
"ipvsHLS" : true
},
"type" : "video"
},
"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2" : {
"@class" : "com.barco.compose.media.transcode.internal.h264.H264Gateway",
"id" : "192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2",
"uri" : {
"high" : "rtp://239.1.1.4:5006",
"low" : "rtp://239.1.1.3:5006"
},
"owner" : {
"@class" : "com.barco.compose.media.transcode.Acma",
"source" : "dvi1-1-mna-1890322558",
"stream" : "udp://239.1.1.7:5004",
"type" : "video",
"resolution" : [ 1, 1 ],
"framerateDivider" : [ 1, 1 ],
"profile" : [ "baseline" ],
"output" : [ "high", "low" ],
"destination" : [ "", "" ],
"ipvsProfile" : "high",
"ipvsSDP" : [ "" ],
"ipvsTitle" : "nikhil",
"ipvsTagName" : [ "" ],
"ipvsTagValue" : [ "" ],
"ipvsDescription" : "nikhilDesc",
"ipvsHLS" : true
},
"type" : "video"
}
}
```
Now I want to get value of "id". I have used [jsonpath-rw library](https://pypi.python.org/pypi/jsonpath-rw) for Python, but that is not working. If I use `*` whole response gets printed. Looks like the whole response is root. I have used different combinations on <http://jsonpath.curiousconcept.com/>, such as `*.id`, `$[0].id`.
|
2015/01/27
|
[
"https://Stackoverflow.com/questions/28168732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4498089/"
] |
Your `JSON` object contains several root (or top) objects indexed with keys such as `"192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1"`. From your question it seems you want to access the `id` field of these elements. For that the simple way is probably using the `json` module - included in the standard library - to load the string and then access the `id` of each element.
```
import json
my_json_string = "..."
my_json_dict = json.loads(my_json_string)
for key, value in my_json_dict.items():
print("Id is {} for item {}".format(value["id"], key))
```
However if you just want the keys of your `JSON` object all you need is
```
import json
my_json_string = "..."
my_json_dict = json.loads(my_json_string)
print(["Got item with id '{}'".format(key) for key in d.keys()])
```
|
The following expression is what you need:
```
$..id
```
Proof:
```
In [12]: vv = json.loads(my_input_string)
In [13]: jsonpath_expr = parse('$..id')
In [14]: [x.value for x in jsonpath_expr.find(vv)]
Out[14]:
[u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream1',
u'192.168.1.2_151b3a32-ce00-114a-e000-0004a50c1c6d_stream2']
```
However, *please be careful* when using the `..` operator, since it does a deep scan. That means if there is any other object inside your JSON with an `id` field, the value of that field will also be added to the result of the query.
| 6,384
|
12,730,524
|
In this sample code I want to use the variables from the function `db_properties` in the function `connect_and_query`. To accomplish that I chose `return`, and using that strategy the code works perfectly. But in this example the db.properties file only has 4 variables. That said, if the properties file had 20+ variables, should I continue using `return`? Or is there a more elegant/cleaner/correct way to do that?
```
import psycopg2
import sys
from ConfigParser import SafeConfigParser
class Main:
def db_properties(self):
        cfgFile = r'c:\test\db.properties'
parser = SafeConfigParser()
parser.read(cfgFile)
dbHost = parser.get('database','db_host')
dbName = parser.get('database','db_name')
dbUser = parser.get('database','db_login')
dbPass = parser.get('database','db_pass')
return dbHost,dbName,dbUser,dbPass
def connect_and_query(self):
try:
con = None
dbHost=self.db_properties()[0]
dbName=self.db_properties()[1]
dbUser=self.db_properties()[2]
dbPass=self.db_properties()[3]
con = None
qry=("select star from galaxy")
con = psycopg2.connect(host=dbHost,database=dbName, user=dbUser,
password=dbPass)
cur = con.cursor()
cur.execute(qry)
data = cur.fetchall()
for result in data:
qryResult = result[0]
print "the test result is : " +qryResult
except psycopg2.DatabaseError, e:
print 'Error %s' % e
sys.exit(1)
finally:
if con:
con.close()
operation=Main()
operation.connect_and_query()
```
Im using python 2.7
Regards
|
2012/10/04
|
[
"https://Stackoverflow.com/questions/12730524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1323826/"
] |
If there are a lot of variables, or if you want to easily change the variables being read, return a dictionary.
```
def db_properties(self, *variables):
    cfgFile = r'c:\test\db.properties'  # raw string so '\t' is not read as a tab
parser = SafeConfigParser()
parser.read(cfgFile)
return {
variable: parser.get('database', variable) for variable in variables
}
def connect_and_query(self):
try:
con = None
config = self.db_properties(
'db_host',
'db_name',
'db_login',
'db_pass',
)
#or you can use:
# variables = ['db_host','db_name','db_login','db_pass','db_whatever','db_whatever2',...]
# config = self.db_properties(*variables)
#now you can use any variable like: config['db_host']
# ---rest of the function here---
```
Edit: I refactored the code so you can specify the variables you want to load in the calling function itself.
|
You certainly don't want to call `db_properties()` 4 times; just call it once and store the result.
It's also almost certainly better to return a dict rather than a tuple, since as it is the caller needs to know what the method returns in order, rather than just having access to the values by their names. As the number of values getting passed around grows, this gets even harder to maintain.
e.g.:
```
class Main:
def db_properties(self):
        cfgFile = r'c:\test\db.properties'  # raw string so '\t' is not read as a tab
parser = SafeConfigParser()
parser.read(cfgFile)
configDict= dict()
configDict['dbHost'] = parser.get('database','db_host')
configDict['dbName'] = parser.get('database','db_name')
configDict['dbUser'] = parser.get('database','db_login')
configDict['dbPass'] = parser.get('database','db_pass')
return configDict
def connect_and_query(self):
try:
con = None
conf = self.db_properties()
con = None
qry=("select star from galaxy")
con = psycopg2.connect(host=conf['dbHost'],database=conf['dbName'],
user=conf['dbUser'],
password=conf['dbPass'])
```
| 6,391
|
405,282
|
I am trying to write a life simulation in python with a variety of animals. It is impossible to name each instance of the classes I am going to use because I have no way of knowing how many there will be.
So, my question:
How can I automatically give a name to an object?
I was thinking of creating a "Herd" class which could be all the animals of that type alive at the same time...
|
2009/01/01
|
[
"https://Stackoverflow.com/questions/405282",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Hm, well you normally just stuff all those instances in a list and then iterate over that list if you want to do something with them. If you want to automatically keep track of each instance created you can also make the adding to the list implicit in the class' constructor or create a factory method that keeps track of the created instances.
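For illustration, a minimal sketch of the implicit-tracking variant (all names here are hypothetical):

```
class Animal(object):
    instances = []  # class-level registry of every animal created

    def __init__(self, name):
        self.name = name
        Animal.instances.append(self)  # track implicitly in the constructor

# create animals without binding each one to its own variable
for i in range(3):
    Animal("animal-%d" % i)

for animal in Animal.instances:
    print animal.name
```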
|
You could make an 'animal' class with a name attribute.
Or
you could programmatically define the class like so:
```
from new import classobj
my_class=classobj('Foo',(object,),{})
```
Found this:
<http://www.gamedev.net/community/forums/topic.asp?topic_id=445037>
| 6,396
|
57,814,535
|
I figured out this is a popular question, but still I couldn't find a solution for that.
I'm trying to run a simple repo [Here](https://github.com/swathikirans/violence-recognition-pytorch) which uses `PyTorch`. Although I just upgraded my Pytorch to the latest CUDA version from pytorch.org (`1.2.0`), it still throws the same error. I'm on Windows 10 and use conda with python 3.7.
```
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
How to fix the problem?
Here is my `conda list`:
```
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py37_0 anaconda
_pytorch_select 1.1.0 cpu anaconda
_tflow_select 2.3.0 mkl anaconda
absl-py 0.7.1 pypi_0 pypi
alabaster 0.7.12 py37_0 anaconda
anaconda 2019.07 py37_0 anaconda
anaconda-client 1.7.2 py37_0 anaconda
anaconda-navigator 1.9.7 py37_0 anaconda
anaconda-project 0.8.3 py_0 anaconda
argparse 1.4.0 pypi_0 pypi
asn1crypto 0.24.0 py37_0 anaconda
astor 0.8.0 pypi_0 pypi
astroid 2.2.5 py37_0 anaconda
astropy 3.2.1 py37he774522_0 anaconda
atomicwrites 1.3.0 py37_1 anaconda
attrs 19.1.0 py37_1 anaconda
babel 2.7.0 py_0 anaconda
backcall 0.1.0 py37_0 anaconda
backports 1.0 py_2 anaconda
backports-csv 1.0.7 pypi_0 pypi
backports-functools-lru-cache 1.5 pypi_0 pypi
backports.functools_lru_cache 1.5 py_2 anaconda
backports.os 0.1.1 py37_0 anaconda
backports.shutil_get_terminal_size 1.0.0 py37_2 anaconda
backports.tempfile 1.0 py_1 anaconda
backports.weakref 1.0.post1 py_1 anaconda
beautifulsoup4 4.7.1 py37_1 anaconda
bitarray 0.9.3 py37he774522_0 anaconda
bkcharts 0.2 py37_0 anaconda
blas 1.0 mkl anaconda
bleach 3.1.0 py37_0 anaconda
blosc 1.16.3 h7bd577a_0 anaconda
bokeh 1.2.0 py37_0 anaconda
boto 2.49.0 py37_0 anaconda
bottleneck 1.2.1 py37h452e1ab_1 anaconda
bzip2 1.0.8 he774522_0 anaconda
ca-certificates 2019.5.15 0 anaconda
certifi 2019.6.16 py37_0 anaconda
cffi 1.12.3 py37h7a1dbc1_0 anaconda
chainer 6.2.0 pypi_0 pypi
chardet 3.0.4 py37_1 anaconda
cheroot 6.5.5 pypi_0 pypi
cherrypy 18.1.2 pypi_0 pypi
click 7.0 py37_0 anaconda
cloudpickle 1.2.1 py_0 anaconda
clyent 1.2.2 py37_1 anaconda
colorama 0.4.1 py37_0 anaconda
comtypes 1.1.7 py37_0 anaconda
conda 4.7.11 py37_0 anaconda
conda-build 3.18.9 py37_3 anaconda
conda-env 2.6.0 1 anaconda
conda-package-handling 1.3.11 py37_0 anaconda
conda-verify 3.4.2 py_1 anaconda
console_shortcut 0.1.1 3 anaconda
constants 0.6.0 pypi_0 pypi
contextlib2 0.5.5 py37_0 anaconda
cpuonly 1.0 0 pytorch
cryptography 2.7 py37h7a1dbc1_0 anaconda
cudatoolkit 10.0.130 0 anaconda
curl 7.65.2 h2a8f88b_0 anaconda
cycler 0.10.0 py37_0 anaconda
cython 0.29.12 py37ha925a31_0 anaconda
cytoolz 0.10.0 py37he774522_0 anaconda
dask 2.1.0 py_0 anaconda
dask-core 2.1.0 py_0 anaconda
decorator 4.4.0 py37_1 anaconda
defusedxml 0.6.0 py_0 anaconda
distributed 2.1.0 py_0 anaconda
docutils 0.14 py37_0 anaconda
entrypoints 0.3 py37_0 anaconda
et_xmlfile 1.0.1 py37_0 anaconda
ez-setup 0.9 pypi_0 pypi
fastcache 1.1.0 py37he774522_0 anaconda
fasttext 0.9.1 pypi_0 pypi
feedparser 5.2.1 pypi_0 pypi
ffmpeg 4.1.3 h6538335_0 conda-forge
filelock 3.0.12 py_0 anaconda
first 2.0.2 pypi_0 pypi
flask 1.1.1 py_0 anaconda
freetype 2.9.1 ha9979f8_1 anaconda
future 0.17.1 py37_0 anaconda
gast 0.2.2 py37_0 anaconda
get 2019.4.13 pypi_0 pypi
get_terminal_size 1.0.0 h38e98db_0 anaconda
gevent 1.4.0 py37he774522_0 anaconda
glob2 0.7 py_0 anaconda
google-pasta 0.1.7 pypi_0 pypi
graphviz 2.38.0 4 anaconda
greenlet 0.4.15 py37hfa6e2cd_0 anaconda
grpcio 1.22.0 pypi_0 pypi
h5py 2.9.0 py37h5e291fa_0 anaconda
hdf5 1.10.4 h7ebc959_0 anaconda
heapdict 1.0.0 py37_2 anaconda
html5lib 1.0.1 py37_0 anaconda
http-client 0.1.22 pypi_0 pypi
hypothesis 4.34.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1 anaconda
icu 58.2 ha66f8fd_1 anaconda
idna 2.8 py37_0 anaconda
imageio 2.4.1 pypi_0 pypi
imageio-ffmpeg 0.3.0 pypi_0 pypi
imagesize 1.1.0 py37_0 anaconda
importlib_metadata 0.17 py37_1 anaconda
imutils 0.5.2 pypi_0 pypi
intel-openmp 2019.0 pypi_0 pypi
ipykernel 5.1.1 py37h39e3cac_0 anaconda
ipython 7.6.1 py37h39e3cac_0 anaconda
ipython_genutils 0.2.0 py37_0 anaconda
ipywidgets 7.5.0 py_0 anaconda
isort 4.3.21 py37_0 anaconda
itsdangerous 1.1.0 py37_0 anaconda
jaraco-functools 2.0 pypi_0 pypi
jdcal 1.4.1 py_0 anaconda
jedi 0.13.3 py37_0 anaconda
jinja2 2.10.1 py37_0 anaconda
joblib 0.13.2 py37_0 anaconda
jpeg 9b hb83a4c4_2 anaconda
json5 0.8.4 py_0 anaconda
jsonschema 3.0.1 py37_0 anaconda
jupyter 1.0.0 py37_7 anaconda
jupyter_client 5.3.1 py_0 anaconda
jupyter_console 6.0.0 py37_0 anaconda
jupyter_core 4.5.0 py_0 anaconda
jupyterlab 1.0.2 py37hf63ae98_0 anaconda
jupyterlab_server 1.0.0 py_0 anaconda
keras 2.2.4 0 anaconda
keras-applications 1.0.8 py_0 anaconda
keras-base 2.2.4 py37_0 anaconda
keras-preprocessing 1.1.0 py_1 anaconda
keyring 18.0.0 py37_0 anaconda
kiwisolver 1.1.0 py37ha925a31_0 anaconda
krb5 1.16.1 hc04afaa_7
lazy-object-proxy 1.4.1 py37he774522_0 anaconda
libarchive 3.3.3 h0643e63_5 anaconda
libcurl 7.65.2 h2a8f88b_0 anaconda
libiconv 1.15 h1df5818_7 anaconda
liblief 0.9.0 ha925a31_2 anaconda
libmklml 2019.0.5 0 anaconda
libpng 1.6.37 h2a8f88b_0 anaconda
libprotobuf 3.8.0 h7bd577a_0 anaconda
libsodium 1.0.16 h9d3ae62_0 anaconda
libssh2 1.8.2 h7a1dbc1_0 anaconda
libtiff 4.0.10 hb898794_2 anaconda
libxml2 2.9.9 h464c3ec_0 anaconda
libxslt 1.1.33 h579f668_0 anaconda
llvmlite 0.29.0 py37ha925a31_0 anaconda
locket 0.2.0 py37_1 anaconda
lxml 4.3.4 py37h1350720_0 anaconda
lz4-c 1.8.1.2 h2fa13f4_0 anaconda
lzo 2.10 h6df0209_2 anaconda
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
make-dataset 1.0 pypi_0 pypi
markdown 3.1.1 py37_0 anaconda
markupsafe 1.1.1 py37he774522_0 anaconda
matplotlib 3.1.0 py37hc8f65d3_0 anaconda
mccabe 0.6.1 py37_1 anaconda
menuinst 1.4.16 py37he774522_0 anaconda
mistune 0.8.4 py37he774522_0 anaconda
mkl 2019.0 pypi_0 pypi
mkl-service 2.0.2 py37he774522_0 anaconda
mkl_fft 1.0.12 py37h14836fe_0 anaconda
mkl_random 1.0.2 py37h343c172_0 anaconda
mock 3.0.5 py37_0 anaconda
more-itertools 7.0.0 py37_0 anaconda
moviepy 1.0.0 pypi_0 pypi
mpmath 1.1.0 py37_0 anaconda
msgpack-python 0.6.1 py37h74a9793_1 anaconda
msys2-conda-epoch 20160418 1
multipledispatch 0.6.0 py37_0 anaconda
mysqlclient 1.4.2.post1 pypi_0 pypi
navigator-updater 0.2.1 py37_0 anaconda
nbconvert 5.5.0 py_0 anaconda
nbformat 4.4.0 py37_0 anaconda
networkx 2.3 py_0 anaconda
ninja 1.9.0 py37h74a9793_0 anaconda
nltk 3.4.4 py37_0 anaconda
nose 1.3.7 py37_2 anaconda
notebook 6.0.0 py37_0 anaconda
numba 0.44.1 py37hf9181ef_0 anaconda
numexpr 2.6.9 py37hdce8814_0 anaconda
numpy 1.16.4 pypi_0 pypi
numpy-base 1.16.4 py37hc3f5095_0 anaconda
numpydoc 0.9.1 py_0 anaconda
olefile 0.46 py37_0 anaconda
opencv-contrib-python 4.1.0.25 pypi_0 pypi
opencv-python 4.1.0.25 pypi_0 pypi
openpyxl 2.6.2 py_0 anaconda
openssl 1.1.1c he774522_1 anaconda
packaging 19.0 py37_0 anaconda
pandas 0.24.2 py37ha925a31_0 anaconda
pandoc 2.2.3.2 0 anaconda
pandocfilters 1.4.2 py37_1 anaconda
parso 0.5.0 py_0 anaconda
partd 1.0.0 py_0 anaconda
path.py 12.0.1 py_0 anaconda
pathlib2 2.3.4 py37_0 anaconda
patsy 0.5.1 py37_0 anaconda
pattern 3.6 pypi_0 pypi
pdfminer-six 20181108 pypi_0 pypi
pep8 1.7.1 py37_0 anaconda
pickleshare 0.7.5 py37_0 anaconda
pillow 6.1.0 py37hdc69c19_0 anaconda
pip 19.1.1 py37_0 anaconda
pkginfo 1.5.0.1 py37_0 anaconda
pluggy 0.12.0 py_0 anaconda
ply 3.11 py37_0 anaconda
portend 2.5 pypi_0 pypi
post 2019.4.13 pypi_0 pypi
powershell_shortcut 0.0.1 2 anaconda
proglog 0.1.9 pypi_0 pypi
prometheus_client 0.7.1 py_0 anaconda
prompt_toolkit 2.0.9 py37_0 anaconda
protobuf 3.7.1 pypi_0 pypi
psutil 5.6.3 py37he774522_0 anaconda
public 2019.4.13 pypi_0 pypi
py 1.8.0 py37_0 anaconda
py-lief 0.9.0 py37ha925a31_2 anaconda
pybind11 2.3.0 pypi_0 pypi
pycodestyle 2.5.0 py37_0 anaconda
pycosat 0.6.3 py37hfa6e2cd_0 anaconda
pycparser 2.19 py37_0 anaconda
pycrypto 2.6.1 py37hfa6e2cd_9 anaconda
pycryptodome 3.8.2 pypi_0 pypi
pycurl 7.43.0.3 py37h7a1dbc1_0 anaconda
pydot 1.4.1 pypi_0 pypi
pyflakes 2.1.1 py37_0 anaconda
pygments 2.4.2 py_0 anaconda
pylint 2.3.1 py37_0 anaconda
pyodbc 4.0.26 py37ha925a31_0 anaconda
pyopenssl 19.0.0 py37_0 anaconda
pyparsing 2.4.0 py_0 anaconda
pyqt 5.9.2 py37h6538335_2 anaconda
pyreadline 2.1 py37_1 anaconda
pyrsistent 0.14.11 py37he774522_0 anaconda
pysocks 1.7.0 py37_0 anaconda
pytables 3.5.2 py37h1da0976_1 anaconda
pytest 5.0.1 py37_0 anaconda
pytest-arraydiff 0.3 py37h39e3cac_0 anaconda
pytest-astropy 0.5.0 py37_0 anaconda
pytest-doctestplus 0.3.0 py37_0 anaconda
pytest-openfiles 0.3.2 py37_0 anaconda
pytest-remotedata 0.3.1 py37_0 anaconda
python 3.7.3 h8c8aaf0_1 anaconda
python-dateutil 2.8.0 py37_0 anaconda
python-docx 0.8.10 pypi_0 pypi
python-graphviz 0.11.1 pypi_0 pypi
python-libarchive-c 2.8 py37_11 anaconda
pytorch 1.2.0 py3.7_cpu_1 [cpuonly] pytorch
pytube 9.5.1 pypi_0 pypi
pytz 2019.1 py_0 anaconda
pywavelets 1.0.3 py37h8c2d366_1 anaconda
pywin32 223 py37hfa6e2cd_1 anaconda
pywinpty 0.5.5 py37_1000 anaconda
pyyaml 5.1.1 py37he774522_0 anaconda
pyzmq 18.0.0 py37ha925a31_0 anaconda
qt 5.9.7 vc14h73c81de_0 [vc14] anaconda
qtawesome 0.5.7 py37_1 anaconda
qtconsole 4.5.1 py_0 anaconda
qtpy 1.8.0 py_0 anaconda
query-string 2019.4.13 pypi_0 pypi
request 2019.4.13 pypi_0 pypi
requests 2.22.0 py37_0 anaconda
rope 0.14.0 py_0 anaconda
ruamel_yaml 0.15.46 py37hfa6e2cd_0 anaconda
scikit-image 0.15.0 py37ha925a31_0 anaconda
scikit-learn 0.21.2 py37h6288b17_0 anaconda
scipy 1.3.0 pypi_0 pypi
scipy-stack 0.0.5 pypi_0 pypi
seaborn 0.9.0 py37_0 anaconda
send2trash 1.5.0 py37_0 anaconda
setuptools 41.1.0 pypi_0 pypi
simplegeneric 0.8.1 py37_2 anaconda
singledispatch 3.4.0.3 py37_0 anaconda
sip 4.19.8 py37h6538335_0 anaconda
six 1.12.0 py37_0 anaconda
snappy 1.1.7 h777316e_3 anaconda
snowballstemmer 1.9.0 py_0 anaconda
sortedcollections 1.1.2 py37_0 anaconda
sortedcontainers 2.1.0 py37_0 anaconda
soupsieve 1.8 py37_0 anaconda
sphinx 2.1.2 py_0 anaconda
sphinxcontrib 1.0 py37_1 anaconda
sphinxcontrib-applehelp 1.0.1 py_0 anaconda
sphinxcontrib-devhelp 1.0.1 py_0 anaconda
sphinxcontrib-htmlhelp 1.0.2 py_0 anaconda
sphinxcontrib-jsmath 1.0.1 py_0 anaconda
sphinxcontrib-qthelp 1.0.2 py_0 anaconda
sphinxcontrib-serializinghtml 1.1.3 py_0 anaconda
sphinxcontrib-websupport 1.1.2 py_0 anaconda
spyder 3.3.6 py37_0 anaconda
spyder-kernels 0.5.1 py37_0 anaconda
sqlalchemy 1.3.5 py37he774522_0 anaconda
sqlite 3.29.0 he774522_0 anaconda
statsmodels 0.10.0 py37h8c2d366_0 anaconda
summa 1.2.0 pypi_0 pypi
sympy 1.4 py37_0 anaconda
tbb 2019.4 h74a9793_0 anaconda
tblib 1.4.0 py_0 anaconda
tempora 1.14.1 pypi_0 pypi
tensorboard 1.14.0 py37he3c9ec2_0 anaconda
tensorboardx 1.8 pypi_0 pypi
tensorflow 1.14.0 mkl_py37h7908ca0_0 anaconda
tensorflow-base 1.14.0 mkl_py37ha978198_0 anaconda
tensorflow-estimator 1.14.0 py_0 anaconda
tensorflow-mkl 1.14.0 h4fcabd2_0 anaconda
termcolor 1.1.0 pypi_0 pypi
terminado 0.8.2 py37_0 anaconda
testpath 0.4.2 py37_0 anaconda
tk 8.6.8 hfa6e2cd_0 anaconda
toolz 0.10.0 py_0 anaconda
torchvision 0.4.0 py37_cpu [cpuonly] pytorch
tornado 6.0.3 py37he774522_0 anaconda
tqdm 4.32.1 py_0 anaconda
traitlets 4.3.2 py37_0 anaconda
typing 3.6.6 pypi_0 pypi
typing-extensions 3.6.6 pypi_0 pypi
unicodecsv 0.14.1 py37_0 anaconda
urllib3 1.24.2 py37_0 anaconda
validators 0.13.0 pypi_0 pypi
vc 14.1 h0510ff6_4 anaconda
vs2015_runtime 14.15.26706 h3a45250_4 anaconda
wcwidth 0.1.7 py37_0 anaconda
webencodings 0.5.1 py37_1 anaconda
werkzeug 0.15.4 py_0 anaconda
wheel 0.33.4 py37_0 anaconda
widgetsnbextension 3.5.0 py37_0 anaconda
win_inet_pton 1.1.0 py37_0 anaconda
win_unicode_console 0.5 py37_0 anaconda
wincertstore 0.2 py37_0 anaconda
winpty 0.4.3 4 anaconda
wrapt 1.11.2 py37he774522_0 anaconda
xlrd 1.2.0 py37_0 anaconda
xlsxwriter 1.1.8 py_0 anaconda
xlwings 0.15.8 py37_0 anaconda
xlwt 1.3.0 py37_0 anaconda
xz 5.2.4 h2fa13f4_4 anaconda
yaml 0.1.7 hc54c509_2 anaconda
youtube-dl 2019.8.2 pypi_0 pypi
zc-lockfile 1.4 pypi_0 pypi
zeromq 4.3.1 h33f27b4_3 anaconda
zict 1.0.0 py_0 anaconda
zipp 0.5.1 py_0 anaconda
zlib 1.2.11 h62dcd97_3 anaconda
zstd 1.3.7 h508b16e_0 anaconda
```
|
2019/09/06
|
[
"https://Stackoverflow.com/questions/57814535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3204706/"
] |
Your `conda list` shows a CPU-only build (`pytorch 1.2.0 py3.7_cpu_1 [cpuonly]`), so reinstall the CUDA build; try this:
```
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```
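Once that completes, a quick sanity check (a minimal sketch; it assumes the reinstall replaced the `cpuonly` build):

```
import torch

print(torch.__version__)          # should no longer be a cpu-only build
print(torch.cuda.is_available())  # True if the CUDA build sees your GPU
```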
|
Uninstalling the packages and reinstalling it with pip instead solved it for me.
1.`conda remove pytorch torchvision torchaudio cudatoolkit`
2.`pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116`
| 6,404
|
10,891,670
|
I'm using the [runwithfriends](http://apps.facebook.com/runwithfriends) example app to learn canvas programming and GAE. I can upload the sample code to GAE without any errors. Here are my config.py and app.yaml files:
### conf.py:
```
# Facebook Application ID and Secret.
FACEBOOK_APP_ID = ''
FACEBOOK_APP_SECRET = ''
# Canvas Page name.
FACEBOOK_CANVAS_NAME = 'blah'
# A random token for use with the Real-time API.
FACEBOOK_REALTIME_VERIFY_TOKEN = 'RANDOM TOKEN'
# The external URL this application is available at where the Real-time API will
# send it's pings.
EXTERNAL_HREF = 'http://blah.appspot.com'
# Facebook User IDs of admins. The poor mans admin system.
ADMIN_USER_IDS = ['']
```
### app.yaml
```
application: blah
version: 1
runtime: python
api_version: 1
handlers:
- url: /(.*\.(html|css|js|gif|jpg|png|ico))
static_files: static/\1
upload: static/.*
expiration: "1d"
- url: .*
script: main.py
- url: /task/.*
script: main.py
login: admin
```
Accessing the demo app on their GAE works just fine. When I take the [exact same code](https://github.com/fbsamples/runwithfriends), except for the changes I need to run under my own GAE account, the app won't work. I can login to the app using my account and the app shows up under my Apps menu. So, OAuth is good. Every time I go to access the main page, I'm always redirected to the iframe showing I use the app (can't show that image, runwithfriends app is over quota as I type this) but won't go to this iframe:

at all.
I've looked at and understand how url routing works:
```
def main():
routes = [
(r'/', RecentRunsHandler),
(r'/user/(.*)', UserRunsHandler),
(r'/run', RunHandler),
(r'/realtime', RealtimeHandler),
(r'/task/refresh-user/(.*)', RefreshUserHandler),
]
application = webapp.WSGIApplication(routes,
debug=os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'))
util.run_wsgi_app(application)
```
All the handlers are there with what looks like correct post/get methods. There are no errors logged in my GAE instance either, such as 404 or 405. When I first use `http://localhost:8080`, I see plenty of 200s and nothing else.
I started out using dev\_appserver.py but had to move development to GAE because of my HTTPS security setting. I disabled HTTPS temporarily but still always get redirected to the apps.facebook.com/ path no matter what, and can't keep all my development within dev\_appserver.py. I don't know if that's related to my issue or not.
Since the demo works (when not over quota), I'm sure the problem is with my own GAE instance or with the configuration within FB to use my GAE; I just can't for the life of me figure it out. I'm using Eclipse with the PyDev and GAE plugins.
### Update
Adding the app's FB configuration and the window that's displayed after I login to the app.
Sandbox:

Redirects:

Running under my GAE, this is the only page that is returned:

|
2012/06/05
|
[
"https://Stackoverflow.com/questions/10891670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/137527/"
] |
Read the [documentation for `__del__`](http://docs.python.org/reference/datamodel.html#object.__del__) and [for the garbage collector](http://docs.python.org/library/gc.html#gc.garbage). `__del__` doesn't do what you probably think it does, nor does `del`. `__del__` is not necessarily called when you do a `del`, and may never be called in case of circular references. All `del` does is decrement the reference count by 1.
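For illustration, a minimal CPython sketch of that distinction:

```
class Foo(object):
    def __del__(self):
        print "__del__ ran"

a = Foo()
b = a         # two references to the same object
del a         # drops the refcount from 2 to 1; __del__ does NOT run here
print "still alive"
b = None      # last reference gone, refcount hits 0, __del__ runs (in CPython)
```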
|
Because the garbage collector has no way of knowing which can safely be deleted first.
| 6,414
|
53,784,485
|
I am using python to collect temperature data but only want to store the last 24 hours of data.
I am currently generating my .csv file with this
```
while True:
tempC = mcp.temperature
tempF = tempC * 9 / 5 + 32
timestamp = datetime.datetime.now().strftime("%y-%m-%d %H:%M ")
f = open("24hr.csv", "a")
f.write(timestamp)
f.write(',{}'.format(tempF))
f.write("\n")
f.close()
```
The .csv this outputs looks like this:
```
18-12-13 10:58 ,44.7125
18-12-13 11:03 ,44.6
18-12-13 11:08 ,44.6
18-12-13 11:13 ,44.4875
18-12-13 11:18 ,44.6
18-12-13 11:23 ,44.4875
18-12-13 11:28 ,44.7125
```
I don't want to roll over, just keep the last 24 hours of data. Since I am sampling data every 5 minutes I should end up with 144 lines in my CSV after 24 hours. so if I use readlines() I can tell how many lines I have but how do I get rid of any lines that are older than 24 hours? This is what I came up with which obviously doesn't work. Suggestions?
```
f = open("24hr.csv","r")
lines = f.readlines()
f.close()
if lines => 144:
f = open("24hr.csv","w")
for line in lines:
if line <= "timestamp"+","+"tempF"+\n":
f.write(line)
f.close()
```
|
2018/12/14
|
[
"https://Stackoverflow.com/questions/53784485",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10792002/"
] |
You've done most of the work already. I've got a couple of suggestions.
1. Use `with`. This will mean that if there's an error mid-way through your program and an exception is raised, the file will be closed properly.
2. Parse the timestamp from the file and compare it with the current time.
3. Use `len` to check the length of a `list`.
Here's the amended program:
```
import datetime
with open("24hr.csv","r") as f:
lines = f.readlines() # read out the contents of the file
if len(lines) >= 144:
yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
with open("24hr.csv","w") as f:
for line in lines:
line_time_string = line.split(",")[0]
line_time = datetime.datetime.strptime(line_time_string, "%y-%m-%d %H:%M ")
if line_time > yesterday: # if the line's time is after yesterday
f.write(line) # write it back into the file
```
This code's not very clean (doesn't conform to PEP-8) but you see the general process.
|
Are you using Linux? If you just need the last 144 lines you can try
```
tail -n 144 file.csv
```
You can find tail for Windows too; I got one with Cmder.
If you have to use Python and you have a small file which fits in RAM, load it with readlines() into a list, cut it (`lst = lst[-144:]` to keep the most recent lines) and rewrite the file. If you are not sure how many lines you have, parse it with <https://docs.python.org/3.7/library/csv.html>, parse the time into a Python datetime (similar to how you originally write the time) and write lines conditionally.
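If you do stay in Python, a minimal sketch of the trim step using `collections.deque`, which keeps only the newest lines for you:

```
from collections import deque

with open("24hr.csv") as f:
    last_lines = deque(f, maxlen=144)  # holds at most the 144 most recent lines

with open("24hr.csv", "w") as f:
    f.writelines(last_lines)
```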
| 6,418
|
47,128,570
|
I have a set of data that looks like the following:
```
index 902.4 909.4 915.3
n 0.6 0.3 1.4
n.1 0.4 0.3 1.3
n.2 0.3 0.2 1.1
n.3 0.2 0.2 1.3
n.4 0.4 0.3 1.4
DCIS 0.3 1.6
DCIS.1 0.3 1.2
DCIS.2 1.1
DCIS.3 0.2 1.2
DCIS.4 0.2 1.3
DCIS.5 0.2 0.1 1.5
br_1 0.5 0.4 1.4
br_1.1 0.2 1.3
br_1.2 0.5 0.2 1.4
br_1.3 0.5 0.2 1.6
br_1.4 1.4
```
with the regular python indexing for column[0]. Below is code that I've written with lots of help from members of Stack Overflow:
```
nh = pd.ExcelFile(file)
df = pd.read_excel(nh)
df = df.set_index('Samples').transpose()
df = df.reset_index()
df_n = df.loc[df['index'].str.startswith('n')]
df_DCIS = df.loc[df['index'].str.startswith('DCIS')]
df_br1234 = df.loc[df['index'].str.startswith('br')]
#plt.tight_layout()
for i in range(1, df.shape[1]):
plt.figure()
df_n.iloc[:, i].hist(histtype='step', color='k', label='N')
df_DCIS.iloc[:, i].hist(histtype='step', color='r', label='DCIS')
df_br1234.iloc[:, i].hist(histtype='step', color='orange', label='IDC')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5), fancybox=True, shadow=True)
plt.title("Histograms for " + df.columns[i], loc='center')
plt.show()
```
This creates multiple figures with a cut-off legend (it was not cut off when the figure was made by PyCharm). However, plt.title gives an error message saying TypeError: must be str, not float. I do understand that the columns of the different dataframes are floating-point numbers, and when I type print(df.columns), it says the dtype is object. Is there a way to convert the float object to str? I tried using
```
plt.title("Histograms for " + df.columns[i].astype('str'))
```
but it said float object has no attribute astype.
|
2017/11/06
|
[
"https://Stackoverflow.com/questions/47128570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8147329/"
] |
You can use this:
```
plt.title("Histograms for " + str(df.columns[i]))
```
|
Try
```
plt.title("Histograms for {0:.2f}".format(df.columns[i]))
```
The characters inside the curly brackets are from the [Format Specification Mini-Language](https://docs.python.org/3/library/string.html#format-specification-mini-language). This example formats a float with 2 decimal places. If you follow the link you'll see lots of other options.
| 6,421
|
62,741,775
|
With my Python script below, I wanted to check if a cron job is defined on my Linux (CentOS 7.5) server, and if it doesn't exist, add one using the python-crontab module. It was working well until I ran `crontab -r` to delete the existing cron jobs; when I re-execute my Python script, it says the cron job exists even after the jobs were removed with `crontab -r`.
```
import os
from crontab import CronTab
cron = CronTab(user="ansible")
job = cron.new(command='echo hello_world')
job.minute.every(1)
basic_command = "* * * * * echo hello_world"
basic_iter = cron.find_command("hello_world")
for item in basic_iter:
if str(item) == basic_command:
print("crontab job already exist", item)
break
else:
job.enable()
cron.write()
print("cronjob does not exist and added successfully.. please see \"crontab -l\" ")
break
```
list of current cron jobs
```
[ansible@node1 ansible]$ crontab -l
no crontab for ansible
```
[user - ansible]
`python code results:`
```
crontab job already exist * * * * * echo hello_world
```
It was working until I removed the cron jobs using `crontab -r`, and now my Python output says the cron job already exists.
Not sure what my mistake was - please help (or if there is a better way to find the local user's cron jobs, please help with that).
|
2020/07/05
|
[
"https://Stackoverflow.com/questions/62741775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11999777/"
] |
The problem is that you have initialized a new cron job before checking whether it exists. You assumed that `cron.find_command()` only identifies enabled cron jobs, but it also identifies cron jobs that have been created but not enabled yet.
So, you have to check if the cronjob exists before creating a new job. Then, if it does not exist, you can create a new cron job and enable it. You can try the below code:
```
import os
from crontab import CronTab
cron = CronTab("ansible")
basic_command = "* * * * * echo hello_world"
basic_iter = cron.find_command("hello_world")
exists = False
for item in basic_iter:
    if str(item) == basic_command:
        print("crontab job already exists", item)
        exists = True
        break
if not exists:
job = cron.new(command='echo hello_world')
job.minute.every(1)
job.enable()
cron.write()
print("cronjob does not exist and added successfully.. please see \"crontab -l\" ")
```
|
Another solution might be to add the items to a list, since the output of the find command is a generator object; putting the items into a list makes it easier to work with. This is what I did to solve the problem you had.
Below is the code, based on everything else already being initialized:
```
list_a = []
basic_iter = cron.find_command("hello world")
for item in basic_iter:
    list_a.append(item)
# From there you can do more, but here are 2 examples
if len(list_a) == 0:
    pass  # create the cron job
else:
    pass  # don't create the cron job
# or you could do a for loop comparing, if you want to iterate
for i in list_a:
    if str(i) == "hello world":
        pass  # don't create the cron job
    else:
        pass  # create the cron job
```
Hope this helps; sorry for the formatting, this is my first time.
| 6,423
|
46,418,897
|
I have a list as shown below which contain some dictionaries.
```
dlist=[
{
"a":1,
"b":[1,2]
},
{
"a":3,
"b":[4,5]
},
{
"a":1,
"b":[1,2,3]
}
]
```
I want the result to be as in this form as shown below
```
dlist=[
{
"a":1,
"b":[1,2,3]
},
{
"a":3,
"b":[4,5]
}
]
```
I can solve this using multiple iterations of loops and comparisons, but is there a pythonic way?
|
2017/09/26
|
[
"https://Stackoverflow.com/questions/46418897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6858122/"
] |
Here is a solution that uses a temporary defaultdict:
```python
from collections import defaultdict
dd = defaultdict(set) # create temporary defaultdict
for d in dlist: dd[d["a"]] |= set(d["b"]) # union set(b) for each a
l = [{"a":k, "b":list(v)} for k,v in dd.items()] # generate result list
```
[Try it online!](https://tio.run/##XY7BCoMwDIbvfYrgqYUyULeL4JOUHtRUVqytaCYM57O76g4Tc0ry5//zDW96Bp9tGzo7UakWBrGSKilS@WvrpFCpzDRbJfur@Um9y8dFvXhlHnXNWDuGHprgnGnIBj@B7YcwEqBpq5cjtA0xRCjPCz4ZEqwNIyBYDwdmAYgKVfykNXxKiCc8jnWiBXPRrpYdopMHwG7gs1hhz@jkfKTgzZLpJy4i1TBaT@C27Qs "Python 2 – Try It Online")
|
If my understanding is right:
```
uniques, theNewList = set(), []
for d in dlist:
cur = d["a"] # Avoid multiple lookups of the same thing
if cur not in uniques:
theNewList.append(d)
uniques.add(cur)
print(theNewList)
```
| 6,424
|
18,305,026
|
I want to monitor a dir; the dir has subdirs, and in the subdirs there are some files with `.md` (maybe there are some other files too, such as \*.swp...).
I only want to monitor the .md files. I have read the doc, and there is only an `ExcludeFilter`; the issue <https://github.com/seb-m/pyinotify/issues/31> says that only dirs can be filtered, not files.
What I do now is filter in the `process_*` functions, checking `event.name` with `fnmatch`.
So if I only want to monitor files with specific suffixes, is there a better way? Thanks.
This is the main code I have written:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pyinotify
import fnmatch
def suffix_filter(fn):
suffixes = ["*.md", "*.markdown"]
for suffix in suffixes:
if fnmatch.fnmatch(fn, suffix):
return False
return True
class EventHandler(pyinotify.ProcessEvent):
def process_IN_CREATE(self, event):
if not suffix_filter(event.name):
print "Creating:", event.pathname
def process_IN_DELETE(self, event):
if not suffix_filter(event.name):
print "Removing:", event.pathname
def process_IN_MODIFY(self, event):
if not suffix_filter(event.name):
print "Modifing:", event.pathname
def process_default(self, event):
print "Default:", event.pathname
```
|
2013/08/19
|
[
"https://Stackoverflow.com/questions/18305026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1276501/"
] |
I think you basically have the right idea, but that it could be implemented more easily.
The `ProcessEvent` class in the **pyinotify** module already has a hook you can use to filter the processing of events. It's specified via an optional `pevent` keyword argument given on the call to the constructor and is saved in the instance's `self.pevent` attribute. The default value is `None`. Its value is used in the class's `__call__()` method as shown in the following snippet from the `pyinotify.py` source file:
```
def __call__(self, event):
stop_chaining = False
if self.pevent is not None:
# By default methods return None so we set as guideline
# that methods asking for stop chaining must explicitly
# return non None or non False values, otherwise the default
# behavior will be to accept chain call to the corresponding
# local method.
stop_chaining = self.pevent(event)
if not stop_chaining:
return _ProcessEvent.__call__(self, event)
```
So you could use it only allow events for files with certain suffixes (aka extensions) with something like this:
```
SUFFIXES = {".md", ".markdown"}
def suffix_filter(event):
# return True to stop processing of event (to "stop chaining")
return os.path.splitext(event.name)[1] not in SUFFIXES
processevent = ProcessEvent(pevent=suffix_filter)
```
|
There's nothing particularly wrong with your solution, but you want your inotify handler to be as fast as possible, so there are a few optimizations you can make.
You should move your match suffixes out of your function, so the compiler only builds them once:
```
EXTS = set([".md", ".markdown"])
```
I made them a set so you can do a more efficient match:
```
def suffix_filter(fn):
ext = os.path.splitext(fn)[1]
if ext in EXTS:
return False
return True
```
I'm only presuming that `os.path.splitext` and a set search are faster than an iterative `fnmatch`, but this may not be true for your really small list of extensions - you should test it.
(Note: I've mirrored your code above where you return False when you make a match, but I'm not convinced that's what you want - it is at the very least not very clear to someone reading your code)
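A quick way to run that test is a sketch with the standard `timeit` module (file name and extensions here are just examples):

```
import timeit

setup = "import fnmatch, os; EXTS = {'.md', '.markdown'}; fn = 'notes.markdown'"
print(timeit.timeit("os.path.splitext(fn)[1] in EXTS", setup=setup))
print(timeit.timeit(
    "fnmatch.fnmatch(fn, '*.md') or fnmatch.fnmatch(fn, '*.markdown')",
    setup=setup))
```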
| 6,425
|
40,613,590
|
I would like to design a function `f(x : float, up : bool)` with these input/output:
```
# 2 decimals part rounded up (up = True)
f(142.452, True) = 142.46
f(142.449, True) = 142.45
# 2 decimals part rounded down (up = False)
f(142.452, False) = 142.45
f(142.449, False) = 142.44
```
Now, I know about Python's `round` built-in function but it will always round `142.449` up, which is not what I want.
Is there a way to do this in a nicer pythonic way than to do a bunch of float comparisons with epsilons (prone to errors)?
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40613590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1974842/"
] |
Have you considered a mathematical approach using `floor` and `ceil`?
If you always want to round to 2 digits, then you could premultiply the number to be rounded by 100, then perform the rounding to the nearest integer and then divide again by 100.
```
from math import floor, ceil
def rounder(num, up=True):
digits = 2
mul = 10**digits
if up:
return ceil(num * mul)/mul
else:
return floor(num*mul)/mul
```
|
`math.ceil()` rounds up, and `math.floor()` rounds down. So, the following is an example of how to use it:
```
import math
def f(x, b):
if b:
return (math.ceil(100*x) / 100)
else:
return (math.floor(100*x) / 100)
```
This function should do exactly what you want.
| 6,427
|
5,104,366
|
Hello users,
I have a basic question concerning inheritance (in python). I have two classes and one of them is inherited from the other like
```
class p:
def __init__(self,name):
self.pname = name
class c(p):
def __init__(self,name):
self.cname = name
```
Is there any possibility that I can create a parent object and several child objects which refer to the SAME parent object? It should work in such a way that the parent object contains several variables and whenever I access the corresponding variables from a child I actually access the variable from the parent. I.e. if I change it for one child it is changed also for all other children and the data are only stored once in memory (and not copied for each child...)
Thank you in advance.
Here is a possible workaround which I do not consider as so nice
```
class P:
def __init__(self, name):
self.pname = name
class C:
def __init__(self, name,pobject):
self.pobject = pobject
self.cname = name
```
Is this really the state of the art or do there exist other concepts?
Sebastian
Thank you all for helping me, also with the name conventions :) But I am still not very satisfied. Maybe I give a more advanced example to stress what I really want to do.
```
class P:
data = "shareddata"
def __init__(self,newdata):
self.data = newdata
def printname(self):
print self.name
class C(P):
def __init__(self,name):
self.name = name
```
Now I can do the following
```
In [33]: c1 = test.C("name1")
In [34]: c2 = test.C("name2")
In [35]: c1.printname()
name1
In [36]: c2.printname()
name2
In [37]: c1.data
Out[37]: 'shareddata'
In [38]: c2.data
Out[38]: 'shareddata'
```
And this is so far exactly what I want. There is a variable name which is different for every child and the parent class accesses the individual variables. Normal inheritance.
Then there is the variable data which comes from the parent class and every child accesses it. However, the following now no longer works:
```
In [39]: c1.data = "tst"
In [40]: c2.data
Out[40]: 'shareddata'
In [41]: c1.data
Out[41]: 'tst'
```
I want the change in c1.data to affect also c2.data since I want the variable to be shared, somehow a global variable of this parent class.
And more than that. I also want to create different instances of P, each having its own data variable. And when I create a new C object I want to specify from which P object data should be inherited, i.e. shared....
UPDATE:
remark on the comment by @eyquem: Thanks for this, it is going in the direction I want. However, now `__class__.pvar` is shared among all objects of the class. What I want is that several instances of P may each have a different pvar. Let's assume P1 has pvar=1 and P2 has pvar=2. Then I want to create children C1a, C1b, C1c which are related to P1, i.e. if I say C1a.pvar it should access pvar from P1. Then I create C2a, C2b, C2c, and if I access e.g. C2b.pvar I want to access pvar from P2. Since the class C inherits pvar from the class P, pvar is known to C. My naive idea is that when I create a new instance of C I should be able to specify which (existing) P object should be used as the parent object, rather than creating a completely new P object as is done when calling `P.__init__` inside the `__init__` of C... It sounds simple to me, maybe I'm forgetting something...
UPDATE:
So I found [this discussion](http://www.velocityreviews.com/forums/t356880-can-you-create-an-instance-of-a-subclass-with-an-existing-instance-of-the-base-class.html) which is pretty much my question
Any suggestions?
UPDATE:
The method `.__class__.__subclasses__` no longer seems to exist.
UPDATE:
Here is another link:
[link to discussion](http://bytes.com/topic/python/answers/834541-using-existing-instance-parent)
There it is solved by copying. But I do not want to copy the parent class since I would like it to exist only once...
UPDATE:
Sorry for leaving the discussion yesterday, I am a bit ill... And thank you for the posts! I will now read through them. I thought about it a bit more and here is a possible solution I found
```
class P(object):
def __init__(self,pvar):
self.pobject = None
self._pvar = pvar
@property
def pvar(self):
if self.pobject != None:
return self.pobject.pvar
else:
return self._pvar
@pvar.setter
def pvar(self,val):
if self.pobject != None:
self.pobject.pvar = val
else:
self._pvar=val
def printname(self):
print self.name
class C(P):
def __init__(self,name,pobject): #<-- The same default `P()` is
# used for all instances of `C`,
# unless pobject is explicitly defined.
P.__init__(self,None)
self.name = name
self.pobject = pobject
p1 = P("1")
p2 = P("2")
c1a = C("c1a",p1)
c1b = C("c1b",p1)
c1c = C("c1c",p1)
c2a = C("c2a",p2)
c2b = C("c2b",p2)
c2c = C("c2c",p2)
print id(c1a.pvar)
print id(c1b.pvar)
print id(c1c.pvar)
print id(c2a.pvar)
print id(c2b.pvar)
print id(c2c.pvar)
print id(p1.pvar)
print id(p2.pvar)
print id(c1a.name)
print id(c1b.name)
print id(c1c.name)
print id(c2a.name)
print id(c2b.name)
print id(c2c.name)
```
It is a bit cumbersome and I hope that there is a simpler way to achieve this. But it has the feature that pvar is only mentioned in the class P and the class C does not know about pvar as it should be according to my understanding of inheritance. Nevertheless when I create a new instance of C I can specify an existing instance of P which will be stored in the variable pobject. When the variable pvar is accessed actually pvar of the P-instance stored in this variable is accessed...
The output is given by
```
3078326816
3078326816
3078326816
3074996544
3074996544
3074996544
3078326816
3074996544
156582944
156583040
156583200
156583232
156583296
156583360
```
I will read now through your last comments,
all the best, Sebastian
UPDATE:
I think the most elegant way would be the following (which DOES NOT work)
```
class P(object):
def __init__(self,pvar):
self.pvar = pvar
def printname(self):
print self.name
class C(P):
def __init__(self,name,pobject):
P = pobject
self.name = name
```
I think python should allow for this...
UPDATE:
Ok, now I found a way to achieve this, thanks to the explanations by eyquem. But since this is really a hack, there should be an official way to do the same...
```
def replaceinstance(parent,child):
for item in parent.__dict__.items():
child.__dict__.__setitem__(item[0],item[1])
print item
class P(object):
def __init__(self,pvar):
self.pvar = pvar
def printname(self):
print self.name
class C(P):
def __init__(self,name,pobject):
P.__init__(self,None)
replaceinstance(pobject,self)
self.name = name
p1 = P("1")
p2 = P("2")
c1a = C("c1a",p1)
c1b = C("c1b",p1)
c1c = C("c1c",p1)
c2a = C("c2a",p2)
c2b = C("c2b",p2)
c2c = C("c2c",p2)
print id(c1a.pvar)
print id(c1b.pvar)
print id(c1c.pvar)
print id(c2a.pvar)
print id(c2b.pvar)
print id(c2c.pvar)
print id(p1.pvar)
print id(p2.pvar)
print id(c1a.name)
print id(c1b.name)
print id(c1c.name)
print id(c2a.name)
print id(c2b.name)
print id(c2c.name)
```
the output is the same as above
```
3077745184
3077745184
3077745184
3074414912
3074414912
3074414912
3077745184
3074414912
144028416
144028448
144028480
144028512
144028544
144028576
```
UPDATE: Even though the id's seem to be right, the last code does not work, as is clear from this test
```
c1a.pvar = "newpvar1"
print c1a.pvar
print c1b.pvar
print c1c.pvar
print c2a.pvar
print c2b.pvar
print c2c.pvar
print p1.pvar
print p2.pvar
```
it has the output
```
newpvar1
1
1
2
2
2
1
2
```
However the version I posted first works:
```
class P(object):
def __init__(self,pvar):
self.pobject = None
self._pvar = pvar
@property
def pvar(self):
if self.pobject != None:
return self.pobject.pvar
else:
return self._pvar
@pvar.setter
def pvar(self,val):
if self.pobject != None:
self.pobject.pvar = val
else:
self._pvar=val
def printname(self):
print self.name
class C(P):
def __init__(self,name,pobject): #<-- The same default `P()` is
# used for all instances of `C`,
# unless pobject is explicitly defined.
P.__init__(self,None)
self.name = name
self.pobject = pobject
p1 = P("1")
p2 = P("2")
c1a = C("c1a",p1)
c1b = C("c1b",p1)
c1c = C("c1c",p1)
c2a = C("c2a",p2)
c2b = C("c2b",p2)
c2c = C("c2c",p2)
print id(c1a.pvar)
print id(c1b.pvar)
print id(c1c.pvar)
print id(c2a.pvar)
print id(c2b.pvar)
print id(c2c.pvar)
print id(p1.pvar)
print id(p2.pvar)
print id(c1a.name)
print id(c1b.name)
print id(c1c.name)
print id(c2a.name)
print id(c2b.name)
print id(c2c.name)
print "testing\n"
c1a.printname()
c1b.printname()
c1c.printname()
c2a.printname()
c2b.printname()
c2c.printname()
print "\n"
c1a.name = "c1anewname"
c2b.name = "c2bnewname"
c1a.printname()
c1b.printname()
c1c.printname()
c2a.printname()
c2b.printname()
c2c.printname()
print "pvar\n"
print c1a.pvar
print c1b.pvar
print c1c.pvar
print c2a.pvar
print c2b.pvar
print c2c.pvar
print p1.pvar
print p2.pvar
print "\n"
c1a.pvar = "newpvar1"
print c1a.pvar
print c1b.pvar
print c1c.pvar
print c2a.pvar
print c2b.pvar
print c2c.pvar
print p1.pvar
print p2.pvar
print "\n"
c2c.pvar = "newpvar2"
print c1a.pvar
print c1b.pvar
print c1c.pvar
print c2a.pvar
print c2b.pvar
print c2c.pvar
print p1.pvar
print p2.pvar
```
with the output
```
3077745184
3077745184
3077745184
3074414912
3074414912
3074414912
3077745184
3074414912
144028416
144028448
144028480
144028512
144028544
144028576
testing
c1a
c1b
c1c
c2a
c2b
c2c
c1anewname
c1b
c1c
c2a
c2bnewname
c2c
pvar
1
1
1
2
2
2
1
2
newpvar1
newpvar1
newpvar1
2
2
2
newpvar1
2
newpvar1
newpvar1
newpvar1
newpvar2
newpvar2
newpvar2
newpvar1
newpvar2
```
Does anybody know why it is like that? I probably do not understand the internal way python works with this `__dict__` so well...
|
2011/02/24
|
[
"https://Stackoverflow.com/questions/5104366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/632263/"
] |
>
> It should work in such a way that the parent object contains several variables and whenever I access the corresponding variables from a child I actually access the variable from the parent. I.e. if I change it for one child it is changed also for all other children and the data are only stored once in memory (and not copied for each child...)
>
>
>
That's not inheritance.
That's a completely different concept.
Your "shared variables" are simply objects that can be mutated and have references in other objects. Nothing interesting.
Inheritance is completely different from this.
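For illustration, a minimal sketch of that idea: one mutable object shared by reference (the names are hypothetical):

```
class Parent(object):
    def __init__(self):
        self.data = {"pvar": 1}    # one mutable object...

class Child(object):
    def __init__(self, name, parent):
        self.name = name
        self.data = parent.data    # ...referenced, not copied

p = Parent()
c1 = Child("c1", p)
c2 = Child("c2", p)
c1.data["pvar"] = 99
print c2.data["pvar"]   # 99: every child sees the change
print p.data["pvar"]    # 99: the data is stored once, on the parent
```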
|
I am lost in all these diverse answers.
But I think that what you need is expressed in the following code:
```
class P:
pvar=1 # <--- class attribute
def __init__(self,name):
self.cname = name
class C(P):
def __init__(self,name):
self.cname = name
c1=C('1')
c2=C('2')
print
print "C.pvar ==",C.pvar,' id(C.pvar) ==',id(C.pvar)
print "c1.pvar==",c1.pvar,' id(c1.pvar)==',id(c1.pvar)
print "c2.pvar==",c2.pvar,' id(c2.pvar)==',id(c2.pvar)
print
C.pvar = [1,2]
print "instruction C.pvar = [1,2] executed"
print "C.pvar ==",C.pvar,' id(C.pvar) ==',id(C.pvar)
print "c1.pvar==",c1.pvar,' id(c1.pvar)==',id(c1.pvar)
print "c2.pvar==",c2.pvar,' id(c2.pvar)==',id(c2.pvar)
print
c2.__class__.pvar = 'sun'
print "instruction c2.__class__.pvar = 'sun' executed"
print "C.pvar ==",C.pvar,' id(C.pvar) ==',id(C.pvar)
print "c1.pvar==",c1.pvar,' id(c1.pvar)==',id(c1.pvar)
print "c2.pvar==",c2.pvar,' id(c2.pvar)==',id(c2.pvar)
print
c2.pvar = 145
print "instruction c2.pvar = 145 executed"
print "C.pvar ==",C.pvar,' id(C.pvar) ==',id(C.pvar)
print "c1.pvar==",c1.pvar,' id(c1.pvar)==',id(c1.pvar)
print "c2.pvar==",c2.pvar,' id(c2.pvar)==',id(c2.pvar)
```
result
```
C.pvar == 1 id(C.pvar) == 10021768
c1.pvar== 1 id(c1.pvar)== 10021768
c2.pvar== 1 id(c2.pvar)== 10021768
instruction C.pvar = [1,2] executed
C.pvar == [1, 2] id(C.pvar) == 18729640
c1.pvar== [1, 2] id(c1.pvar)== 18729640
c2.pvar== [1, 2] id(c2.pvar)== 18729640
instruction c2.__class__.pvar = 'sun' executed
C.pvar == sun id(C.pvar) == 18579136
c1.pvar== sun id(c1.pvar)== 18579136
c2.pvar== sun id(c2.pvar)== 18579136
instruction c2.pvar = 145 executed
C.pvar == sun id(C.pvar) == 18579136
c1.pvar== sun id(c1.pvar)== 18579136
c2.pvar== 145 id(c2.pvar)== 10022024
```
I mean that what you must know is this: to change the class attribute **pvar** through an instruction that directly involves the name of an instance (and not through one involving only the parent class's name), while it continues to be shared by all of **P**'s instances, you must write
```
c2.__class__.pvar = something
```
and not
```
c2.pvar =something
```
Note that C is a class effectively inheriting from a parent class P
| 6,430
|
72,509,585
|
I have information about places and purchases in a table, and I need to find the names of all the places where, for every client who purchased at that place, the place accounts for at least 70% of that client's purchases.
I've already found the answer in Python: I summed the total purchases per client, then the purchases per client and place, and created a new column with the percentage.
So I got something like this:
| client\_id | place\_name | total purchase | detail purchase | percent |
| --- | --- | --- | --- | --- |
| 1 | place1 | 10 | 7 | 0.7 |
| 1 | place2 | 10 | 3 | 0.3 |
| 2 | place1 | 5 | 4 | 0.8 |
| 2 | place3 | 5 | 1 | 0.2 |
So, my answer should be place1, since for every client purchasing in that place the percentage is >= 70%.
I've developed this python code to solve it:
```
places = []
for i in c["place_name"].unique():
    if (c[c["place_name"] == i]["percent"] >= 0.7).all():
        places.append(i)
```
but now I need to do it in SQL, and I'm not sure if there's a way to get behavior similar to the `all()` function in SQL.
I've been trying this:
```
SELECT place_name
FROM c
GROUP BY place_name
HAVING total_purchase/detail_purchase >=0.7
```
But it doesn't work :c
Any help?
|
2022/06/05
|
[
"https://Stackoverflow.com/questions/72509585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9535697/"
] |
Schema and insert statements:
```
create table c(client_id int, place_name varchar(50), total_purchase int, detail_purchase int);
insert into c values(1 ,'place1', 10, 7);
insert into c values(1 ,'place2', 10, 3);
insert into c values(2 ,'place1', 5, 4);
insert into c values(2 ,'place3', 5, 1);
```
Query:
```
with cte as
(
select client_id,place_name,total_purchase,detail_purchase,detail_purchase*1.0/total_purchase percent,
count( client_id)over (partition by place_name) total_client
from c a
)
select place_name
from cte where percent>=0.7
group by place_name
having count(client_id)=max(total_client)
```
Output:
| place\_name |
| --- |
| place1 |
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=e458704de14ad6ba8de2ebeae631a7f0)*
|
If I understand your question correctly, you could just use a where statement
```
SELECT place_name
FROM purchases
where (detail_purchase/total_purchase) >=0.7
GROUP BY place_name
```
[db fiddle](https://www.db-fiddle.com/f/w4EVh4WrdPJWJBKzqJ8RQb/1)
| 6,440
|
39,448,135
|
I have an application generating a weird config file
```
app_id1 {
key1 = val
key2 = val
...
}
app_id2 {
key1 = val
key2 = val
...
}
...
```
And I am struggling with how to parse this in Python. The keys of each app may vary too.
I can't change the application to generate the configuration file in some easily parsable format :)
Any suggestions on how to do this pythonically?
I am thinking along the lines of dict of dict
```
conf = {'app_id1': {'key1' : 'val', 'key2' : 'val'},
'app_id2' : {'key1' : 'val', 'key2' : 'val'}
}
```
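A minimal regex-based sketch of the sort of thing I'm imagining (it assumes blocks are never nested):

```
import re

def parse_conf(text):
    conf = {}
    # match "name { ... }" blocks; [^}]* assumes no nested braces
    for name, body in re.findall(r'(\w+)\s*\{([^}]*)\}', text):
        conf[name] = dict(
            (k.strip(), v.strip())
            for k, v in (line.split('=', 1)
                         for line in body.splitlines() if '=' in line)
        )
    return conf
```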
|
2016/09/12
|
[
"https://Stackoverflow.com/questions/39448135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3821298/"
] |
Based on the fact you said it had a button where you can view source, this sounds like a WYSIWYG (What You See Is What You Get) editor like CKeditor, TinyMCE, Froala, etc. They take standard HTML textarea elements and use Javascript and CSS to convert them into more robust editors. They allow you to do simple text formatting in the textarea, upload images, view source, etc.
They are used a lot in blogs and for content editing for people that don't write code but want to be able to manage and maintain content in web sites. For instance if you type a "paragraph" of text in one of these it will automatically wrap it with the appropriate `<p>` tags using Javascript.
In your case you're adding content in this box, and it's simply applying the formatting to it with Javascript. It will do the same if you just type in the box, vs. copy/paste.
Here are some links to WYSIWYG editors so you can learn more about how they function:
<http://ckeditor.com/>
<https://www.tinymce.com/>
<https://www.froala.com/wysiwyg-editor>
Fun Fact: The editor you used when you typed your question on Stack Overflow uses one of these. <https://meta.stackexchange.com/questions/121981/stackoverflow-official-wmd-editor>
|
It's not much information, so I'll take a guess:
For `<strong><em>`: The website could possibly use a div with the `contenteditable="true"` attribute ([more info on mdn](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/contenteditable)) as the input method. When you then paste in text from another application that already has markup like bold or italic, it's converted to html tags.
The `<span lang="en-gb">` could come from the browser, another application or the website through analyzing the text and adding this.
| 6,442
|