| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29-22k chars) | response_k (string, 26-13.4k chars) | __index_level_0__ (int64, 0-17.8k) |
|---|---|---|---|---|---|---|
72,565,884
|
I have the following code that handles sockets in Python:
```py
import socket
HOST = "127.0.0.1"
PORT = 4999
TIMEOUT = 1
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    conn, addr = s.accept()
    conn.settimeout(TIMEOUT)
    with conn:
        print(f"Connected by {addr}")
        while True:
            received = conn.recv(1024)
            print(received)
```
I open a terminal and run this Python script using `python3 server.py`. It works, but when the first connection is closed, my script ends. I would like to be able to handle multiple connections: when the first connection closes, I would like to wait for the second, third connection, etc.
I tried to do this with `threading`. See below:
```py
import socket
import threading
HOST = "127.0.0.1"
PORT = 4999
TIMEOUT = 1
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    conn, addr = s.accept()
    conn.settimeout(TIMEOUT)
    threading.Lock()
    with conn:
        print(f"Connected by {addr}")
        while True:
            received = conn.recv(1024)
            print(received)
```
but the situation is the same. What am I doing wrong?
|
2022/06/09
|
[
"https://Stackoverflow.com/questions/72565884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14667788/"
] |
You don't need threads to handle serial connections, and your example "with threads" didn't use threads at all. It just created (and didn't save) a `threading.Lock` object. Use a `while` loop to accept the next connection:
```py
import socket
HOST = ''
PORT = 4999
with socket.socket() as s:
s.bind((HOST, PORT))
s.listen()
while True:
conn, addr = s.accept()
with conn:
print(f"Connected by {addr}")
while True:
received = conn.recv(1024)
if not received: break # connection was closed
print(received)
print(f"Disconnected by {addr}")
```
Manual 2-client demo:
```py
>>> from socket import *
>>> s=socket()
>>> s.connect(('localhost',4999))
>>> s.sendall(b'hello')
>>> s.close()
>>> s=socket()
>>> s.connect(('localhost',4999))
>>> s.sendall(b'goodbye')
>>> s.close()
```
Server output:
```none
Connected by ('127.0.0.1', 19639)
b'hello'
Disconnected by ('127.0.0.1', 19639)
Connected by ('127.0.0.1', 19640)
b'goodbye'
Disconnected by ('127.0.0.1', 19640)
```
If you need parallel connections, start a thread for each connection:
```py
import socket
import threading
HOST = '' # typically listen on any interface on the server. "127.0.0.1" will only accept connections on the *same* computer.
PORT = 4999
def handler(conn,addr):
with conn:
print(f"Connected by {addr}")
while True:
received = conn.recv(1024)
if not received: break # connection was closed
print(f'{addr}: {received}')
print(f"Disconnected by {addr}")
with socket.socket() as s:
s.bind((HOST, PORT))
s.listen()
while True:
conn, addr = s.accept()
threading.Thread(target=handler, args=(conn,addr)).start()
```
Manual demo of 2 clients connecting at the same time:
```py
>>> from socket import *
>>> s=socket()
>>> s.connect(('localhost',4999))
>>> s2=socket()
>>> s2.connect(('localhost',4999))
>>> s.sendall(b'hello')
>>> s2.sendall(b'goodbye')
>>> s.close()
>>> s2.close()
```
Server output:
```none
Connected by ('127.0.0.1', 19650)
Connected by ('127.0.0.1', 19651)
('127.0.0.1', 19650): b'hello'
('127.0.0.1', 19651): b'goodbye'
Disconnected by ('127.0.0.1', 19650)
Disconnected by ('127.0.0.1', 19651)
```
|
Something like this:
The main thread receives data and appends it to a list.
A worker thread monitors the list and does whatever processing is needed.
```
import socket
import threading

# These are methods of a server class (attributes host, port,
# packet_size and recv_data are assumed to be set in __init__).
def receive_tcp_packets(self):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server_socket:
        server_socket.bind((self.host, self.port))
        server_socket.listen()
        # Wait for connection
        server_conn, address = server_socket.accept()
        with server_conn:
            print("Connected by {}".format(str(address)))
            # Wait to receive data
            while True:
                data = server_conn.recv(self.packet_size)
                if not data:
                    break
                # Just keep appending the data
                self.recv_data.append(data)

def run_server(self):
    while True:
        self.receive_tcp_packets()

# Run the function that processes incoming data
# (check_for_data and its arguments xxx are placeholders)
thread = threading.Thread(target=check_for_data, args=(xxx,))
thread.start()
# Run the main function that waits for incoming data
server.run_server()  # `server` is an instance of the class above
```
| 9,990
|
65,047,449
|
I am trying to fetch data from a MySQL database using the Python connector. I want to fetch records matching the `ID`, which has an integer data type. This is the code:
```py
custID = int(input("Customer ID: "))
executeStr = "SELECT * FROM testrec WHERE ID=%d"
cursor.execute(executeStr, custID)
custData = cursor.fetchall()
if bool(custData):
pass
else:
print("Wrong Id")
```
But the code is raising an error which says:
```
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
for the right syntax to use near '%d' at line 1
```
Any ideas why the integer placeholder `%d` is producing this?
|
2020/11/28
|
[
"https://Stackoverflow.com/questions/65047449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14631135/"
] |
The string only accepts `%s` or `%(name)s`, not `%d`.
```
executeStr = "SELECT * FROM testrec WHERE ID=%s"
```
The variables are supposed to be passed in a tuple, although that varies by implementation; depending on which version you are using, you might need to use:
```
cursor.execute(executeStr, (custID,))
```
More information here <https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html>
The [current version](https://github.com/mysql/mysql-connector-python/tree/403535dbdd45d894c780da23ba96b1a6c81e3866) supports variables of types: `int`,`long`,`float`,`string`,`unicode`,`bytes`,`bytearray`,`bool`,`none`,`datetime.datetime`,`datetime.date`,`datetime.time`,`time.struct_time`,`datetime.timedelta`, and `decimal.Decimal`
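Putting both fixes together, a minimal corrected version of the question's snippet (assuming `cursor` comes from an open `mysql.connector` connection):
```
custID = int(input("Customer ID: "))
executeStr = "SELECT * FROM testrec WHERE ID=%s"
cursor.execute(executeStr, (custID,))  # parameters are passed as a tuple
custData = cursor.fetchall()
if not custData:
    print("Wrong Id")
```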
|
You should try `%s`, not `%d`. Your MySQL column type may be `int`, but the only placeholder the connector's cursor accepts is `%s`; it converts the parameter for you.
Try this:
```
custID = int(input("Customer ID: "))
executeStr = 'SELECT * FROM testrec WHERE ID=%s'
cursor.execute(executeStr, (custID,))
custData = cursor.fetchall()
if bool(custData):
pass
else:
print("Wrong Id")
```
| 9,991
|
50,807,953
|
I am trying to write a code that finds repeated characters in a word using python 3.x.
For instance,
```
"abcde" -> 0 # no characters repeats more than once
"aabbcde" -> 2 # 'a' and 'b'
"aabBcde" -> 2 # 'a' occurs twice and 'b' twice (bandB)
"indivisibility" -> 1 # 'i' occurs six times
```
Here is the code I have so far,
```
def duplicate_count(text):
my_list=list(text)
final=[]
for i in my_list:
if i in final:
return(len(final[i]))
else:
return(0)
```
I want to grow in my python skills so I want to understand what's going wrong with it and why.
Any ideas?
|
2018/06/12
|
[
"https://Stackoverflow.com/questions/50807953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7852656/"
] |
I don't understand what your code is supposed to do: you never put anything in `final` and you `return` immediately.
The below solution keeps two `set`s as it iterates over the input. One is every character we've seen, the other is every character that occurs more than once. We use `set`s because they have very fast membership checking, and because when you try to add an item to a `set` that already contains that item, nothing happens:
```
def duplicate_count(text):
seen = set()
more_than_one = set()
for letter in text:
if letter in seen:
more_than_one.add(letter)
else:
seen.add(letter)
return len(more_than_one)
```
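A quick check against two of the examples from the question:
```
>>> duplicate_count("abcde")
0
>>> duplicate_count("indivisibility")
1
```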
|
> I want to grow in my python skills so I want to understand what's going wrong with it and why.
>
> Any ideas?
Well, for one thing, since you never actually put anything in `final`, you will never have a case for which `i` is in `final`...
Here's a function that will do what you want:
```
def do_stuff(test_str):
    my_dict = {}
    count = 0
    # Tally how many times each character occurs
    for i in test_str:
        if i in my_dict:
            my_dict[i] += 1
        else:
            my_dict[i] = 1
    # Count the characters that occur more than once
    for k in my_dict:
        if my_dict[k] > 1:
            count += 1
    return count
```
This function can be improved (by you).
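For example, a shorter version of the same idea, using `collections.Counter` (one possible improvement, not the only one):
```
from collections import Counter

def duplicate_count(text):
    # Count every character, then count how many appear more than once
    return sum(1 for n in Counter(text).values() if n > 1)
```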
| 9,993
|
50,721,735
|
I have both versions of Python. I have also installed Jupyter Notebook separately, and when I open Jupyter Notebook and go to the New section, it only shows Python 2.
I want to use Python 3 for newer packages. How can I upgrade the Python version?
|
2018/06/06
|
[
"https://Stackoverflow.com/questions/50721735",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8722059/"
] |
Creating an instance that uses a service account requires you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the [iam.serviceAccountUser](https://cloud.google.com/compute/docs/access/iam#the_serviceaccountuser_role) role to your service account (either on the entire project or on the specific service account you want to be able to create instances with).
|
### Find out who you are first
* if you are using Web UI: what **email** address did you use to login?
* if you are using local `gcloud` or `terraform`: find the json file that contains your credentials for gcloud (often named similarly to `myproject*.json`) and see if it contains the **email**: `grep client_email myproject*.json`
### GCP IAM change
1. Go to <https://console.cloud.google.com>
2. Go to IAM
3. Find your **email** address
4. Member -> Edit -> Add Another Role -> type in the role name `Service Account User` -> Add
(You can narrow it down with a Condition, but let's keep it simple for a while.)
| 9,994
|
22,883,337
|
Hey all, I am new to Python programming and I have noticed some code which is really confusing me.
```
import collections
s = 'mississippi'
d = collections.defaultdict(int)
for k in s:
    d[k] += 1
d.items()
```
The thing I need to know is the use of `d[k]` here. I know `k` is each character in the string `s`, but I didn't understand what `d[k]` returns. With `defaultdict(int)`, a new value is created if the dictionary has no value for the key.
Please help; any help would be appreciated. Thanks.
|
2014/04/05
|
[
"https://Stackoverflow.com/questions/22883337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3501595/"
] |
Dictionaries in Python are "mapping" types. (This applies to both regular `dict` dictionaries and the more specialized variations like `defaultdict`.) A mapping takes a key and "maps" it to a value. The syntax `d[k]` is used to look up the key `k` in the dictionary `d`. Depending on where it appears in your code, it can have slightly different semantics (either returning the existing value for the key or setting a new one).
In your example, you're using `d[k] += 1`, which increments the value under key `k` in the dictionary. Since integers are immutable, it actually breaks out into `d[k] = d[k] + 1`. The right side `d[k]` does a look up of the value in the dictionary. Then it adds one and, using the `d[k]` on the left side, assigns the result into the dictionary as a new value.
`defaultdict` changes things a bit in that keys that don't yet exist in the dictionary are treated as if they did exist. The argument to its constructor is a "factory" object which will be called to create the new values when an unknown key is requested.
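A tiny demonstration of that behavior; here `int` is the factory, so missing keys start at `0`:
```
from collections import defaultdict

d = defaultdict(int)
d['m'] += 1      # no KeyError: d['m'] is created as 0, then incremented
print(d['m'])    # 1
print(d['x'])    # 0 -- even a plain lookup inserts the default value
```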
|
Here you go
```
d[key]
Return the item of d with key key. Raises a KeyError if key is not in the map.
```
Straight from the [Python docs](https://docs.python.org/2/library/stdtypes.html), under mapping types.
Go to <https://docs.python.org/> and bookmark it. It will become your best friend.
| 9,997
|
45,471,477
|
I need the sacred package for a new code base I downloaded. It requires sacred.
<https://pypi.python.org/pypi/sacred>
`conda install sacred` fails with:
```
PackageNotFoundError: Package missing in current osx-64 channels:
  - sacred
```
The instruction on the package site only explains how to install with pip. What do you do in this case?
|
2017/08/02
|
[
"https://Stackoverflow.com/questions/45471477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058511/"
] |
That package is not available as a conda package at all. You can search for packages on anaconda.org: <https://anaconda.org/search?q=sacred> You can see the type of package in the 4th column. Other Python packages may be available as conda packages, for instance, NumPy: <https://anaconda.org/search?q=numpy>
As you can see, the conda package numpy is available from a number of different channels (the channel is the name before the slash). If you wanted to install a package from a different channel, you can add the option to the install/create command with the `-c`/`--channel` option, or you can add the channel to your configuration `conda config --add channels channel-name`.
If no conda package exists for a Python package, you can either install via pip (if available) or [build your own conda package](https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/building-conda-packages.html). This isn't usually too difficult to do for pure Python packages, especially if one can use `skeleton` to build a recipe from a package on PyPI.
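For example, a rough sketch of that workflow with `conda-build` installed (using the package from the question; the generated recipe may still need tweaks):
```
conda skeleton pypi sacred   # generate a conda recipe from the PyPI package
conda build sacred           # build a conda package from that recipe
```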
|
A similar issue happened to me before. If your system's default Python environment is Conda, you could download the files from <https://pypi.python.org/pypi/sacred#downloads>
and manually install with
```
pip install C:/Desktop/some-file.whl
```
| 9,998
|
60,699,328
|
I'm getting the following error when trying to install pychalk:
```
pip install pychalk --user
```
```
FileNotFoundError: [Errno 2] No such file or directory: 'c:\\users\\jokzc\\appdata\\roaming\\python\\python38\\site-packages\\MarkupSafe-1.1.1.dist-info\\METADATA'
```
|
2020/03/16
|
[
"https://Stackoverflow.com/questions/60699328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12868860/"
] |
You are using Python 3.8.
>
> ...python\\python38\\site-packages\...
>
>
>
The [last release for pychalk](https://pypi.org/project/pychalk/#history) was 2018, before the [releases for Python3.8](https://www.python.org/dev/peps/pep-0569/#schedule) came out.
The package is probably not updated yet to support Python 3.8.
You will have to downgrade your Python version.
From the repo's README on [**Testing**](https://github.com/anthonyalmarza/chalk#testing), it seems they only tested on 2.7, 3.5, 3.6.
I tried it and it seems it can also install successfully on Python 3.7.
So...
* Use Python3.7
* [Report the issue](https://github.com/anthonyalmarza/chalk/issues) to the package authors so that they can support Python 3.8
|
You just need to delete the opencv-python folder from the path (**C:\Users\rahul\AppData\Local\Programs\Python\Python37\Lib\site-packages\opencv\_python-3.4.2.17.dist-info**) and then install that library again.
You can see the pictures below: the first shows the error messages,
and the second shows no errors.
[first image](https://i.stack.imgur.com/PDhV3.png)
[second image](https://i.stack.imgur.com/SZe18.png)
| 9,999
|
52,425,065
|
How can I use Python 3.7 in my command line?
My Python interpreter is active on Python 3.7 and my python.pythonPath is `/usr/local/bin/python3`, though the Python version in my command line is 2.7. I've already installed Python 3.7 on my Mac.
```
lorenzs-mbp:Python lorenzkort$ python --version
Python 2.7.13
```
|
2018/09/20
|
[
"https://Stackoverflow.com/questions/52425065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7866979/"
] |
If you open the "Terminal" tab in your VSCode, your default `python` is called (in your case, Python 2).
If you want to use Python 3.7, try the following command:
```
python3 --version
```
Or just type the full path to the python folder:
```
/usr/local/bin/python3 yourscript.py
```
In case you want to change the default Python interpreter, follow this accepted answer: [How to set Python's default version to 3.x on OS X?](https://stackoverflow.com/questions/18425379/how-to-set-pythons-default-version-to-3-x-on-os-x)
|
You should use `python3` instead of `python` for any commands you want to run; then it will use the Python 3 interpreter.
Also, if you are using `pip`, use `pip3`.
Hope this helps.
| 10,000
|
44,628,186
|
I encounter many tasks in which I need to filter a Python (2.7) list to keep only ordered unique values. My usual approach is to use `OrderedDict` from collections:
```
from collections import OrderedDict
ls = [1,2,3,4,1,23,4,12,3,41]
ls = OrderedDict(zip(ls,['']*len(ls))).keys()
print ls
```
the output is:
>
> [1, 2, 3, 4, 23, 12, 41]
>
>
>
Is there any other state-of-the-art method to do this in Python?
* Note - the input and the output should be given as `list`
**edit** - a comparison of the methods can be found here:
<https://www.peterbe.com/plog/uniqifiers-benchmark>
the best solution meanwhile is:
```
def get_unique(seq):
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
```
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3861108/"
] |
If you need to preserve the order **and** get rid of the duplicates, you can do it like:
```
ls = [1, 2, 3, 4, 1, 23, 4, 12, 3, 41]
lookup = set() # a temporary lookup set
ls = [x for x in ls if x not in lookup and lookup.add(x) is None]
# [1, 2, 3, 4, 23, 12, 41]
```
This should be **considerably** faster than your approach.
|
You could use a [set](https://docs.python.org/2/library/stdtypes.html#set) like this:
```
newls = []
seen = set()
for elem in ls:
    if elem not in seen:
newls.append(elem)
seen.add(elem)
```
| 10,002
|
38,521,553
|
I have two files representing records with intervals.
file1.txt
```none
a 5 10
a 13 19
a 27 39
b 4 9
b 15 19
c 20 33
c 39 45
```
and
file2.txt
```none
something id1 a 4 9 commentx
something id2 a 14 18 commenty
something id3 a 1 4 commentz
something id5 b 3 9 commentbla
something id6 b 16 18 commentbla
something id7 b 25 29 commentblabla
something id8 c 5 59 hihi
something id9 c 40 45 hoho
something id10 c 32 43 haha
```
What I would like to do is produce a file containing only the records of file2 for which, when column 3 of file2 is identical to column 1 of file1, the range (columns 4 and 5) does not overlap with any range from file1 (columns 2 and 3).
The expected output file should be in a file
test.result
```none
something id3 a 1 4 commentz
something id7 b 25 29 commentblabla
```
I have tried to use the following python code:
```
import csv
with open ('file2') as protein, open('file1') as position, open ('test.result',"r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if rowinprot[4]<rowinpos[1] or rowinprot[3]>rowinpos[2]:
writer.writerow(rowinprot)
```
This did not seem to work...I had the following result:
```none
something id1 a 4 9 commentx
something id1 a 4 9 commentx
something id1 a 4 9 commentx
```
which apparently is not what I want.
What did I do wrong? It seems to be in the conditional loops. Still, I couldn't figure it out...
|
2016/07/22
|
[
"https://Stackoverflow.com/questions/38521553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613786/"
] |
A loop inside a loop is not a good way to do this; try to avoid such things.
You could cache the content of file1 first using a dict object. Then, when looping over file2, you can use the dict to find what you need. So I would code it like below:
```
with open("file1.csv", "r") as protein, open("file2.csv", "r") as postion, open("result.csv", "w") as fallout:
writer = csv.writer(fallout, delimiter=' ')
protein_dict = {}
for rowinprt in csv.reader(protein, delimiter=' '):
key = rowinprt[0]
sub_value = (int(rowinprt[1]), int(rowinprt[2]))
protein_dict.setdefault(key, [])
protein_dict[key].append(sub_value)
for pos in csv.reader(postion, delimiter=' '):
id_key = pos[2]
id_min = int(pos[3])
id_max = int(pos[4])
if protein_dict.has_key(id_key) and all([ id_max < _min or _max < id_min for _min, _max in protein_dict[id_key]]):
writer.writerow(pos)
```
|
For your code to work, you will have to change three things:
1. Your loop over the position reader is only run once, because the reader consumes the file the first time the loop runs; afterwards, it looks for the next item at the end of the file. You'll have to rewind the file each time before the loop runs. See e.g. ([Python csv.reader: How do I return to the top of the file?](https://stackoverflow.com/questions/431752/python-csv-reader-how-do-i-return-to-the-top-of-the-file))
2. The csv reader returns string objects. If you compare them via smaller/larger signs, it compares them as if they were words. So it checks first digit vs first digit, and so on. To get the comparison right, you'll have to cast to integer first.
3. Lastly, your program now would return a line out of the protein file every time it does not overlap with a line in the position file that has the same identifier. So for example the first protein line would be printed twice for the second and third position line. As I understand, this is not what you want, so you'll have to check ALL position lines to be valid, before printing.
Generally, the code is probably not the most elegant way to do it but to get correct results with something close to what you wrote, you might want to try something along the lines of:
```
import csv
with open('file2.txt') as protein, open('file1.txt') as position, open('test.result', "r+") as fallout:
writer = csv.writer(fallout, delimiter=' ')
for rowinprot in csv.reader(protein, delimiter=' '):
position.seek(0)
valid = True
for rowinpos in csv.reader(position, delimiter=' '):
if rowinprot[2]==rowinpos[0]:
if not (int(rowinprot[4])<int(rowinpos[1]) or int(rowinprot[3])>int(rowinpos[2])):
valid = False
if valid:
writer.writerow(rowinprot)
```
| 10,007
|
65,891,227
|
This is probably a pretty dumb question...
I was fooling around in Python and decided to make two versions of a prime-number app: one that indefinitely counts up all prime numbers from the start until your PC fries, and one that lets the user input a number and then checks whether it is prime.
So I made one that worked fine, but it was quite slow: it took 41 seconds to check whether 269996535 is prime. I guess my algorithm is pretty bad, but that's not my concern! I then read up on multiprocessing (multithreading) and decided to give it a go. After some minutes of tinkering I got it to work on smaller numbers, and it was fast, so I decided to compare with the previous big number (269996535).
I looked at my resource monitor and the CPU spiked, then the music stopped, all my programs started to crash one by one, and finally: bluescreen.
Can someone explain why this happened and why I can't get this to work? I'm not looking for a better algorithm; I'm simply trying to figure out why my PC crashed when trying to multiprocess.
code that works
```
import time
def prime(x):
start = time.time()
num = x+1
range_num = []
two = x/2
five = x/5
ten = x/10
if not two.is_integer() or five.is_integer() or ten.is_integer():
for i in range(1, num):
y = x/i
if y.is_integer():
range_num.append(True)
else:
range_num.append(False)
total = 0
for ele in range(0, len(range_num)):
total = total + range_num[ele]
if num == 1:
print(1, " is a prime number")
elif total == 2:
print(num-1, " is a prime number")
else:
print(num-1, " is not a prime number")
else:
print(num - 1, " is not a prime number")
print("This took ", round((time.time() - start), 2), "Seconds to complete")
prime(269996535)
```
code that fried my pc
```
import time
import multiprocessing
global range_num
range_num = []
def part_one(x, denom):
if not denom == 0:
for i in range(1, x):
y = x/denom
if y.is_integer():
range_num.append(True)
else:
range_num.append(False)
if __name__ == '__main__':
start = time.time()
x = int(input("Enter number: "))
num = x+1
range_num = []
if num-1 == 1:
print("1 is a prime number")
for i in range(0, x):
p = multiprocessing.Process(target=part_one, args=(x, i,))
p.start()
for process in range_num:
process.join()
total = 0
for ele in range(0, len(range_num)):
total = total + range_num[ele]
if total == 2:
print(num-1, " is a prime number")
else:
print(num - 1, " is not a prime number")
```
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65891227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15040876/"
] |
You need to fix the `scripts/start.js` file.
There is a line in this file:
```
const compiler = createCompiler(webpack, config, appName, urls, useYarn);
```
You need to change it to
```
const compiler = createCompiler({ webpack, config, appName, urls, useYarn });
```
This change will fix your current error but probably you'll get the next one :)
|
Try updating your `node` part for webpack 5 as well. What was
```
node: {
dgram: 'empty',
fs: 'empty',
net: 'empty',
tls: 'empty',
child_process: 'empty',
}
```
has to be under `resolve.fallback`.
Source: <https://webpack.js.org/migrate/5/#clean-up-configuration>
| 10,012
|
12,243,891
|
How do I achieve lines 9 and 10, i.e. the two `for` loops, in Python?
```
matrix = [
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
]
newMatrix = []
for (i=0; i < len(matrix); i++):
for (j=0; j < len(matrix[i]); j++):
newMatrix[j][i] = matrix[i][j]
print newMatrix
```
PS: I know I can do `[[row[i] for row in matrix] for i in range(4)]`, but how do I do it using only `for` loops?
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12243891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/406659/"
] |
Use `range` (or `xrange`).
```
for i in range(len(matrix)):
for j in range(len(matrix[i])):
```
FYI, assigning to `newMatrix[i][j]` will fail, because `newMatrix` is an empty `list`. You need to add a new empty `list` to `newMatrix` for every row and then append the new value to that list in every iteration.
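A minimal sketch of that approach (Python 2, to match the question; each new row is appended before it is filled):
```
newMatrix = []
for i in range(len(matrix[0])):       # one result row per column of matrix
    newMatrix.append([])              # add a fresh empty row first
    for j in range(len(matrix)):
        newMatrix[i].append(matrix[j][i])
print newMatrix   # [[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```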
|
You can use the [`enumerate()` function](http://docs.python.org/library/functions.html#enumerate) to loop over both the matrix values and give you indices:
```
newMatrix = [[0] * len(matrix) for _ in xrange(len(matrix[0]))]
for i, row in enumerate(matrix):
for j, value in enumerate(row):
newMatrix[j][i] = value
```
This outputs:
```
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```
Because you are addressing rows and columns in the new matrix directly, you need to have initialized that new matrix with empty values first. The list comprehension on the first line of my example does this for you. It creates a new matrix that is (*y*, *x*) in size, given an input matrix of size (*x*, *y*).
Alternatively, you can avoid explicit looping altogether by (ab)using the [`zip` function](http://docs.python.org/library/functions.html#zip):
```
newMatrix = zip(*matrix)
```
which takes each row of `matrix` and groups each column in those rows into new rows.
In the generic case, python `for` constructs loop over sequences. The [`range()`](http://docs.python.org/library/functions.html#range) and [`xrange()`](http://docs.python.org/library/functions.html#xrange) functions can produce numerical sequences for you, these generate a number sequence in much the same way that a `for` loop in C or JavaScript would produce:
```
>>> for i in range(5):
>>> print i,
0 1 2 3 4
```
but the construct is far more powerful than the C-style `for` construct. In most such loops your goal is to provide indices into some sequence, the python construct bypasses the indices and goes straight to the values in the sequence instead.
| 10,014
|
1,478,178
|
Usually, the best practice in Python when using international languages is to use Unicode: convert any input to Unicode early, and convert to an encoded string (UTF-8 most of the time) late.
But when I need to do a regex on Unicode I don't find the process really friendly. For example, if I need to find the 'é' character followed by one or more spaces, I have to write (note: my shell and Python file are set to UTF-8):
```
re.match('(?u)\xe9\s+', unicode)
```
So I have to write the Unicode code point of 'é'. That's not really convenient, and if I need to build the regex from a variable, things start to get ugly. Example:
```
word_to_match = 'Élisa™'.decode('utf-8') # that returns a unicode object
regex = '(?u)%s\s+' % word_to_match
re.match(regex, unicode)
```
And this is a simple example. So if you have a lot of regexes to run one after another with special characters in them, I find it easier and more natural to do the regex on a string encoded in UTF-8. Example:
```
re.match('Élisa\s+', string)
re.match('Geneviève\s+', string)
re.match('DrØshtit\s+', string)
```
Is there something I'm missing? What are the drawbacks of the UTF-8 approach?
UPDATE
------
Ok, I found the problem. I was doing my tests in IPython, but unfortunately it seems to mess up the encoding. Example:
In the python shell
```
>>> string_utf8 = 'Test « with theses » quotes Éléments'
>>> string_utf8
'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
>>> print string_utf8
Test « with theses » quotes Éléments
>>>
>>> unicode_string = u'Test « with theses » quotes Éléments'
>>> unicode_string
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_string
Test « with theses » quotes Éléments
>>>
>>> unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
>>> unicode_decoded_from_utf8
u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
>>> print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
In ipython
```
In [1]: string_utf8 = 'Test « with theses » quotes Éléments'
In [2]: string_utf8
Out[2]: 'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [3]: print string_utf8
Test « with theses » quotes Éléments
In [4]: unicode_string = u'Test « with theses » quotes Éléments'
In [5]: unicode_string
Out[5]: u'Test \xc2\xab with theses \xc2\xbb quotes \xc3\x89l\xc3\xa9ments'
In [6]: print unicode_string
Test « with theses » quotes Ãléments
In [7]: unicode_decoded_from_utf8 = string_utf8.decode('utf-8')
In [8]: unicode_decoded_from_utf8
Out[8]: u'Test \xab with theses \xbb quotes \xc9l\xe9ments'
In [9]: print unicode_decoded_from_utf8
Test « with theses » quotes Éléments
```
As you can see, IPython is messing with the encoding when using the u'' notation. That was the source of my problems. The bug is mentioned here: <https://bugs.launchpad.net/ipython/+bug/339642>
|
2009/09/25
|
[
"https://Stackoverflow.com/questions/1478178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146481/"
] |
I haven't used a JDBC-ODBC bridge in quite a few years, but I would expect that the ODBC driver's fetch size would be controlling. What ODBC driver are you using and what version of the ODBC driver are you using? What is the fetch size specified in the DSN?
As a separate issue, I would seriously question your decision to use a JDBC-ODBC bridge in this day and age. A JDBC driver can use a TNS alias rather than explicitly specifying a host and port, which is no harder to configure than an ODBC DSN. And getting rid of the requirement to install, configure, and maintain an ODBC driver and DSN vastly improves performance and maintainability.
|
Use JDBC. There is no advantage in using the ODBC bridge. In `PreparedStatement` or `CallableStatement` you can call `setFetchSize()` to restrict the rowset size.
| 10,017
|
59,357,940
|
The code below uses the OpenCV module to identify lanes on a road. I use Python 3.6 and the Atom IDE for development.
The code runs fine with the given sample video, but when I run it on another video it throws the following error:
```
(base) D:\Self-Driving course\finding-lanes>RayanFindingLanes.py
C:\Users\Tarun\Anaconda3\lib\site-packages\numpy\lib\function_base.py:392: RuntimeWarning: Mean of empty slice.
avg = a.mean(axis)
C:\Users\Tarun\Anaconda3\lib\site-packages\numpy\core\_methods.py:85: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 81, in <module>
averaged_lines = average_slope_intercept(frame, lines)
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 51, in average_slope_intercept
right_line = make_points(image, right_fit_average)
File "D:\Self-Driving course\finding-lanes\RayanFindinglanes.py", line 56, in make_points
slope, intercept = line
TypeError: cannot unpack non-iterable numpy.float64 object
```
What does the error mean and how to solve it?
code:
```
import cv2
import numpy as np
def canny(img):
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
kernel = 5
blur = cv2.GaussianBlur(gray,(kernel, kernel),0)
canny = cv2.Canny(blur, 50, 150)
return canny
def region_of_interest(canny):
height = canny.shape[0]
width = canny.shape[1]
mask = np.zeros_like(canny)
triangle = np.array([[
(200, height),
(550, 250),
(1100, height),]], np.int32)
cv2.fillPoly(mask, triangle, 255)
masked_image = cv2.bitwise_and(canny, mask)
return masked_image
def display_lines(img,lines):
line_image = np.zeros_like(img)
if lines is not None:
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),10)
return line_image
def average_slope_intercept(image, lines):
left_fit = []
right_fit = []
if lines is None:
return None
for line in lines:
for x1, y1, x2, y2 in line:
fit = np.polyfit((x1,x2), (y1,y2), 1)
slope = fit[0]
intercept = fit[1]
if slope < 0: # y is reversed in image
left_fit.append((slope, intercept))
else:
right_fit.append((slope, intercept))
# add more weight to longer lines
left_fit_average = np.average(left_fit, axis=0)
right_fit_average = np.average(right_fit, axis=0)
left_line = make_points(image, left_fit_average)
right_line = make_points(image, right_fit_average)
averaged_lines = [left_line, right_line]
return averaged_lines
def make_points(image, line):
slope, intercept = line
y1 = int(image.shape[0])# bottom of the image
y2 = int(y1*3/5) # slightly lower than the middle
x1 = int((y1 - intercept)/slope)
x2 = int((y2 - intercept)/slope)
return [[x1, y1, x2, y2]]
cap = cv2.VideoCapture("test3.mp4")
while(cap.isOpened()):
_, frame = cap.read()
canny_image = canny(frame)
cropped_canny = region_of_interest(canny_image)
lines = cv2.HoughLinesP(cropped_canny, 2, np.pi/180, 100, np.array([]), minLineLength=40,maxLineGap=5)
averaged_lines = average_slope_intercept(frame, lines)
line_image = display_lines(frame, averaged_lines)
combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
cv2.imshow("result", combo_image)
if cv2.waitKey(1) == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
```
|
2019/12/16
|
[
"https://Stackoverflow.com/questions/59357940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12181181/"
] |
In some frames, all the slopes are > 0, so the `left_fit` list is empty. Because of that, you get an error when calculating the `left_fit` average. One way to solve this problem is to use the `left_fit` average from the previous frame. I have solved it using that approach. Please see the code below and let me know if it solves your issue.
```
global_left_fit_average = []
global_right_fit_average = []
def average_slope_intercept(image, lines):
left_fit = []
right_fit = []
global global_left_fit_average
global global_right_fit_average
if lines is not None:
for line in lines:
x1, y1, x2, y2 = line.reshape(4)
parameters = np.polyfit((x1, x2), (y1,y2), 1)
slope = parameters[0]
intercept = parameters[1]
if (slope < 0):
left_fit.append((slope, intercept))
else:
right_fit.append((slope, intercept))
if (len(left_fit) == 0):
left_fit_average = global_left_fit_average
else:
left_fit_average = np.average(left_fit, axis=0)
global_left_fit_average = left_fit_average
right_fit_average = np.average(right_fit, axis=0)
global_right_fit_average = right_fit_average
    left_line = make_points(image, left_fit_average)
    right_line = make_points(image, right_fit_average)
return np.array([left_line, right_line])
```
|
`HoughLinesP` returns a list and that can be an empty list and not necessarily `None`
So the lines in the function `average_slope_intercept`
```
if lines is None:
return None
```
is not of much use; you also need to check `len(lines) == 0`.
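A minimal sketch of that guard at the top of `average_slope_intercept`:
```
if lines is None or len(lines) == 0:
    return None
```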
| 10,019
|
55,554,816
|
In Python 2.7, I have this list:
```
['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q']
```
and I need to transform it into a dict like:
```
{
'A':['B','C'],
'B':['D','E'],
'C':['F','G'],
'D':['H','I'],
'E':['J','K'],
'F':['L','M'],
'G':['N','O'],
'H':['P','Q'],
'I':[],
'J':[],
'K':[],
'L':[],
'M':[],
'N':[],
'O':[],
'P':[],
'Q':[]
}
```
|
2019/04/07
|
[
"https://Stackoverflow.com/questions/55554816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11322996/"
] |
With `zip()` and `itertools.izip_longest()` you can do it like this:
### Code:
```
import itertools as it
in_data = list('ABCDEFGHIJKLMNOPQ')
out_data = {k: list(v) if v else [] for k, v in
it.izip_longest(in_data, zip(in_data[1::2], in_data[2::2]))}
import json
print(json.dumps(out_data, indent=2))
```
### Results:
```
{
"A": [
"B",
"C"
],
"C": [
"F",
"G"
],
"B": [
"D",
"E"
],
"E": [
"J",
"K"
],
"D": [
"H",
"I"
],
"G": [
"N",
"O"
],
"F": [
"L",
"M"
],
"I": [],
"H": [
"P",
"Q"
],
"K": [],
"J": [],
"M": [],
"L": [],
"O": [],
"N": [],
"Q": [],
"P": []
}
```
|
You can use `itertools`:
```
from itertools import chain, repeat
data = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q']
lists = [[first, second] for first, second in zip(data[1::2], data[2::2])]
result = {char: list(value) for char, value in zip(data, chain(lists, repeat([])))}
result
```
Output:
```
{'A': ['B', 'C'],
'B': ['D', 'E'],
'C': ['F', 'G'],
'D': ['H', 'I'],
'E': ['J', 'K'],
'F': ['L', 'M'],
'G': ['N', 'O'],
'H': ['P', 'Q'],
'I': [],
'J': [],
'K': [],
'L': [],
'M': [],
'N': [],
'O': [],
'P': [],
'Q': []}
```
| 10,020
|
51,961,416
|
I am trying to copy lines from text files inside zip folders by matching a partial string;
the zip folders are in a shared folder.
Is there a way to copy the matching strings from the text files and send them to one output text file,
and how do I do it using Python?
Is it possible with `zip_archive`?
I tried using this, with no luck:
```
zf = zipfile.ZipFile('C:/Users/Analytics Vidhya/Desktop/test.zip')
# having First.csv zipped file.
df = pd.read_csv(zf.open('First.csv'))
```
|
2018/08/22
|
[
"https://Stackoverflow.com/questions/51961416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10258075/"
] |
Unlike *@strava*'s answer, you don't actually have to extract anything: `zipfile` gives you an excellent [API](https://docs.python.org/3/library/zipfile.html) for manipulating files. Here is a simple example of reading each file inside a simple zip (I zipped only one `.txt` file):
```
import zipfile
zip_path = r'C:\Users\avi_na\Desktop\a.zip'
needle = '2'
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
    print(file_name, zf.read(file_name))
    if needle in str(zf.read(file_name)):
        print('found substring!!')
```
output:
```
a.txt b'1\r\n2\r\n3\r\n'
found substring!!
```
Using this example you can easily elaborate: read each of the files, search the text for your string, and write matches to an output file.
For more info, check the `printdir`, `read`, `write`, `open`, and `close` members of `zipfile.ZipFile`.
If you just want to extract and then use `pd.read_csv`, that's also fine:
```
import zipfile
zip_path = r'...\a.zip'
unzip_dir = "unzip_dir"
zf = zipfile.ZipFile(zip_path)
for file_name in zf.namelist():
if '.csv' in file_name: # make sure file is .csv
zf.extract(file_name, unzip_dir)
df = pd.read_csv(open("{}/{}".format(unzip_dir,file_name)))
```
|
You could try extracting them first, and then treating them as normal CSV files:
```
zf = zipfile.ZipFile(path_to_zip)           # path_to_zip: your zip's path
zf.extract('first.csv', path_to_directory)  # path_to_directory: where to extract
file = open('path/first.csv')               # then open it as a normal file
```
| 10,029
|
69,979,362
|
I have a csv that I need to convert to XML using Python. I'm a novice python dev.
Example CSV data:
```
Amount,Code
CODE50,1246
CODE50,6290
CODE25,1077
CODE25,9790
CODE100,5319
CODE100,4988
```
Necessary output XML
```
<coupon-codes coupon-id="CODE50">
<code>1246</code>
<code>1246</code>
<coupon-codes/>
<coupon-codes coupon-id="CODE25">
<code>1077</code>
<code>9790</code>
<coupon-codes/>
<coupon-codes coupon-id="CODE100">
<code>5319</code>
<code>4988</code>
<coupon-codes/>
```
My guess is I have to use pandas to pull the CSV in, use `pandas.groupby` to group by the `Amount` column, then push this into an element/subelement to create the XML, then print/push to an XML file. I can't get the `groupby` to work and don't know how to then push that into the element and populate the sub-element.
|
2021/11/15
|
[
"https://Stackoverflow.com/questions/69979362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9803155/"
] |
Here's how you do it:
```
Expanded(
child: CustomPaint(
painter: MyCustomPainter(),
size: Size.infinite,
),
),
```
|
You can get the height and width of the screen by using `MediaQuery` and use them:
```
final width = MediaQuery.of(context).size.width;
final height = MediaQuery.of(context).size.height;
return CustomPaint(
size: Size(width,height),
painter: Sky(),
child: const Center(
child: Text(
'Once upon a time...',
style: TextStyle(
fontSize: 40.0,
fontWeight: FontWeight.w900,
color: Color(0xFFFFFFFF),
),
),
),
);
```
| 10,031
|
1,984,871
|
I've been a Perl guy for over 10 years but a friend convinced me to try Python and told me how much faster it is than Perl. So just for kicks I ported an app I wrote in Perl to Python and found that it runs about 3x slower. Initially my friend told me that I must have done it wrong, so I rewrote and refactored until I could rewrite and refactor no more and ... it's still a lot slower. So I did a simple test:
```
i = 0
j = 0
while (i < 100000000):
i = i + 1
j = j + 1
print j
```
> $ time python python.py
> 100000000
>
> real 0m48.100s
> user 0m45.633s
> sys 0m0.043s
```
my $i = 0;
my $j = 0;
while ($i < 100000000) {
++$i; # also tested $i = $i + 1 to be fair, same result
++$j;
}
print $j;
```
> $ time perl perl.pl
> 100000000
>
> real 0m24.757s
> user 0m22.341s
> sys 0m0.029s
Just under twice as slow, which doesn't seem to reflect any of the benchmarks I've seen ... is there a problem with my installation or *is Python really that much slower than Perl*?
|
2009/12/31
|
[
"https://Stackoverflow.com/questions/1984871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241470/"
] |
Python is slower than Perl. It may be faster to develop in, but it doesn't execute faster. Here is one benchmark: <http://xodian.net/serendipity/index.php?/archives/27-Benchmark-PHP-vs.-Python-vs.-Perl-vs.-Ruby.html>. Edit: a terrible benchmark, but it is at least a real benchmark with numbers and not some guess. Too bad there's no source or test other than a loop.
|
I'm not up-to-date on everything with Python, but my first idea about this benchmark was the difference between Perl and Python numbers. In Perl, we have numbers. They aren't objects, and their precision is limited to the sizes imposed by the architecture. In Python, we have objects with arbitrary precision. For small numbers (those that fit in 32-bit), I'd expect Perl to be faster. If we go over the integer size of the architecture, the Perl script won't even work without some modification.
I see similar results for the original benchmark on my MacBook Air (32-bit) using a Perl 5.10.1 that I compiled myself and the Python 2.5.1 that came with Leopard.
However, I added arbitrary precision to the Perl program with the [bignum](http://perldoc.perl.org/bignum.html) pragma:
```
use bignum;
```
Now I wonder if the Perl version is ever going to finish. :) I'll post some results when it finishes, but it looks like it's going to be an order of magnitude difference.
Some of you may have seen my question about [What are five things you hate about your favorite language?](https://stackoverflow.com/questions/282329/what-are-five-things-you-hate-about-your-favorite-language). Perl's default numbers is one of the things that I hate. I should never have to think about it and it shouldn't be slow. In Perl, I lose on both. Note, however, that if I needed numeric processing in Perl, I could use [PDL](http://pdl.perl.org).
| 10,032
|
25,100,309
|
I have a memory constraint of 4GB RAM. I need to have 2.5 GB of data in RAM in order to perform further operations:
```
import numpy
a = numpy.random.rand(1000000,100) ## This needs to be in memory
b = numpy.random.rand(1,100)
c = a-b # this needs to be in memory in order to perform the next operation
d = numpy.linalg.norm(numpy.asarray(c, dtype = numpy.float32), axis=1)
```
While creating `c`, memory usage explodes and Python gets killed. Is there a way to speed this process up? I am performing this on EC2 Ubuntu with 4GB RAM and a single core. When I perform the same calculation on my Mac OS X, it gets done easily without any memory problem and takes less time. Why is this happening?
One solution that I can think of is
```
d =[numpy.sqrt(numpy.dot(i-b,i-b)) for i in a]
```
which I don't think will be good for speed.
|
2014/08/02
|
[
"https://Stackoverflow.com/questions/25100309",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3413239/"
] |
If the creation of `a` doesn't cause a memory problem, and you don't need to preserve the values in `a`, you could compute `c` by modifying `a` in place:
```
a -= b # Now use `a` instead of `c`.
```
Otherwise, the idea of working in smaller chunks or batches is a good one. With your list comprehension solution, you are, in effect, computing `d` from `a` and `b` in batch sizes of one row of `a`. You can improve the efficiency by using a larger batch size. Here's an example; it includes your code (with some cosmetic changes) and a version that computes the result (called `d2`) in batches of `batch_size` rows of `a`.
```
import numpy as np
#n = 1000000
n = 1000
a = np.random.rand(n,100) ## This needs to be in memory
b = np.random.rand(1,100)
c = a-b # this need to be in code in order to perform next operation
d = np.linalg.norm(np.asarray(c), axis=1)
batch_size = 300
# Preallocate the result.
d2 = np.empty(n)
for start in range(0, n, batch_size):
end = min(start + batch_size, n)
c2 = a[start:end] - b
d2[start:end] = np.linalg.norm(c2, axis=1)
```
|
If memory is the reason your code is slowing down and crashing, why not just use a generator instead of a list comprehension?
```
d = (numpy.sqrt(numpy.dot(i-b, i-b)) for i in a)
```
A generator essentially provides the steps to get the next object from an iterator. In other words, no operation is performed and no data is stored until you call the `next()` method on the generator's iterator. The reason I'm stressing this is that I don't want you to think that when you call `d = (numpy.sqrt(numpy.dot(i-b,i-b)) for i in a)` it does all the computations; rather, it stores the instructions.
| 10,042
|
72,177,222
|
Description.
------------
While trying to use pre-commit hooks, I am experiencing some difficulties, including [the latest Lava-nc release by Intel in `.tar.gz` format](https://github.com/lava-nc/lava/releases/download/v0.3.0/lava-nc-0.3.0.tar.gz) `pip` package in a Conda environment.
MWE
---
The following Conda `environment.yaml` file is used:
```
# This file is to automatically configure your environment. It allows you to
# run the code with a single command without having to install anything
# (extra).
# First run:: conda env create --file environment.yml
# If you change this file, run: conda env update --file environment.yml
# Instructions for this networkx-to-lava-nc repository only. First time usage
# On Ubuntu (this is needed for lava-nc):
# sudo apt upgrade
# sudo apt full-upgrade
# yes | sudo apt install gcc
# Conda configuration settings. (Specify which modules/packages are installed.)
name: networkx-to-lava
channels:
- conda-forge
- conda
dependencies:
- anaconda
- conda:
# Run python tests.
- pytest-cov
- pip
- pip:
# Run pip install on .tar.gz file in GitHub repository (For lava-nc only).
- https://github.com/lava-nc/lava/releases/download/v0.3.0/lava-nc-0.3.0.tar.gz
# Auto check static typing.
- mypy
# Auto check programming style aspects.
- pylint
```
If I include the following MWE `.pre-commit-config.yaml`:
```
repos:
# Test if the variable typing is correct
# - repo: https://github.com/python/mypy
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.950
hooks:
- id: mypy
```
Output/Error Message
--------------------
Both `git commit "something"` and `pre-commit run --all-files` return:
```
[INFO] Installing environment for https://github.com/pre-commit/mirrors-mypy.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
Check the log at /home/name/.cache/pre-commit/pre-commit.log
```
Error Log.
----------
The output of `pre-commit.log` is:
```
### version information
pre-commit version: 2.19.0
git --version: git version 2.30.2
sys.version:
3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]
sys.executable: /home/name/anaconda3/envs/networkx-to-lava/bin/python3.1
os.name: posix
sys.platform: linux
### error information
An unexpected error has occurred: CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
Traceback (most recent call last):
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/error_handler.py", line 73, in error_handler
yield
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/main.py", line 361, in main
return hook_impl(
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/commands/hook_impl.py", line 238, in hook_impl
return retv | run(config, store, ns)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/commands/run.py", line 414, in run
install_hook_envs(to_install, store)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/repository.py", line 223, in install_hook_envs
_hook_install(hook)
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/repository.py", line 79, in _hook_install
lang.install_environment(
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/languages/python.py", line 219, in install_environment
cmd_output_b(*venv_cmd, cwd='/')
File "/home/name/anaconda3/envs/networkx-to-lava/lib/python3.10/site-packages/pre_commit/util.py", line 146, in cmd_output_b
raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)
pre_commit.util.CalledProcessError: command: ('/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1', '-mvirtualenv', '/home/name/.cache/pre-commit/repolk_aet_q/py_env-python3.1', '-p', 'python3.1')
return code: 1
expected return code: 0
stdout:
RuntimeError: failed to find interpreter for Builtin discover of python_spec='python3.1'
stderr: (none)
```
Workaround
----------
This error is evaded if I remove the `/home/name/anaconda3/envs/networkx-to-lava/bin/python3.1` file. However, that breaks automation and seems to be an improper work-around.
Question
--------
How can I ensure creating and activating the Conda environment allows direct running of the pre-commit hook without error (without having to delete the `python3.1` file)?
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72177222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7437143/"
] |
this is unfortunately a known bug in `conda` -- though I'm unfamiliar with the status on it. they're mistakenly shipping a `python3.1` binary as the default executable (it should be called `python3.10` -- they have a [2020](https://github.com/asottile/flake8-2020) problem that's probably a `sys.version[:3]` somewhere)
fortunately, you can instruct pre-commit to ignore its "default" version detection (which is detecting your `python3.1` executable) and give it a specific instruction on which python version to use
there's two approaches to that:
1. [`default_language_version`](https://pre-commit.com/#top_level-default_language_version): as a "default" which applies when `language_version` is not set
```yaml
default_language_version:
python: python3.10 # or python3
# ...
```
2. [`language_version`](https://pre-commit.com/#config-language_version) on the specific hook (to override whatever default value the hook provider has set)
```yaml
# ...
- id: black
language_version: python3.10
```
---
disclaimer: I created pre-commit
|
There was a ticket opened regarding this observation in the pre-commit repository: <https://github.com/pre-commit/pre-commit/issues/1375>
One of the pre-commit contributors suggests using `language_version: python3` for the black config in `.pre-commit-config.yaml`:
```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
        language_version: python3
```
<https://github.com/pre-commit/pre-commit/issues/1375#issuecomment-603906811>
| 10,043
|
29,224,447
|
I have a python function that takes a imported module as a parameter:
```
def printModule(module):
print("That module is named '%s'" % magic(module))
import foo.bar.baz
printModule(foo.bar.baz)
```
What I want is to be able to extract the module name (in this case `foo.bar.baz`) from a passed reference to the module. In the above example, the `magic()` function is a stand-in for the function I want.
`__name__` would normally work, but that requires executing in the context of the passed module, and I'm trying to extract this information from merely a reference to the module.
What is the proper procedure here?
All I can think of is either doing `str(module)` and then some text hacking, or trying to inject a function into the module that I can then call to have it return `__name__`, and neither of those solutions is elegant. (I tried both; neither actually works. Modules can apparently permute their name in the `str()` representation, and injecting a function into the module and then calling it just returns the caller's context.)
|
2015/03/24
|
[
"https://Stackoverflow.com/questions/29224447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/268006/"
] |
The `__name__` attribute seems to work:
```
def magic(m):
return m.__name__
```
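For example:
```
>>> import json
>>> magic(json)
'json'
```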
|
If you have a string with the module name, you can use pkgutil.
```
import pkgutil
pkg = pkgutil.get_loader(module_name)
print pkg.fullname
```
From the module itself,
```
import pkgutil
pkg = pkgutil.get_loader(module.__name__)
print pkg.fullname
```
| 10,044
|
1,519,276
|
I've tried to debug a memory crash in my Python C extension and tried to run the script under Valgrind. I found there is too much "noise" in the Valgrind output, even when I ran a command as simple as:
```
valgrind python -c ""
```
The Valgrind output is full of repeated info like this:
```
==12317== Invalid read of size 4
==12317== at 0x409CF59: PyObject_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x405C7C7: PyGrammar_RemoveAccelerators (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x410A1EC: Py_Finalize (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4114FD1: Py_Main (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x8048591: main (in /usr/bin/python2.5)
==12317== Address 0x43CD010 is 7,016 bytes inside a block of size 8,208 free'd
==12317== at 0x4022F6C: free (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==12317== by 0x4107ACC: PyArena_Free (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41095D7: PyRun_StringFlags (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40DF262: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x4099569: PyCFunction_Call (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E76CC: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E70F3: PyEval_EvalFrameEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E896A: PyEval_EvalCodeEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40E8AC2: PyEval_EvalCode (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FD99C: PyImport_ExecCodeModuleEx (in /usr/lib/libpython2.5.so.1.0)
==12317== by 0x40FFC93: (within /usr/lib/libpython2.5.so.1.0)
==12317== by 0x41002B0: (within /usr/lib/libpython2.5.so.1.0)
```
Python 2.5.2 on Slackware 12.2.
Is this normal behavior? If so, is valgrind perhaps an inappropriate tool for debugging memory errors in Python?
|
2009/10/05
|
[
"https://Stackoverflow.com/questions/1519276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/65736/"
] |
This is quite common, in any largish system. You can use Valgrind's [suppression system](http://valgrind.org/docs/manual/mc-manual.html#mc-manual.suppfiles) to explicitly suppress warnings that you're not interested in.
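For CPython specifically, the source tree ships a ready-made suppression file, `Misc/valgrind-python.supp`; a sketch of using it (the path is an assumption about where your CPython sources live):
```
valgrind --tool=memcheck \
         --suppressions=/path/to/cpython/Misc/valgrind-python.supp \
         python -c ""
```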
|
There is another option I found. James Henstridge has a custom build of Python that detects when it is running under valgrind; in that case the pymalloc allocator is disabled, with `PyObject_Malloc`/`PyObject_Free` passing straight through to the normal malloc/free, which valgrind knows how to track.
Package available here: <https://launchpad.net/~jamesh/+archive/python>
| 10,045
|
32,553,827
|
---I am on Ubuntu---
I am trying to set up django-crispy-forms and am getting the following error:
```
Exception Type: TemplateDoesNotExist
Exception Value:
bootstrap3/layout/buttonholder.html
```
Settings.py template setup
==========================
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {...............
```
Template-loader postmortem
==========================
```
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
/home/schachte/Documents/School/cse360/CSE_360_Project/CSE360_Project/ipcms/ipcms/templates/bootstrap3/layout/buttonholder.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
/usr/local/lib/python2.7/dist-packages/Django-1.8.4-py2.7.egg/django/contrib/admin/templates/bootstrap3/layout/buttonholder.html (File does not exist)
/usr/local/lib/python2.7/dist-packages/Django-1.8.4-py2.7.egg/django/contrib/auth/templates/bootstrap3/layout/buttonholder.html (File does not exist)
/usr/local/lib/python2.7/dist-packages/crispy_forms/templates/bootstrap3/layout/buttonholder.html (File does not exist)
```
It's pretty obvious that the directory it's trying to find is not there, but how can I make crispy forms find the correct bootstrap directory?
|
2015/09/13
|
[
"https://Stackoverflow.com/questions/32553827",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5054204/"
] |
`ButtonHolder` isn't part of the Bootstrap template pack; it's documented as not being included.
See [this issue](https://github.com/maraujop/django-crispy-forms/issues/478) for discussion.
Your best bet is to use `FormActions` instead, as in the sketch below.
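A minimal sketch of the swap (the form and field names here are hypothetical):
```
from django import forms
from crispy_forms.bootstrap import FormActions
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit

class MyForm(forms.Form):
    name = forms.CharField()

    def __init__(self, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            'name',
            # FormActions renders a bootstrap3-friendly button area,
            # replacing the unsupported ButtonHolder.
            FormActions(Submit('save', 'Save')),
        )
```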
|
Where do you currently save your static content? The issue seems to be due to Django's inability to find your static content from Bootstrap.
| 10,055
|
55,910,797
|
Hello, I've started using Python fairly recently. I'm having so much trouble with one segment of my code, which gives me a KeyError when I try to remove an element from my set:
`tiles.remove(m)`
`KeyError: 'B9'`
EDIT: I forgot to mention that the `m` value changes every time I call another function before the for loop. Also, in the `fc` function, if it returns False, the function makes sure to add the tile back into the set with `tiles.add(m)`.
Researching online, it says a KeyError only occurs when the element is not in the set, but... right before I remove the element, I check that it is in the set, and it is.
```
m = findMRV()
if checkEmpty(board, m) == False:
    backtrack(board)
for d in domain[m].copy():
    if checkValid(board, m[0], m[1], d):
        if m in tiles:
            print(str(m) + "HELLO3")
            tiles.remove(m)
            board[m] = d
            if fc(board, m[0], m[1], d) == False:
                continue
```
The checkValid function just returns True or False and doesn't change `m`. I want `m` to be removed from the set that contains only empty tiles, but I keep receiving a KeyError and can't figure out what the issue is or where else it could be coming from.
|
2019/04/29
|
[
"https://Stackoverflow.com/questions/55910797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8817267/"
] |
You have a loop
```
for d in domain[m].copy():
```
where you try to `tiles.remove(m)` on every iteration. After it is removed in the first iteration, the set no longer contains that element, so you get a `KeyError` on the subsequent iterations.
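As a side note, if the intent is "remove it if present, otherwise do nothing", `set.discard` never raises; a small sketch:
```
tiles = {'A1', 'B9'}

tiles.discard('B9')   # removes the element
tiles.discard('B9')   # no-op: discard never raises KeyError
# tiles.remove('B9')  # this, by contrast, would raise KeyError now
```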
|
The `remove` call needs to stay inside the `if m in tiles:` check on every code path; otherwise nothing prevents removing an element that is no longer there.
| 10,056
|
47,870,297
|
I have a tuple whose last element is a list of tuples, looking like this:
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
```
I'd like to iterate over `my_list[3]` and copy the rest so I would get n lists looking like this:
```
(u'code', u'somet text', u'integer', u'1', u'text1')
(u'code', u'somet text', u'integer', u'2', u'text2')
(u'code', u'somet text', u'integer', u'3', u'text3')
(u'code', u'somet text', u'integer', u'4', u'text4')
(u'code', u'somet text', u'integer', u'5', u'text5')
```
I have tried using a for loop, but I end up with this:
```
((u'code', u'somet text', u'integer'), (u'1', u'text1'))
((u'code', u'somet text', u'integer'), (u'2', u'text2'))
((u'code', u'somet text', u'integer'), (u'3', u'text3'))
((u'code', u'somet text', u'integer'), (u'4', u'text4'))
((u'code', u'somet text', u'integer'), (u'5', u'text5'))
```
Also, the code I am using does not feel very pythonic at all, and `my_list[3]` can differ in length.
```
my_list = (u'code', u'somet text', u'integer', [(u'1', u'text1'), (u'2', u'text2'), (u'3', u'text3'), (u'4', u'text4'), (u'5', u'text5')])
my_modified_list = my_list[0:3]
last_part_of_my_list = my_list[3]
for i in last_part_of_my_list:
print (my_modified_list, i)
```
|
2017/12/18
|
[
"https://Stackoverflow.com/questions/47870297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6139149/"
] |
You can achieve this using simple tuple concatenation with `+`:
```
nlists = [my_list[:-1] + tpl for tpl in my_list[-1]]
[(u'code', u'somet text', u'integer', u'1', u'text1'),
(u'code', u'somet text', u'integer', u'2', u'text2'),
(u'code', u'somet text', u'integer', u'3', u'text3'),
(u'code', u'somet text', u'integer', u'4', u'text4'),
(u'code', u'somet text', u'integer', u'5', u'text5')]
```
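On Python 3.5+, extended unpacking expresses the same idea; the `u''` literals suggest you may be on Python 2, so treat this as an alternative sketch:
```
*head, pairs = my_list
nlists = [(*head, *pair) for pair in pairs]
```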
|
Firstly unpack your tuple:
```
a, b, c, d = my_list
```
This, of course, assumes your tuple has exactly 4 elements, or you'll get an exception.
Then iterate over `d`:
```
for d1, d2 in d:
    print('({},{},{},{},{})'.format(a, b, c, d1, d2))
```
| 10,057
|
47,648,133
|
I want to calculate the Mean Absolute Percentage Error (MAPE) of predicted and true values. I found a solution [here](https://stackoverflow.com/questions/42250958/how-to-optimize-mape-code-in-python), but it gives an error, showing invalid syntax at the line `mask = a <> 0`:
```
def mape_vectorized_v2(a, b):
mask = a <> 0
return (np.fabs(a - b)/a)[mask].mean()
def mape_vectorized_v2(a, b):
File "<ipython-input-5-afa5c1162e83>", line 1
def mape_vectorized_v2(a, b):
^
SyntaxError: unexpected EOF while parsing
```
I am using Spyder 3. My predicted value is a numpy array and the true value is a DataFrame:
```
type(predicted)
Out[7]: numpy.ndarray
type(y_test)
Out[8]: pandas.core.frame.DataFrame
```
How do I clear this error and proceed with the MAPE calculation?
Edit:
```
predicted.head()
Out[22]:
Total_kWh
0 7.163627
1 6.584960
2 6.638057
3 7.785487
4 6.994427
y_test.head()
Out[23]:
Total_kWh
79 7.2
148 6.7
143 6.7
189 7.2
17 6.4
np.abs(y_test[['Total_kWh']] - predicted[['Total_kWh']]).head()
Out[24]:
Total_kWh
0 NaN
1 NaN
2 NaN
3 NaN
4 0.094427
```
|
2017/12/05
|
[
"https://Stackoverflow.com/questions/47648133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7972408/"
] |
In Python 3, the not-equal comparison operator is `!=`, not `<>` (which was removed; it only worked in Python 2).
So you need:
```
def mape_vectorized_v2(a, b):
mask = a != 0
return (np.fabs(a - b)/a)[mask].mean()
```
Another solution from [stats.stackexchange](https://stats.stackexchange.com/a/294069):
```
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
```
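As for the NaNs in your edit: pandas aligns on the index when subtracting, and your two objects carry different indexes. A sketch that sidesteps alignment by dropping to plain arrays (this assumes the rows are already in the same order):
```
y_true = y_test['Total_kWh'].values
y_pred = np.asarray(predicted).ravel()
print(mean_absolute_percentage_error(y_true, y_pred))
```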
|
Here is an improved version that is mindful of zero:
```
#Mean Absolute Percentage error
import numpy as np
from sklearn.utils import check_array  # used by _check_reg_targets below
def mape(y_true, y_pred,sample_weight=None,multioutput='uniform_average'):
y_type, y_true, y_pred, multioutput = _check_reg_targets(y_true, y_pred, multioutput)
epsilon = np.finfo(np.float64).eps
mape = np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), epsilon)
output_errors = np.average(mape,weights=sample_weight, axis=0)
if isinstance(multioutput, str):
if multioutput == 'raw_values':
return output_errors
elif multioutput == 'uniform_average':
# pass None as weights to np.average: uniform mean
multioutput = None
return np.average(output_errors, weights=multioutput)
def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
if y_true.ndim == 1:
y_true = y_true.reshape((-1, 1))
if y_pred.ndim == 1:
y_pred = y_pred.reshape((-1, 1))
if y_true.shape[1] != y_pred.shape[1]:
raise ValueError("y_true and y_pred have different number of output "
"({0}!={1})".format(y_true.shape[1], y_pred.shape[1]))
n_outputs = y_true.shape[1]
allowed_multioutput_str = ('raw_values', 'uniform_average',
'variance_weighted')
if isinstance(multioutput, str):
if multioutput not in allowed_multioutput_str:
raise ValueError("Allowed 'multioutput' string values are {}. "
"You provided multioutput={!r}".format(
allowed_multioutput_str,
multioutput))
elif multioutput is not None:
multioutput = check_array(multioutput, ensure_2d=False)
if n_outputs == 1:
raise ValueError("Custom weights are useful only in "
"multi-output cases.")
elif n_outputs != len(multioutput):
raise ValueError(("There must be equally many custom weights "
"(%d) as outputs (%d).") %
(len(multioutput), n_outputs))
y_type = 'continuous' if n_outputs == 1 else 'continuous-multioutput'
return y_type, y_true, y_pred, multioutput
```
| 10,062
|
52,135,293
|
I'm following the `Quickstart` on <https://developers.google.com/drive/api/v3/quickstart/python>. I've enabled the Drive API through the page, loaded the **credentials.json**, and can successfully list files in my Google Drive. However, when I wanted to download a file, I got the message
```
The user has not granted the app ####### read access to the file
```
Do I need to do more than put that scope in my code, or do I need to activate something else?
```
SCOPES = 'https://www.googleapis.com/auth/drive.file'
client.flow_from_clientsecrets('credentials.json', SCOPES)
```
|
2018/09/02
|
[
"https://Stackoverflow.com/questions/52135293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1767754/"
] |
Once you go through the `Quick-Start Tutorial`, the scope is initially given as:
```
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
```
So after listing files, when you decide to download, it won't work: the stored token was generated with the old scope, and changing the scope in code won't recreate the token or re-prompt you for Google authorization, which only happens on the first run.
To force the token to be regenerated, just delete your current token or point the storage at a new file:
```
store = file.Storage('tokenWrite.json')
```
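Equivalently, a sketch that deletes the stale token file so the OAuth flow re-runs with the new scope (the filename is hypothetical; use whatever your `file.Storage(...)` points at):
```
import os

TOKEN_PATH = 'token.json'  # hypothetical: your Storage() file
if os.path.exists(TOKEN_PATH):
    os.remove(TOKEN_PATH)  # next run re-prompts for consent with the new scope
```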
|
I ran into the same error. I authorized the entire scope, then retrieved the file, and used `MediaIoBaseDownload` to stream the data into a file. Note: you'll need to create the file first.
```
from __future__ import print_function
from googleapiclient.discovery import build
import io
from apiclient import http
from google.oauth2 import service_account
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'credentials.json'
FILE_ID = '<file-id>'  # placeholder: the ID of the file to download
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
service = build('drive', 'v3', credentials=credentials)
def download_file(service, file_id, local_fd):
request = service.files().get_media(fileId=file_id)
media_request = http.MediaIoBaseDownload(local_fd, request)
while True:
_, done = media_request.next_chunk()
if done:
print ('Download Complete')
return
file_io_base = open('file.csv','wb')
download_file(service=service,file_id=FILE_ID,local_fd=file_io_base)
```
Hope that helps.
| 10,072
|
48,634,071
|
I'm very new to Python, so please bear with me. I'm having trouble visualizing an Excel/CSV file with seaborn's lmplot. This code:
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df=pd.read_csv("C:/Users/me/Documents/Jupyter Notebooks/Seaborn/Test.csv")
sns.set_style('whitegrid')
sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
```
gives me this error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-41-00ad8b882663> in <module>()
----> 1 sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
~\Anaconda3\lib\site-packages\seaborn\regression.py in lmplot(x, y, data, hue, col, row, palette, col_wrap, size, aspect, markers, sharex, sharey, hue_order, col_order, row_order, legend, legend_out, x_estimator, x_bins, x_ci, scatter, fit_reg, ci, n_boot, units, order, logistic, lowess, robust, logx, x_partial, y_partial, truncate, x_jitter, y_jitter, scatter_kws, line_kws)
550 need_cols = [x, y, hue, col, row, units, x_partial, y_partial]
551 cols = np.unique([a for a in need_cols if a is not None]).tolist()
--> 552 data = data[cols]
553
554 # Initialize the grid
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2131 if isinstance(key, (Series, np.ndarray, Index, list)):
2132 # either boolean or fancy integer index
-> 2133 return self._getitem_array(key)
2134 elif isinstance(key, DataFrame):
2135 return self._getitem_frame(key)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in _getitem_array(self, key)
2175 return self._take(indexer, axis=0, convert=False)
2176 else:
-> 2177 indexer = self.loc._convert_to_indexer(key, axis=1)
2178 return self._take(indexer, axis=1, convert=True)
2179
~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter)
1267 if mask.any():
1268 raise KeyError('{mask} not in index'
-> 1269 .format(mask=objarr[mask]))
1270
1271 return _values_from_object(indexer)
KeyError: '[ 2 23 231 234 242 321] not in index'
```
I figured there is something wrong with the index, but to me (as I said, I'm new to this) the index seems to be set right:
```
df.head()
Asset Paid Organic
0 A 23 2
1 B 231 234
2 C 321 242
```
And when I use an imported dataset, lmplot works just fine:
```
%matplotlib inline
import seaborn as sns
sns.set(color_codes=True)
tips = sns.load_dataset("tips")
g = sns.lmplot(x="total_bill", y="tip", data=tips, hue="smoker")
```
What am I doing wrong? `sns.regplot` didn't give me that error, but I wanted to use `hue` to make a much bigger dataset easier to understand. I tried many things to fix this and couldn't find any solution. xlsx and csv files both give me this error.
|
2018/02/06
|
[
"https://Stackoverflow.com/questions/48634071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9319446/"
] |
```
sns.lmplot(x=df["TestX"],y=df["TestY"], data=df)
```
From the seaborn docs:
> x, y : strings, optional
> Input variables; these should be column names in `data`.
You can try again with:
```
sns.lmplot(x="TestX",y="TestY", data=df)
```
|
```
import sys
import pandas as pd
import numpy as np
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style="darkgrid")
df = pd.read_csv('data.csv')
sns.lmplot(x="Duration", y="Maxpulse", data=df)
plt.show()
plt.savefig(sys.stdout.buffer)
sys.stdout.flush()
```
"Duration" and "Maxpluse" are column names.
Here is the output of my code:
[](https://i.stack.imgur.com/IaXxY.png)
| 10,075
|
57,624,731
|
I would like to assert that two dictionaries are equal, using Python's [`unittest`](https://docs.python.org/3/library/unittest.html), but ignoring the values of certain keys in the dictionary, in a convenient syntax, like this:
```py
from unittest import TestCase
class Example(TestCase):
def test_example(self):
result = foobar()
self.assertEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": ignore(), # how to do this?
"unique_id": ignore(), #
},
)
```
To be clear, I want to check that all four keys exist, and I want to check the values of `"name"` and `"year_of_birth"` (but not `"image_url"` or `"unique_id"`), and I want to check that no other keys exist.
I know I could modify `result` here to remove the key-value pairs for `"image_url"` and `"unique_id"`, but I would like something more convenient that doesn't modify the original dictionary.
(This is inspired by [`Test::Deep`](https://metacpan.org/pod/Test::Deep#ignore) for Perl 5.)
|
2019/08/23
|
[
"https://Stackoverflow.com/questions/57624731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247696/"
] |
There is [`unittest.mock.ANY`](https://docs.python.org/3/library/unittest.mock.html#any) which compares equal to everything.
```
from unittest import TestCase
import unittest.mock as mock
class Example(TestCase):
def test_happy_path(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"image_url": 42,
"unique_id": 'foo'
}
self.assertEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_missing_key(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"unique_id": 'foo'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_extra_key(self):
result = {
"name": "John Smith",
"year_of_birth": 1980,
"image_url": 42,
"unique_id": 'foo',
"maiden_name": 'Doe'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
def test_wrong_value(self):
result = {
"name": "John Smith",
"year_of_birth": 1918,
"image_url": 42,
"unique_id": 'foo'
}
self.assertNotEqual(
result,
{
"name": "John Smith",
"year_of_birth": 1980,
"image_url": mock.ANY,
"unique_id": mock.ANY
}
)
```
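If you prefer the `ignore()` spelling from the question, a minimal sketch of a DIY sentinel (essentially a hand-rolled `unittest.mock.ANY`):
```
class ignore:
    """Compares equal to any value, like unittest.mock.ANY."""
    def __eq__(self, other):
        return True
    def __ne__(self, other):
        return False
    def __repr__(self):
        return '<ignore>'
```
An entry such as `"image_url": ignore()` then matches any value while still requiring the key to be present.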
|
You can simply ignore the selected keys in the `result` dictionary.
```
self.assertEqual({k: v for k, v in result.items()
if k not in ('image_url', 'unique_id')},
{"name": "John Smith",
"year_of_birth": 1980})
```
| 10,076
|
62,155,242
|
I'm trying to compile a Python script with sklearn, pandas, numpy and igraph, but the PyInstaller executable doesn't run correctly because it can't find version.json in the temp folder.
```
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Usuario\\AppData\\Local\\Temp\\_MEI106882\\wcwidth\\version.json'
[17248] Failed to execute script pyScript
```
|
2020/06/02
|
[
"https://Stackoverflow.com/questions/62155242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11902728/"
] |
You'll need to include the `wcwidth` project directory in your data, since it isn't considered a package or a module; it's a data file.
In your spec file:
```
...
import wcwidth
a = Analysis(['main.py'],
pathex=[],
binaries=[],
datas=[
(os.path.dirname(wcwidth.__file__), 'wcwidth')
],
...
```
Note: Above I use `os.path.dirname(wcwidth.__file__)` to dynamically get the directory of `wcwidth`, but this can just be `.venv/lib/site-packages/wcwidth` or wherever it is installed; the dynamic form was just crucial for CI in my case.
Or with `--add-data`:
```
pyinstaller --add-data "/path/to/site_packages/wcwidth;wcwidth"
```
|
I hit this issue when creating a binary from a virtual environment. It seems installing IPython within the virtual environment causes this.
Recreating the virtual environment without IPython seems to overcome the issue.
| 10,077
|
58,308,911
|
1) I first map the key names to a dictionary called main_dict with an empty list (the actual problem has many keys, hence the reason I am doing this)
2) I then loop over a data matrix consisting of 3 columns
3) When I append a value to a column (the key) in the dictionary, the data is appended to the wrong keys.
Where am I going wrong here?
**Minimum working example:**
Edit: The data is being read from a file row by row and does not exist in lists as in this MWE.
Edit 2: I'd prefer a more pythonic solution over pandas.
```
import numpy as np
key_names = ["A", "B", "C"]
main_dict = {}
val = []
main_dict = main_dict.fromkeys(key_names, val)
data = np.array([[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]])
for i in data:
main_dict["A"].append(i[0])
main_dict["B"].append(i[1])
main_dict["C"].append(i[2])
print(main_dict["A"])
# Actual output: [2018.0, 1.1, 3.3, 2017.0, 2.1, 5.4, 2016.0, 3.1, 1.4]
# print(main_dict["A"])
# Expected output: [2018.0, 2017.0, 2016.0]
# print(main_dict["B"])
# Expected output: [1.1, 2.1, 3.1]
# print(main_dict["C"])
# Expected output: [3.3, 5.4, 1.4]
```
|
2019/10/09
|
[
"https://Stackoverflow.com/questions/58308911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6685541/"
] |
Without using numpy (which is a heavyweight package for what you are doing), I would do this:
```
keys = ["A", "B", "C"]
main_dict = {key: [] for key in keys}
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, key in enumerate(keys):
main_dict[key].append(datum[index])
print(main_dict) # {'A': [2018, 2017, 2016], 'B': [1.1, 2.1, 3.1], 'C': [3.3, 5.4, 1.4]}
```
Alternatively you can use the built-in `defaultdict` which is a tad faster since you don't need to do the dictionary comprehension:
```
from collections import defaultdict
main_dict = defaultdict(list)
keys = ["A", "B", "C"]
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, key in enumerate(keys):
main_dict[key].append(datum[index])
print(main_dict)
```
Lastly, if the keys really don't matter to you, you can be a little more dynamic by coming up with the keys dynamically starting from `A`. Thus allowing lines to have more than three attributes:
```
from collections import defaultdict
main_dict = defaultdict(list)
data = [[2018, 1.1, 3.3], [2017, 2.1, 5.4], [2016, 3.1, 1.4]]
# since you are reading from a file
for datum in data:
for index, attr in enumerate(datum):
main_dict[chr(ord('A') + index)].append(attr)
print(main_dict) # {'A': [2018, 2017, 2016], 'B': [1.1, 2.1, 3.1], 'C': [3.3, 5.4, 1.4]}
```
|
The problem is in
```
main_dict = main_dict.fromkeys(key_names, val)
```
The same list `val` is shared by all the keys: `dict.fromkeys` stores a reference to the single object you pass in, not a fresh copy per key.
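A quick sketch of the pitfall and the usual fix:
```
d = dict.fromkeys(['A', 'B'], [])
d['A'].append(1)
print(d)  # {'A': [1], 'B': [1]}  (both keys share the one list)

d = {k: [] for k in ['A', 'B']}  # the comprehension builds a fresh list per key
d['A'].append(1)
print(d)  # {'A': [1], 'B': []}
```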
| 10,078
|
42,585,598
|
When I try to deploy to my Elastic Beanstalk environment I am getting this python error. Everything was working fine a few days ago.
```
$ eb deploy
ERROR: AttributeError :: 'NoneType' object has no attribute 'split'
```
I've thus far attempted to update everything to no effect by issuing the following commands:
```
sudo pip install --upgrade setuptools
```
and
```
sudo pip install --upgrade awscli
```
Here are the resulting versions I'm working with:
```
$ eb --version
EB CLI 3.10.0 (Python 2.7.1)
$ aws --version
aws-cli/1.11.56 Python/2.7.13rc1 Darwin/16.4.0 botocore/1.5.19
```
Everything looks fine under eb status
```
$ eb status
Environment details for: ***
Application name: ***
Region: us-west-2
Deployed Version: ***
Environment ID: ***
Platform: 64bit Amazon Linux 2016.09 v3.3.1 running Node.js
Tier: WebServer-Standard
CNAME: ***.us-west-2.elasticbeanstalk.com
Updated: 2017-03-02 14:48:29.099000+00:00
Status: Ready
Health: Green
```
This issue appears to affect only this Elastic Beanstalk project. I am able to deploy to another project on the same AWS account.
|
2017/03/03
|
[
"https://Stackoverflow.com/questions/42585598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2767129/"
] |
I had the same issue. Turns out that when you enable CodeCommit, the CLI looks for a remote called "codecommit-origin", and if you don't have a git remote with that *specific* name it will throw that error.
Posting this for anyone else who stumbles upon the same issue.
|
I fixed this with `eb codesource local && eb deploy`, so it forgets about CodeCommit.
| 10,079
|
36,051,751
|
I am trying to sort 4 integers input by the user into numerical order using only the min() and max() functions in Python. I can get the highest and lowest numbers easily, but cannot work out a combination to order the two middle numbers. Does anyone have an idea?
|
2016/03/17
|
[
"https://Stackoverflow.com/questions/36051751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6074977/"
] |
So I'm guessing your input is something like this?
```
string = input('Type your numbers, separated by a space')
```
Then I'd do:
```
numbers = [int(i) for i in string.strip().split(' ')]
amount_of_numbers = len(numbers)
sorted = []
for i in range(amount_of_numbers):
x = max(numbers)
numbers.remove(x)
sorted.append(x)
print(sorted)
```
This sorts them in descending order using max; using min instead gives ascending order.
**If you *didn't have to* use min and max**:
```
string = input('Type your numbers, separated by a space')
numbers = [int(i) for i in string.strip().split(' ')]
numbers.sort() #an optional reverse argument possible
print(numbers)
```
|
LITERALLY just min and max? Odd, but why not. I'm about to crash, but I think the following would work:
```
# Assume a, b, c, d hold the four inputs; results go in arr, largest first.
arr = [None] * 4

# Easy
arr[0] = max(a, b, c, d)

# Take the smallest element from each pair.
#
# You will never take the largest element from the set, but since one of the
# pairs will be (largest, second_largest) you will at some point take the
# second largest. Take the maximum value of the selected items - which
# will be the maximum of the items ignoring the largest value.
arr[1] = max(min(a, b),
             min(a, c),
             min(a, d),
             min(b, c),
             min(b, d),
             min(c, d))

# Similar logic, but reversed, to take the smallest of the largest of each
# pair - again omitting the smallest number, then taking the smallest.
arr[2] = min(max(a, b),
             max(a, c),
             max(a, d),
             max(b, c),
             max(b, d),
             max(c, d))

# Easy
arr[3] = min(a, b, c, d)
```
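A quick sanity check of the pairwise idea (the input values here are arbitrary):
```
a, b, c, d = 3, 1, 4, 2
arr = [
    max(a, b, c, d),
    max(min(a, b), min(a, c), min(a, d), min(b, c), min(b, d), min(c, d)),
    min(max(a, b), max(a, c), max(a, d), max(b, c), max(b, d), max(c, d)),
    min(a, b, c, d),
]
print(arr)  # [4, 3, 2, 1]
```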
| 10,082
|
29,989,956
|
I want to share a variable value between a function defined within a Python class and an externally defined function. So in the code below, when the internalCompute() function is called, self.data is updated. How can I access this updated data value inside a function that is defined outside the class, i.e. inside the report function?
**Note:
I would like to avoid using global variables as much as possible.**
```
class Compute(object):
def __init__(self):
self.data = 0
def internalCompute(self):
self.data = 5
def externalCompute():
q = Compute()
q.internalCompute()
def report():
# access the updated variable self.data from Compute class
print "You entered report"
externalCompute()
report()
```
|
2015/05/01
|
[
"https://Stackoverflow.com/questions/29989956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1833722/"
] |
>
> determine the physical size of an iPhone (in inches)
>
>
>
This will never be possible, because the physical screen is made up of *pixels* (square LEDs), and there is no way to ask the size of one of those.
Thus, for example, an app that presents a ruler (inches / centimetres) would need to know *in some other way* how the pixel count relates to the physical dimensions of the screen - for example, by looking it up in a built-in table. As you rightly say, if a new device were introduced, the app would have no entry in that table and would be powerless to present a correctly-scaled ruler.
|
I'm using this code in my .pch file:
```
#define IS_IPAD ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPad)
#define IS_IPHONE6PLUS (!IS_IPAD && [[UIScreen mainScreen] bounds].size.height >= 736)
#define IS_IPHONE6 (!IS_IPAD && !IS_IPHONE6PLUS && [[UIScreen mainScreen] bounds].size.height >= 667)
#define IS_IPHONE5 (!IS_IPAD && !IS_IPHONE6PLUS && !IS_IPHONE6 && ([[UIScreen mainScreen] bounds].size.height >= 568))
```
then in your code you can ask
```
if (IS_IPHONE6PLUS) {
// do something
}
```
output from `NSStringFromCGRect([[UIScreen mainScreen] bounds])` is `{{0, 0}, {414, 736}}`
| 10,087
|
49,901,249
|
I have searched everywhere for how to fix this and could not find anything, so I'm sorry if there is already a thread on this issue. Also, I'm fairly new to Linux, GDB, and StackOverflow; this is my first post.
First, I am running on Debian GNU/Linux 9 (stretch) with the Windows subsystem for Linux and when I start gdb I get this:
```
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
...
This GDB was configured as "x86_64-linux-gnu".
...
```
Also when I show the configuration I get this:
```
configure --host=x86_64-linux-gnu --target=x86_64-linux-gnu
--with-auto-load-dir=$debugdir:$datadir/auto-load
--with-auto-load-safe-path=$debugdir:$datadir/auto-load
--with-expat
--with-gdb-datadir=/usr/share/gdb (relocatable)
--with-jit-reader-dir=/usr/lib/gdb (relocatable)
--without-libunwind-ia64
--with-lzma
--with-python=/usr (relocatable)
--without-guile
--with-separate-debug-dir=/usr/lib/debug (relocatable)
--with-system-gdbinit=/etc/gdb/gdbinit
--with-babeltrace
```
I've created some sample C code to show the issue I have been encountering. I just want to clarify that I know I'm using heap allocations and could just as easily use a char buffer for this example; the issue is not about that.
I'll call this sample program test.c
```
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
char *str = malloc(8);
char *str2 = malloc(8);
strcpy(str, argv[1]);
strcpy(str2, argv[2]);
printf("This is the end.\n");
printf("1st: %s\n2nd: %s\n", str, str2);
}
```
I then compiled it with
```
gcc -g -o test test.c
```
And with a quick run, I know that everything works the way it should.
```
$ ./test AAAAAAAA BBBBBBBB
This is the end.
1st: AAAAAAAA
2nd: BBBBBBBB
```
When I start the program with no input args, I get a segmentation fault as expected. But when I then try to use gdb to show what exactly happens, I get the following.
```
$ gdb test
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
__strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
296 ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: No such file or directory.
```
What I expected was information about what failed, such as the destination and source addresses that were passed as parameters to strcpy, but I just get empty parentheses.
Again, I'm relatively new to gdb, so I don't know whether this is normal or not.
Any help would be great.
**EDIT 1**
>
> Can you paste the output of `(gdb) bt` into your question? Your program is stopped in a library function that hasn't had its optional debug information installed on your system, but some other parts of the stack trace may be useful. – Mark Plotnick
>
>
>
```
(gdb) run
(gdb) bt
#0 __strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
#1 0x00000000080007d5 in main (argc=1, argv=0x7ffffffde308) at test.c:10
(gdb) run AAAAAAAA
(gdb) bt
#0 __strcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S:296
#1 0x00000000080007ef in main (argc=2, argv=0x7ffffffde2f8) at test.c:11
```
|
2018/04/18
|
[
"https://Stackoverflow.com/questions/49901249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9664285/"
] |
When you build the project it generates a .war file, as you know. You can extract it and find all the dependent jar files at {.war}\WEB-INF\lib\
|
My suggestion is: go to C:\Users\<>\.m2\repository (your Maven directory) and search for `*.jar`; it will list all the jars in the window.
Copy all the jars and paste them into your directory.
| 10,088
|
34,295,670
|
I have a problem with a little program in Python:
What I want is to write the numbers from 0 to 10 to a file called text.txt, but the program gives me an error every time and doesn't print anything.
```
i=0
while(i<11):
outfile = open('text.txt', 'a')
outfile.write('\n'+i)
outfile.close()
i=i+1
```
I tried puting
```
outfile.write('\n'+i)
outfile.write('\n',i)
outfile.write('\n'),i
outfile.write(i)
```
but not a single one of them works. Can you tell me what I am doing wrong, please?
|
2015/12/15
|
[
"https://Stackoverflow.com/questions/34295670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5631254/"
] |
I'm guessing that the error you're getting involves concatenating a string with an integer.
Try:
```
outfile.write(str(i) + '\n')
```
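A slightly fuller sketch that also opens the file once instead of reopening it on every loop iteration:
```
# Write 0..10, one number per line, opening the file a single time.
with open('text.txt', 'a') as outfile:
    for i in range(11):
        outfile.write(str(i) + '\n')
```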
|
Assuming you want a file that looks like this:
```
1
2
3
4
5
6
7
8
9
```
Did you try it this way?
```
>>> fo = open('outfile.txt', 'w')
>>> for i in range(1,10):
... fo.write(str(i)+"\n")
>>> fo.close()
```
The error you are probably getting is this:
>
> TypeError: unsupported operand type(s) for +: 'int' and 'str'
>
>
>
This is because you are trying to concatenate a string and an integer. Convert your integer (`i`) into a string by wrapping it with `str()`.
| 10,089
|
45,860,272
|
My goal is to run a Python script that uses Anaconda libraries (such as pandas) in an `Azure WebJob`, but I can't seem to figure out how to load the libraries.
I started out just by testing a simple Azure blob-to-blob file copy, which works when run locally but hits the error `"ImportError: No module named 'azure'"` when run in a WebJob.
example code:
```py
from azure.storage.blob import BlockBlobService
blobAccountName = <name>
blobStorageKey = <key>
containerName = <containername>
blobService = BlockBlobService(account_name=blobAccountName,
account_key=blobStorageKey)
blobService.set_container_acl(containerName)
b = blobService.get_blob_to_bytes(containerName, 'file.csv')
blobService.create_blob_from_bytes(containerName, 'file.csv', b.content)
```
I can't even get the Azure SDK libraries to run, let alone Anaconda's.
How do I run a Python script that requires external libraries such as Anaconda's (and even the Azure SDK)? How do I "pip install" these for a WebJob?
|
2017/08/24
|
[
"https://Stackoverflow.com/questions/45860272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6329714/"
] |
It seems you already know how to deploy Azure WebJobs, so the steps below show how to load external libraries for your Python script.
Step 1:
Use the **virtualenv** tool to create an independent Python runtime environment on your system. Please install it first with `pip install virtualenv` if you don't have it.
If it installed successfully, you can see it in your **python/Scripts** folder.
[](https://i.stack.imgur.com/fOtfV.png)
Step 2: Run the command to create the independent Python runtime environment.
[](https://i.stack.imgur.com/AqAR6.png)
Step 3: Then go into the created directory's **Scripts** folder and activate it (**this step is important, don't miss it**).
[](https://i.stack.imgur.com/lVtrx.png)
Please **don't close this command window**; use `pip install <your library name>` in this same window to download the external libraries.
[](https://i.stack.imgur.com/IHAqU.png)
Step 4: Zip your Sample.py together with the packages you rely on from the environment's **Lib/site-packages** folder.
[](https://i.stack.imgur.com/ui8Ej.png)
Step 5:
Create a WebJob in the Web App service and upload the zip file; then you can execute your WebJob and check the log.
[](https://i.stack.imgur.com/eS13d.png)
You could also refer to the SO thread :[Options for running Python scripts in Azure](https://stackoverflow.com/questions/45716757/options-for-running-python-scripts-in-azure/45725327#45725327)
In addition, if you want to use modules from Anaconda, please download them individually; there is no need to bundle the entire distribution.
Hope it helps you.
|
You can point your Azure WebJob to your main WebApp environment (and thus its real site-packages). This allows you to use the newest, fastest version of Python supported by the WebApp (right now mine is 3.6.4 x64), much better than 3.4 or 2.7 in x86. Another huge benefit is that you don't have to maintain an additional set of packages held statically in a file somewhere (this gave me a lot of trouble with dynamic libraries with crazy dependencies, such as psycopg2 and pandas).
HOW: In your WebJob's files, set up a .cmd file that runs your run.py, and in that .cmd file have just this one line: `D:\home\python364x64\python.exe run.py`
That's it!
Azure WebJobs looks at .cmd files first, then run.py and others.
See this link for an official MS post on this method:
<https://blogs.msdn.microsoft.com/azureossds/2016/12/09/running-python-webjob-on-azure-app-services-using-non-default-python-version/>
| 10,094
|
59,103,401
|
I am currently implementing a MINLP optimization problem in Python GEKKO for determining the optimal operational strategy of a trigeneration energy system. As I consider the energy demand during all periods of different representative days as input data, basically all my decision variables, intermediates, etc. are 2D arrays.
I suspect that the declaration of the 2D intermediates is my problem. Right now I use list comprehensions to declare 2D intermediates, but it seems like Python cannot use these intermediates. Furthermore, the error *This steady-state IMODE only allows scalar values.* occurs.
Whenever I use the GEKKO m.Array function like this:
`e_GT = m.Array(m.Intermediate(E_GT[z][p]/E_max_GT) for z in range(Z) for p in range(P), (Z,P))`
it says that the GEKKO object m.Intermediate cannot be called.
I would be very thankful if anyone could give me a hint.
Here is the complete code:
```
"""
Created on Fri Nov 22 10:18:33 2019
@author: julia
"""
# __Get GEKKO & numpy___
from gekko import GEKKO
import numpy as np
# ___Initialize model___
m = GEKKO()
# ___Global options_____
m.options.SOLVER = 1 # APOPT is MINLP Solver
# ______Constants_______
i = m.Const(value=0.05)
n = m.Const(value=10)
C_GT = m.Const(value=100000)
C_RB = m.Const(value=10000)
C_HB = m.Const(value=10000)
C_RS = m.Const(value=10000)
C_RE = m.Const(value=10000)
Z = 12
P = 24
E_min_GT = m.Const(value=1000)
E_max_GT = m.Const(value=50000)
F_max_GT = m.Const(value=100000)
Q_max_GT = m.Const(value=100000)
a = m.Const(value=1)
b = m.Const(value=1)
c = m.Const(value=1)
d = m.Const(value=1)
eta_RB = m.Const(value=0.01)
eta_HB = m.Const(value=0.01)
eta_RS = m.Const(value=0.01)
eta_RE = m.Const(value=0.01)
alpha = m.Const(value=0.01)
# ______Parameters______
T_z = m.Param([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
C_p_Gas = m.Param(np.ones([P]))
C_p_Elec = m.Param(np.ones([P]))
E_d = np.ones([Z,P])
H_d = np.ones([Z,P])
K_d = np.ones([Z,P])
# _______Variables______
E_purch = m.Array(m.Var, (Z,P), lb=0)
E_GT = m.Array(m.Var, (Z,P), lb=0)
F_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT_RB = m.Array(m.Var, (Z,P), lb=0)
Q_disp = m.Array(m.Var, (Z,P), lb=0)
Q_HB = m.Array(m.Var, (Z,P), lb=0)
K_RS = m.Array(m.Var, (Z,P), lb=0)
K_RE = m.Array(m.Var, (Z,P), lb=0)
delta_GT = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_HB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RS = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RE = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
# ____Intermediates_____
R = m.Intermediate((i*(1+i)**n)/((1+i)**n-1))
e_min_GT = m.Intermediate(E_min_GT/E_max_GT)
e_GT = [m.Intermediate(E_GT[z][p]/E_max_GT) for z in range(Z) for p in range(P)]
f_GT = [m.Intermediate(F_GT[z][p]/F_max_GT) for z in range(Z) for p in range(P)]
q_GT = [m.Intermediate(Q_GT[z][p]/Q_max_GT) for z in range(Z) for p in range(P)]
Q_RB = [m.Intermediate(eta_RB*Q_GT_RB[z][p]*delta_RB[z][p]) for z in range(Z) for p in range(P)]
F_HB = [m.Intermediate(eta_HB*Q_HB[z][p]*delta_HB[z][p]) for z in range(Z) for p in range(P)]
Q_RS = [m.Intermediate(eta_RS*K_RS[z][p]*delta_RS[z][p]) for z in range(Z) for p in range(P)]
E_RE = [m.Intermediate(eta_RE*K_RE[z][p]*delta_RE[z][p]) for z in range(Z) for p in range(P)]
F_Gas = [m.Intermediate(F_GT[z][p] + eta_HB*Q_HB[z][p]*delta_HB[z][p]) for z in range(Z) for p in range(P)]
Cc = m.Intermediate(R*(C_GT + C_RB + C_HB + C_RS + C_RE))
Cr_z = m.Intermediate((sum(C_p_Gas[p]*F_Gas[z][p] + C_p_Elec[p]*E_purch[z][p]) for p in range(P)) for z in range(Z))
Cr = m.Intermediate(sum(Cr_z[z]*T_z[z]) for z in range(Z))
# ______Equations_______
m.Equation(e_min_GT[z][p]*delta_GT[z][p] <= e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(e_GT[z][p] <= 1*delta_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(f_GT [z][p]== a*delta_GT[z][p] + b*e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(q_GT [z][p]== c*delta_GT[z][p] + d*e_GT[z][p] for z in range(Z) for p in range(P))
m.Equation(E_purch[z][p] + E_GT[z][p] == E_RE[z][p] + E_d[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_GT[z][p] == Q_disp[z][p] + Q_GT_RB[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_RB[z][p] + Q_HB[z][p] == Q_RS[z][p] + H_d[z][p] for z in range(Z) for p in range(P))
m.Equation(K_RS[z][p] + K_RE[z][p] == K_d[z][p] for z in range(Z) for p in range(P))
m.Equation(Q_disp[z][p] <= alpha*Q_GT[z][p] for z in range(Z) for p in range(P))
# ______Objective_______
m.Obj(Cc + Cr)
#_____Solve Problem_____
m.solve()
```
|
2019/11/29
|
[
"https://Stackoverflow.com/questions/59103401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12456060/"
] |
Extra square brackets are needed for a 2D list definition. This gives a 2D list with 4 rows and 3 columns.
```py
[[p+10*z for p in range(3)] for z in range(4)]
# Result: [[0, 1, 2], [10, 11, 12], [20, 21, 22], [30, 31, 32]]
```
If you leave out the inner brackets, it is a 1D list of length 12.
```py
[p+10*z for p in range(3) for z in range(4)]
# Result: [0, 10, 20, 30, 1, 11, 21, 31, 2, 12, 22, 32]
```
It also works when each element of the list is a Gekko `Intermediate`.
```py
[[m.Intermediate(p+10*z) for p in range(3)] for z in range(4)]
```
|
To diagnose the problem, I added a call to open the run folder before the solve command.
```py
#_____Solve Problem_____
m.open_folder()
m.solve()
```
I opened the `gk_model0.apm` model file with a text editor to look at a text version of the model. At the bottom it shows that there are problems with the last two intermediates and 9 equations.
```
i2327=<generator object <genexpr> at 0x0E51BC30>
i2328=<generator object <genexpr> at 0x0E51BC30>
End Intermediates
Equations
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
<generator object <genexpr> at 0x0E51BC30>
minimize (i2326+i2328)
End Equations
End Model
```
This happens when the `.value` property of a `Param` or `Var` is a list or numpy array instead of a scalar value.
```py
# ______Parameters______
#T_z = m.Param([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
T_z = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```
There were also some other problems such as
* `e_min_GT` referenced as an element of a list instead of a scalar value in the first equation
* missing square brackets in the Intermediate list comprehensions to make the list 2D such as `[[2 for p in range(P)] for z in range(Z)]`
* dimension mismatch from `for z in range(Z)] for p in range(P)]` instead of `for p in range(P)] for z in range(Z)]`
* for better efficiency, use `m.sum` instead of `sum`
I made a few other changes as well. A differencing program should show them.
```py
"""
Created on Fri Nov 22 10:18:33 2019
@author: julia
"""
# __Get GEKKO & numpy___
from gekko import GEKKO
import numpy as np
# ___Initialize model___
m = GEKKO()
# ___Global options_____
m.options.SOLVER = 1 # APOPT is MINLP Solver
# ______Constants_______
i = m.Const(value=0.05)
n = m.Const(value=10)
C_GT = m.Const(value=100000)
C_RB = m.Const(value=10000)
C_HB = m.Const(value=10000)
C_RS = m.Const(value=10000)
C_RE = m.Const(value=10000)
Z = 12
P = 24
E_min_GT = m.Const(value=1000)
E_max_GT = m.Const(value=50000)
F_max_GT = m.Const(value=100000)
Q_max_GT = m.Const(value=100000)
a = m.Const(value=1)
b = m.Const(value=1)
c = m.Const(value=1)
d = m.Const(value=1)
eta_RB = m.Const(value=0.01)
eta_HB = m.Const(value=0.01)
eta_RS = m.Const(value=0.01)
eta_RE = m.Const(value=0.01)
alpha = m.Const(value=0.01)
# ______Parameters______
T_z = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
C_p_Gas = np.ones(P)
C_p_Elec = np.ones(P)
E_d = np.ones([Z,P])
H_d = np.ones([Z,P])
K_d = np.ones([Z,P])
# _______Variables______
E_purch = m.Array(m.Var, (Z,P), lb=0)
E_GT = m.Array(m.Var, (Z,P), lb=0)
F_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT = m.Array(m.Var, (Z,P), lb=0)
Q_GT_RB = m.Array(m.Var, (Z,P), lb=0)
Q_disp = m.Array(m.Var, (Z,P), lb=0)
Q_HB = m.Array(m.Var, (Z,P), lb=0)
K_RS = m.Array(m.Var, (Z,P), lb=0)
K_RE = m.Array(m.Var, (Z,P), lb=0)
delta_GT = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_HB = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RS = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
delta_RE = m.Array(m.Var, (Z,P), lb=0, ub=1, integer=True)
# ____Intermediates_____
R = m.Intermediate((i*(1+i)**n)/((1+i)**n-1))
e_min_GT = m.Intermediate(E_min_GT/E_max_GT)
e_GT = [[m.Intermediate(E_GT[z][p]/E_max_GT) for p in range(P)] for z in range(Z)]
f_GT = [[m.Intermediate(F_GT[z][p]/F_max_GT) for p in range(P)] for z in range(Z)]
q_GT = [[m.Intermediate(Q_GT[z][p]/Q_max_GT) for p in range(P)] for z in range(Z)]
Q_RB = [[m.Intermediate(eta_RB*Q_GT_RB[z][p]*delta_RB[z][p]) for p in range(P)] for z in range(Z)]
F_HB = [[m.Intermediate(eta_HB*Q_HB[z][p]*delta_HB[z][p]) for p in range(P)] for z in range(Z)]
Q_RS = [[m.Intermediate(eta_RS*K_RS[z][p]*delta_RS[z][p]) for p in range(P)] for z in range(Z)]
E_RE = [[m.Intermediate(eta_RE*K_RE[z][p]*delta_RE[z][p]) for p in range(P)] for z in range(Z)]
F_Gas = [[m.Intermediate(F_GT[z][p] + eta_HB*Q_HB[z][p]*delta_HB[z][p]) for p in range(P)] for z in range(Z)]
Cc = m.Intermediate(R*(C_GT + C_RB + C_HB + C_RS + C_RE))
Cr_z = [m.Intermediate(m.sum([C_p_Gas[p]*F_Gas[z][p] + C_p_Elec[p]*E_purch[z][p] for p in range(P)])) for z in range(Z)]
Cr = m.Intermediate(m.sum([Cr_z[z]*T_z[z] for z in range(Z)]))
# ______Equations_______
m.Equation([e_min_GT*delta_GT[z][p] <= e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([e_GT[z][p] <= 1*delta_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([f_GT [z][p]== a*delta_GT[z][p] + b*e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([q_GT [z][p]== c*delta_GT[z][p] + d*e_GT[z][p] for z in range(Z) for p in range(P)])
m.Equation([E_purch[z][p] + E_GT[z][p] == E_RE[z][p] + E_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_GT[z][p] == Q_disp[z][p] + Q_GT_RB[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_RB[z][p] + Q_HB[z][p] == Q_RS[z][p] + H_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([K_RS[z][p] + K_RE[z][p] == K_d[z][p] for z in range(Z) for p in range(P)])
m.Equation([Q_disp[z][p] <= alpha*Q_GT[z][p] for z in range(Z) for p in range(P)])
# ______Objective_______
m.Obj(Cc + Cr)
#_____Solve Problem_____
#m.open_folder()
m.solve()
```
The problem solves with `1.6 sec` of solver time.
```
--------- APM Model Size ------------
Each time step contains
Objects : 13
Constants : 20
Variables : 5209
Intermediates: 2320
Connections : 313
Equations : 5213
Residuals : 2893
Number of state variables: 5209
Number of total equations: - 2905
Number of slack variables: - 864
---------------------------------------
Degrees of freedom : 1440
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 1.53 NLPi: 3 Dpth: 0 Lvs: 0 Obj: 2.69E+04 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 1.59730000000854 sec
Objective : 26890.6404951639
Successful solution
---------------------------------------------------
```
| 10,095
|
42,329,346
|
I am trying to learn CloudFormation and I'm stuck with a scenario where I need a second EC2 instance started only after the first one is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"Install Easy Install",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"GIT LFS Repo",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance.
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
The first thing I notice is that you have two `GET`s, instead of a `GET` and then a `POST` (as you noticed yourself).
But let's assume the HTML form is correct, since it works in development. That makes me suspect there is some Javascript that is throwing things off. One common problem when moving things from development to production is the Asset Pipeline. So I would open your browser's console, refresh the page, and see if you get any Javascript errors, especially about files failing to load.
You should also share how you are including Javascript files. Can you share that part of your layout (or whatever)? I would make sure that you use `asset_path` and friends, and not just `/assets/application.js`, because the latter will break when you go to production. Even better, use `javascript_include_tag`.
|
I am no expert, but since your code works locally, try the following to get an idea of where the problem might be:
a) Run the production environment locally and see if the problem persists there.
b) Try a test run without any JavaScript enabled, or at least disable custom scripts on the production server.
c) Try redirecting after Devise authentication to an existing/static page.
I am sure you will get some hints or ideas.
| 10,096
|
16,256,341
|
I am trying to go back to the top of a function (not restart it, but go to the top) but cannot figure out how to do this. Instead of giving you the long code, I'm just going to make up an example of what I want:
```
used = [0, 0, 0]
def fun():
    score = input("please enter a place to put it: ")
    if score == "this one":
        score[0] = total
    if score == "here":
        if used[1] == 0:
            score[1] = total
            used[1] = 1
        elif used[1] == 1:
            print("Already used")
            #### Go back to score so it can let you choose somewhere else.
list = ["this one", "here"]
```
I need to be able to go back so that, essentially, it forgets you tried to use "here" again, without wiping the memory. Although I know they are awful, I basically need a goto, but those don't exist in Python. Any ideas?
*Edit: Ah sorry, I forgot to mention that when a spot is already in use, I need to be able to pick somewhere else for the value to go (I just didn't want to bog down the code). I added score == "this one", so if I tried to put it in "here" and "here" was already taken, it would give me the option of redoing score = input("") and then I could take that value and plug it into "this one" instead of "here". Your loop statement will get back to the top, but doesn't let me take the value I just found and put it somewhere else. I hope this is making sense :p
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16256341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2255589/"
] |
What you are looking for is a `while` loop. You want to set up your loop to keep going until a place is found. Something like this:
```
def fun():
    found_place = False
    while not found_place:
        score = input("please enter a place to put it: ")
        if score == "here":
            if used[1] == 0:
                score[1] = total
                used[1] = 1
                found_place = True
            elif used[1] == 1:
                print("Already used")
```
That way, once you've found a place, you set `found_place` to `True` which stops the loop. If you haven't found a place, `found_place` remains `False` and you go through the loop again.
|
As Ashwini correctly points out, you should do a `while` loop
```
def fun():
    end_condition = False
    while not end_condition:
        score = input("please enter a place to put it: ")
        if score == "here":
            if used[1] == 0:
                score[1] = total
                used[1] = 1
                end_condition = True  # stop once a free spot was used
            elif used[1] == 1:
                print("Already used")
```
| 10,102
|
44,628,435
|
I am making a request to my api.ai chatbot after following the instructions given on their official GitHub page [here](https://github.com/api-ai/apiai-python-client/blob/master/examples/send_text_example.py). The following is the code for which I am getting an error; the solution is supposedly to call the function with proxy settings, but I do not know a way to do so.
```
ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
request = ai.text_request()
request.set_proxy('proxy1.company.com:8080','http')
question = input()
request.query = question
response = request.getresponse()
```
I get the following error on the last line.
```
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```
Please suggest how I use the proxy settings.
I am using Anaconda on Windows to run the script.
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7872066/"
] |
>
> I've tried various casting attempts
>
>
>
Have you tried this one?
```
.FirstOrDefault(ids => ids.Contains((T)propertyInfo.GetValue(item, null)))
```
Since `ids` is of type `IGrouping<TKey, TElement>` where `TElement` is of type `T` in your case, casting the property value to `T` will allow the comparison.
|
Ok, so I cracked it in the end. I needed to add more detail in my generic method header/signature.
```
public static IEnumerable<T> MixObjectsByProperty<T, U>(
IEnumerable<T> objects, string propertyName, IEnumerable<IEnumerable<U>> groupsToMergeByProperty = null)
where T : class
where U : class
{
...
```
Note, I added class constraints which allows me to use nullable operators, etc.
And the body became (after adding a propertyValue variable which is casted to the correct type!):
```
groups =
(from item in objects
let propertyInfo = item.GetType().GetProperty(propertyName)
where propertyInfo != null
let propertyValue = (U)propertyInfo.GetValue(item, null)
group item by
groupsToMergeByProperty
.FirstOrDefault(ids => ids.Contains(propertyValue))
?.First()
?? propertyValue
into itemGroup
select itemGroup.ToArray())
.ToList();
```
| 10,103
|
53,311,721
|
This question is killing me softly at the moment.
I am trying to learn Python, Lambda, and DynamoDB.
Python looks awesome. I am able to connect to MySQL using a normal MySQL server like XAMPP; the goal is to learn to work with DynamoDB, but somehow I am unable to `get_item` from DynamoDB. This has really been kicking my head in and has already taken the last two days.
I have watched tons of YouTube videos and read the AWS documentation.
Any clues what I am doing wrong?
My code till now;
```
import json
import boto3
from boto3.dynamodb.conditions import Key, Attr
#always start with the lambda_handler
def lambda_handler(event, context):
# make the connection to dynamodb
dynamodb = boto3.resource('dynamodb')
# select the table
table = dynamodb.Table("html_contents")
# get item from database
items = table.get_item(Key={"id": '1'})
```
Everywhere I look I see that I should do it like this.
But I keep getting the following error
```
{errorMessage=An error occurred (ValidationException) when calling the GetItem operation: The provided key element does not match the schema, errorType=ClientError, stackTrace=[["\/var\/task\/lambda_function.py",16,"lambda_handler","\"id\": '1'"],["\/var\/runtime\/boto3\/resources\/factory.py",520,"do_action","response = action(self, *args, **kwargs)"],["\/var\/runtime\/boto3\/resources\/action.py",83,"__call__","response = getattr(parent.meta.client, operation_name)(**params)"],["\/var\/runtime\/botocore\/client.py",314,"_api_call","return self._make_api_call(operation_name, kwargs)"],["\/var\/runtime\/botocore\/client.py",612,"_make_api_call","raise error_class(parsed_response, operation_name)"]]}
```
My database structure.
[](https://i.stack.imgur.com/DU7c9.png)
My DynamoDb settings
Table name html\_contents
Primary partition key id (Number)
Primary sort key -
Point-in-time recovery DISABLEDEnable
Encryption DISABLED
Time to live attribute DISABLEDManage TTL
Table status Active
What am I doing wrong here? I am starting to think I did something wrong with the AWS configuration.
Thank you in advance.
Wesley
|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53311721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9821202/"
] |
Thanks to @ippi.
It was the quotes that I was using.
```
table.get_item(Key={"id": '1'})
```
needed to be
```
table.get_item(Key={"id": 1})
```
As I am using a numeric key and not a string.
Hope this helps for the next person(s) with the same problem.
|
You're facing this problem because you created the table with a partition key whose data type is a number.
You are then performing a get-item operation that specifies the partition key as a string, when it needs to be an integer; that mismatch causes this issue.
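A minimal boto3 sketch of the corrected call, using the table from the question:
```
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('html_contents')

# the partition key 'id' is a Number, so pass a Python int, not a string
response = table.get_item(Key={'id': 1})
item = response.get('Item')
```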
I'm the author of Lucid-Dynamodb, a minimalist wrapper to AWS DynamoDB. It covers all the Dynamodb operations.
**Reference:** <https://github.com/dineshsonachalam/Lucid-Dynamodb#4-read-an-item>
| 10,106
|
61,973,288
|
Currently I'm learning how to use Apache Airflow and am trying to create a simple DAG script like this
```
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
def print_hello():
return 'Hello world!'
dag = DAG('hello_world', description='Simple tutorial DAG',
schedule_interval='0 0 * * *',
start_date=datetime(2020, 5, 23), catchup=False)
dummy_operator = DummyOperator(task_id='dummy_task', retries=3, dag=dag)
hello_operator = PythonOperator(task_id='hello_task', python_callable=print_hello, dag=dag)
dummy_operator >> hello_operator
```
I ran this DAG using the web server and it ran successfully; I even checked the logs
```
[2020-05-23 20:43:53,411] {taskinstance.py:669} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [queued]>
[2020-05-23 20:43:53,431] {taskinstance.py:669} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [queued]>
[2020-05-23 20:43:53,432] {taskinstance.py:879} INFO -
--------------------------------------------------------------------------------
[2020-05-23 20:43:53,432] {taskinstance.py:880} INFO - Starting attempt 1 of 1
[2020-05-23 20:43:53,432] {taskinstance.py:881} INFO -
--------------------------------------------------------------------------------
[2020-05-23 20:43:53,448] {taskinstance.py:900} INFO - Executing <Task(PythonOperator): hello_task> on 2020-05-23T13:42:17.463955+00:00
[2020-05-23 20:43:53,477] {standard_task_runner.py:53} INFO - Started process 7442 to run task
[2020-05-23 20:43:53,685] {logging_mixin.py:112} INFO - Running %s on host %s <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [running]> LAPTOP-9BCTKM5O.localdomain
[2020-05-23 20:43:53,715] {python_operator.py:114} INFO - Done. Returned value was: Hello world!
[2020-05-23 20:43:53,738] {taskinstance.py:1052} INFO - Marking task as SUCCESS.dag_id=hello_world, task_id=hello_task, execution_date=20200523T134217, start_date=20200523T134353, end_date=20200523T134353
[2020-05-23 20:44:03,372] {logging_mixin.py:112} INFO - [2020-05-23 20:44:03,372] {local_task_job.py:103} INFO - Task exited with return code 0
```
but when I tried to test-run a single task using this command
```
airflow test dags/main.py hello_task 2020-05-23
```
it shows this error
```
airflow.exceptions.AirflowException: dag_id could not be found: dags/main.py. Either the dag did not exist or it failed to parse.
```
Where did I go wrong?
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61973288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Put a ROW ID on your tables
```
library(tidyverse)  # provides read_table, rowid_to_column, bind_rows, etc.

df_1 <- read_table("A B C
2.3 5 3
12 3 1
0.4 13 2") %>%
rowid_to_column("ROW")
df_2 <- read_table("A B C
4.3 23 1
1 7 2
0.4 10 2") %>%
rowid_to_column("ROW")
df_3 <- read_table("A B C
1.3 3 3
2.2 4 2
12.4 10 1") %>%
rowid_to_column("ROW")
```
Bind them together in an ensemble
```
ensamb <- bind_rows(df_1, df_2, df_3)
```
`group_by` row and then summarize each one by its own method
```
ensamb %>%
group_by(ROW) %>%
summarise(A = mean(A), B = median(B),
C = C[which.max(C)])
# A tibble: 3 x 4
ROW A B C
<int> <dbl> <dbl> <dbl>
1 1 2.63 5 3
2 2 5.07 4 2
3 3 4.4 10 2
```
|
You can put all the dataframes in a list :
```
list_df <- mget(ls(pattern = 'df_\\d+'))
```
Then calculate the stats for each column separately.
```
data.frame(A = Reduce(`+`, lapply(list_df, `[[`, 1))/length(list_df),
B = apply(do.call(rbind, lapply(list_df, `[[`, 2)), 2, median),
C = apply(do.call(rbind, lapply(list_df, `[[`, 3)), 2, Mode),
row.names = NULL)
# A B C
#1 2.633333 5 3
#2 5.066667 4 2
#3 4.400000 10 2
```
where `Mode` function is taken from [here](https://stackoverflow.com/a/8189441/3962914) :
```
Mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
```
| 10,107
|
28,741,772
|
I am a novice writing a simple script to analyse a game. The data I would like to use describes "Items" and they have statistics associated with them (eg. "Attack Speed").
To clarify: The game is not something I have access to beyond being a player, my script is to compare combinations of the items. I will manually look up the information on each item, for example:
```
Name: Bloodforge
Price: 2715
Lifesteal: 0.15
Power: 40
```
These items will change as they are updated in the actual game, so I am looking for a way to store/update them manually (editing text) and easily access the statistics for these items using python.
I have looked into using XML and JSON, as well as MySQL. Are there any other suggestions that might fit this usage? Which libraries should I use?
|
2015/02/26
|
[
"https://Stackoverflow.com/questions/28741772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4610057/"
] |
Without further info, I would say to use JSON, as it's easy to use and human-readable:
```
{
"Attack Speed": 5,
"Items": ["Dirt", "Flower", "Egg"]
}
```
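Reading it back is then a one-liner with the standard `json` module (the file name here is just an assumption):
```
import json

# "items.json" is a hypothetical file holding the stats above
with open("items.json") as f:
    items = json.load(f)

print(items["Attack Speed"])  # -> 5
```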
|
Well, you've got many more options. From least to most complicated:
* [Pickle](https://wiki.python.org/moin/UsingPickle)
* [Shelve](http://pymotw.com/2/shelve/)
* [SQLite](http://zetcode.com/db/sqlitepythontutorial/)
* [SQLAlchemy](http://www.sqlalchemy.org/)
What you should use really depends on what exactly your needs are. If you are only starting to develop the game, and are a novice, go for pickle or shelve (or both) for now.
They are simple and will let you keep your focus on game mechanics and learning Python.
Later, when you need something more complicated, you can move to a relational database and go to the web with SQLAlchemy.
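For instance, a minimal shelve sketch using the item from the question (the file name is an arbitrary example):
```
import shelve

# store one item
db = shelve.open("items.db")
db["Bloodforge"] = {"Price": 2715, "Lifesteal": 0.15, "Power": 40}
db.close()

# read it back later
db = shelve.open("items.db")
print(db["Bloodforge"]["Power"])  # -> 40
db.close()
```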
**EDIT:**
The information you provided suggests that you may not need Python at all. The thing you want could be achieved in a spreadsheet.
But analyzing the data would be simpler in SQL, so I can recommend MySQL for it, or, if you want something really simple, SQLite with some management tools, for example [this FF plugin](https://addons.mozilla.org/pl/firefox/addon/sqlite-manager/). You can create the table you need, manually create rows in it, and then write some SQL queries to give you statistics in the way you need.
| 10,108
|
19,228,516
|
Here is my argparse sample say sample.py
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", help="Stuff")
args = parser.parse_args()
print args
```
Python - 2.7.3
I expect that the user supplies a list of arguments separated by spaces after the -p option. For example, if you run
```
$ sample.py -p x y
Namespace(p=['x', 'y'])
```
But my problem is that when you run
```
$ sample.py -p x -p y
Namespace(p=['y'])
```
Which is neither here nor there. I would like one of the following
* Throw an exception to the user asking him to not use -p twice instead just supply them as one argument
* Just assume it is the same option and produce a list of ['x','y'].
I can see that python 2.7 is doing neither of them which confuses me. Can I get python to do one of the two behaviours documented above?
|
2013/10/07
|
[
"https://Stackoverflow.com/questions/19228516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44124/"
] |
>
> Note: python 3.8 adds an `action="extend"` which will create the desired list of ['x','y']
>
>
>
To produce a list of ['x','y'] use `action='append'`. Actually it gives
```
Namespace(p=[['x'], ['y']])
```
For each `-p` it gives a list `['x']`, as dictated by `nargs='+'`, but `append` means "add that value to what the Namespace already has". The default action just sets the value, e.g. `NS['p']=['x']`. I'd suggest reviewing the `action` paragraph in the docs.
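A quick demonstration of that behavior:
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", action="append")
print(parser.parse_args(["-p", "x", "-p", "y"]))
# -> Namespace(p=[['x'], ['y']])
```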
`optionals` allow repeated use by design. It enables actions like `append` and `count`. Usually users don't expect to use them repeatedly, or are happy with the last value. `positionals` (without the `-flag`) cannot be repeated (except as allowed by `nargs`).
[How to add optional or once arguments?](https://stackoverflow.com/questions/18544468/how-to-add-optional-or-once-arguments/18564559#18564559) has some suggestions on how to create a 'no repeats' argument. One is to create a custom `action` class.
|
I ran into the same issue. I decided to go with the custom action route as suggested by mgilson.
```
import argparse
class ExtendAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
if getattr(namespace, self.dest, None) is None:
setattr(namespace, self.dest, [])
getattr(namespace, self.dest).extend(values)
parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", help="Stuff", action=ExtendAction)
args = parser.parse_args()
print args
```
This results in
```
$ ./sample.py -p x -p y -p z w
Namespace(p=['x', 'y', 'z', 'w'])
```
Still, it would have been much neater if there was an `action='extend'` option in the library by default.
| 10,109
|
55,376,876
|
I would like to set up the local pgadmin in server mode behind a reverse proxy. The reverse proxy and pgadmin could be on the same machine. I tried to set it up but it always fails.
Here is my pgadmin conf:
```
Listen 8080
<VirtualHost *:8080>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/pgadmin.crt
SSLCertificateKeyFile /etc/pki/tls/private/pgadmin.key
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule ssl_module modules/mod_ssl.so
WSGIDaemonProcess pgadmin processes=1 threads=25
WSGIScriptAlias /pgadmin /usr/lib/python2.7/site-packages/pgadmin4-web/pgAdmin4.wsgi
<Directory /usr/lib/python2.7/site-packages/pgadmin4-web/>
WSGIProcessGroup pgadmin
WSGIApplicationGroup %{GLOBAL}
<IfModule mod_authz_core.c>
# Apache 2.4
Require all granted
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
</VirtualHost>
```
and my reverse proxy conf
```
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
ErrorLog /var/log/httpd/reverse_proxy_error.log
CustomLog /var/log/httpd/reverse_proxy_access.log combined
SSLProxyEngine on
SSLProxyVerify require
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCACertificateFile "/etc/pki/tls/certs/ca-bundle.crt"
ProxyPreserveHost On
ProxyPass / https://localhost:8080/pgadmin
ProxyPassReverse / https://localhost:8080/pgadmin
</VirtualHost>
```
The httpd starts, but when I try to test it with
```
wget --no-check-certificate https://localhost/
```
it gives me error 400,
but the
```
wget --no-check-certificate https://localhost:8080/pgadmin
```
is working. Where is the problem in my config?
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55376876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2886412/"
] |
This works for me. I proxy pgadmin to a subdirectory (https://localhost/pgadmin):
```
<VirtualHost *:80>
ServerName localhost
DocumentRoot "/var/www"
<Directory "/var/www">
AllowOverride all
</Directory>
ProxyPass /ws/ ws://0.0.0.0:8888/
ProxyPass /phpmyadmin/ http://phpmyadmin/
<Location /pgadmin/>
ProxyPass http://pgadmin:5050/
ProxyPassReverse http://pgadmin:5050/
RequestHeader set X-Script-Name /pgadmin
RequestHeader set Host $http_host
</Location>
</VirtualHost>
```
|
Have you tried with the latest version? I think it is fixed in this commit: [LINK](https://git.postgresql.org/gitweb/?p=pgadmin4.git;a=commit;h=f401def044c8b47974d58c71ff9e6f71f34ef41d)
Online Docs: <https://www.pgadmin.org/docs/pgadmin4/dev/server_deployment.html>
| 10,110
|
13,336,628
|
I have a very simple web page example read from an HTML file using Python. The HTML file, called led.html, is below:
```
<html>
<body>
<br>
<p>
<p>
<a href="?switch=1"><img src="images/on.png"></a>
</body>
</html>
```
and the python code is:
```
import cherrypy
import os.path
import struct
class Server(object):
led_switch=1
def index(self, switch=''):
html = open('led.html','r').read()
if switch:
self.led_switch = int(switch)
print "Hellow world"
return html
index.exposed = True
conf = {
'global' : {
'server.socket_host': '0.0.0.0', #0.0.0.0 or specific IP
'server.socket_port': 8080 #server port
},
'/images': { #images served as static files
'tools.staticdir.on': True,
'tools.staticdir.dir': os.path.abspath('images')
},
'/favicon.ico': { #favorite icon
'tools.staticfile.on': True,
'tools.staticfile.filename': os.path.abspath("images/bulb.ico")
}
}
cherrypy.quickstart(Server(), config=conf)
```
The web page contains only one button, called "on"; when I click it I can see the text "Hello World" displayed on the terminal.
My question is: how do I make this text display on the web page, over the "on" button, after clicking that button?
Thanks in advance.
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13336628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1813738/"
] |
If you need to swap *all* first (of pair) elements (and not just `(1, 36)` and `(0, 36)`), you can do
`fwd_count_sort=sorted(rvs_count.items(), key=lambda x: (x[0][1],-x[0][0]), reverse=True)`
|
I'm not exactly sure on the definition of your sorting criteria, but this is a method to sort the `pair` list according to the values in `fwd_count` and `rvs_count`. Hopefully you can use this to get to the result you want.
```
def keyFromPair(pair):
"""Return a tuple (f, r) to be used for sorting the pairs by frequency."""
global fwd_count
global rvs_count
first, second = pair
countFirstInv = -fwd_count[first] # use the negative to reverse the sort order
countSecond = rvs_count[second]
    return (countFirstInv, countSecond)

pairs_sorted = sorted(pair, key=keyFromPair)
```
The basic idea is to use Python's built-in tuple ordering to sort on multiple keys, and to negate one of the values in the tuple to make it a reverse-order sort.
| 10,112
|
63,851,302
|
I need to execute below function based on user input:
>
> If `x == 0`, then the block from `URL = ...` through `print('Success')` should be written to a file and saved as `test.py`.
>
>
>
At the backend, the saved file (`test.py`) would automatically get fetched by the Task Scheduler from the saved location and would run periodically.
And yes, there are many examples of writing to a file / running Python from another file, but I couldn't find anything resembling writing a Python script from another file.
I am surely missing a few basic steps.
```
if x == 0:
### Need to write below content to a file & save as test.py######
URL = "https://.../login"
headers = {"Content-Type":"application/json"}
params = {
"userName":"xx",
"password":"yy"
}
resp = requests.post(URL, headers = headers, data=json.dumps(params))
if resp.status_code != 200:
print('fail')
else:
        print('Success')
else:
### Need to write below content to a file ######
URL = "https://.../login"
headers = {"Content-Type":"application/json"}
params = {
"userName":"RR",
"password":"TT"
}
resp = requests.post(URL, headers = headers, data=json.dumps(params))
if resp.status_code != 200:
print('fail')
else:
print('Success')
```
|
2020/09/11
|
[
"https://Stackoverflow.com/questions/63851302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13824611/"
] |
Given a scalar `x` and a vector `v`, the expression `x <= quantile(v, .95)` can be written as `sum(x > v) < Q` where `Q = .95 * numel(v)`\*.
Also, `A_1` can be split before the loop to avoid extra indexing.
Moreover the most inner loop can be removed in favor of vectorization.
```
Af_1 = A_1(:,1);
Af_2 = A_2(:,1);
Af_3 = A_3(:,1);
As_1 = A_1(:,2:end);
As_2 = A_2(:,2:end);
As_3 = A_3(:,2:end);
Q = .95 * (n -1);
for i=1:m
for j=1:m
if any (sum (Af_1(i) + Af_2(j) + Af_3 > As_1(i,:) + As_2(j,:) + As_3, 2) < Q)
check(i) = 1;
break;
end
end
end
```
More optimization can be achieved by rearranging the expressions involved in the inequality and pre-computation:
```
lhs = A_3(:,1) - A_3(:,2:end);
lhsi = A_1(:,1) - A_1(:,2:end);
rhsj = A_2(:,2:end) - A_2(:,1);
Q = .95 * (n - 1);
for i=1:m
LHS = lhs + lhsi(i,:);
for j=1:m
if any (sum (LHS > rhsj(j,:), 2) < Q)
check(i) = 1;
break;
end
end
end
```
* Note that because of the method that is used in the computation of [quantile](https://www.mathworks.com/help/stats/quantile.html) you may get a slightly different result.
|
Option 1:
Because all numbers are positive, you can do some optimizations. The 95th percentile will only get higher if you add `A1` to the mix: if you find the `j` and `k` of the greatest 95th percentile of `A2+A3` on the right side compared to the sum of the first 2 elements, you can simply take that for every `i`.
```
maxDif = -inf;
for j = 1 : m
for k = 1 : m
newDif = quantile(A_2..., 0.95) - A_2(j,1)-A_3(k,1);
maxDif = max(newDif, maxDif);
end
end
```
If even that is too slow, you can first get `maxDifA2` and `maxDifA3`, then estimate that `maxDif` will be for those particular `j` and `k` values and calculate it.
Now, for some numbers you will get that `maxDif > A_1`, then the `check` is 1. For some numbers you will get that `maxDif + quantile(A1, 0.95) < A_1`, here `check` is 0 (if you estimated `maxDif` by separate calculation of A2 and A3 this isn't true!). For some (most?) you will unfortunately get values in between and this won't be helpful at all. Then what remains is option 2 (it is also more straightforward):
Option 2:
You could save some time if you can save summation `A_2+A_3` on the right side, as that calculation repeats for every different `i`, but that requires A LOT of memory. But `quantile` is the more expensive operation anyway, so you aren't saving a lot of time. Something along the lines of
```
for j = 1 : m
for k = 1 : m
A23R(j,k,:) = A2(j,:)+A3(k,:); % Unlikely to fit in memory.
end
end
```
Then you can perform your loops, using A23R and avoiding to repeat that sum for every `i`.
| 10,113
|
19,479,644
|
I saw a Python example today and it used `->`. For example, this is what I saw:
```
spam = None
bacon = 42
def monty_python(a:spam,b:bacon) -> "different:":
pass
```
What is that code doing? I'm not quite sure; I've never seen code like that. I don't really get what
```
a:spam,b:bacon
```
is doing either. Can someone explain this for me? I googled "what does -> do in python" but didn't find any good results.
|
2013/10/20
|
[
"https://Stackoverflow.com/questions/19479644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2805788/"
] |
It is a function annotation for the return type. [`annotations`](https://stackoverflow.com/questions/3038033/what-are-good-uses-for-python3s-function-annotations) do nothing inside the code; they are there to help the user, e.g. with code completion (in my experience).
Here is the [PEP](http://www.python.org/dev/peps/pep-3107/) for it.
Let me demonstrate, what I mean by "annotations do nothing inside the code". Here is an example:
```
def fun(a: str, b: int) -> str:
return 1
if __name__ == '__main__':
print(fun(10, 10))
```
The above code will run without any errors, but as you can see the first parameter *should* be a `string`, and the second an `int`. This is only flagged as a problem in my IDE; the code runs just fine.
|
They're [function annotations](http://ceronman.com/2013/03/12/a-powerful-unused-feature-of-python-function-annotations/). They don't really do anything by themselves, but they can be used for documentation or in combination with metaprogramming.
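For example, the annotations are simply collected in the function's `__annotations__` dict, which documentation and metaprogramming tools can inspect:
```
def fun(a: str, b: int) -> str:
    return a * b

print(fun.__annotations__)
# {'a': <class 'str'>, 'b': <class 'int'>, 'return': <class 'str'>}
```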
| 10,114
|
13,700,045
|
I'm trying to build a graph library in python (along with standard graph-algorithms). I've tried to implement DFS and this is what it looks like
```
def DFS(gr, s, path):
""" Depth first search
Returns a list of nodes "findable" from s """
if s in path: return False
path.append(s)
for each in gr.neighbors(s):
if each not in path:
DFS(gr, each, path)
```
This is working fine but I'm not happy with how it needs to be used. E.g. currently you need to do this
```
path = []
DFS(mygraph, "s", path)
print path
```
Instead of this, I want to DFS to be used in this manner
```
path = DFS(mygraph, "s")
print path
```
With the recursive DFS, I am unable to come up with the implementation that works like above. Can someone give me some pointers on how can I achieve this?
|
2012/12/04
|
[
"https://Stackoverflow.com/questions/13700045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/427069/"
] |
Just make a wrapper method that calls the one you already have:
```
def DFS(gr, s):
path = []
DFS2(gr, s, path)
return path
```
Here `DFS2` is the method you showed above.
|
Actually why don't you just set `path` to have a default of an empty list?
So using your same code but slightly different arguments:
```
# Original
def DFS(gr, s, path):
# Modified
def DFS(gr, s, path=[]):
# From here you can do
DFS(gr, s)
```
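One caveat with the default-argument approach (a well-known Python pitfall): the default list is created only once, at function definition time, so nodes accumulate across separate calls. The usual idiom is a `None` default; a sketch on the question's code:
```
def DFS(gr, s, path=None):
    if path is None:  # a fresh list per call, not shared between calls
        path = []
    if s in path:
        return path
    path.append(s)
    for each in gr.neighbors(s):
        if each not in path:
            DFS(gr, each, path)
    return path
```
Returning `path` also gives the `path = DFS(mygraph, "s")` usage the question asked for.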
| 10,115
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py, I suppose would be relevant:
```
if sys.platform == "win32": # For py2exe.
import matplotlib
sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
base_path = ""
data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed the "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8. I'm trying to build 32-bit binaries.
Also, there is no folder like "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\" anywhere on my machine.
**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive, I found that the DLLs are installed in odd paths (under the Windows WinSxS directory).
|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
I would recommend ignoring the dependency outright. Add `MSVCP90.dll` to the list of `dll_excludes` given as an option to `py2exe`. Users will have to install the Microsoft Visual C++ 2008 redistributable. An example:
```
setup(
options = {
"py2exe":{
...
"dll_excludes": ["MSVCP90.dll", "HID.DLL", "w9xpopen.exe"],
...
}
},
console = [{'script': 'program.py'}]
)
```
|
(new answer, since the other answer describes an alternate solution)
You can take the files from the WinSxS directory and copy them to the `C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT` directory (normally created by Visual Studio, which you don't have). Copy them to get the following structure:
```
+-Microsoft.VC90.CRT
| |
| +-Microsoft.VC90.CRT.manifest
| +-msvcm90.dll
| +-msvcp90.dll
| +-msvcr90.dll
```
Then, you should be able to run the setup program (still excluding `msvcp90.dll`, as in the other answer), and it should successfully find the files under `Microsoft.VC90.CRT` and copy them as data files to your bundle.
See [the py2exe tutorial](http://www.py2exe.org/index.cgi/Tutorial#Step522) for more information.
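Putting the two answers together, a minimal `setup.py` sketch (the paths are the ones already used in the question, and `program.py` is a placeholder script name):
```
import glob
from distutils.core import setup
import py2exe

setup(
    # skip bundling the CRT DLL itself...
    options={"py2exe": {"dll_excludes": ["MSVCP90.dll"]}},
    # ...and ship the whole Microsoft.VC90.CRT directory as data files instead
    data_files=[("Microsoft.VC90.CRT",
                 glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0"
                           r"\VC\redist\x86\Microsoft.VC90.CRT\*.*"))],
    console=[{"script": "program.py"}],
)
```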
| 10,117
|
48,716,989
|
I'm trying to create an infinite loop that will output the Y axis of a sine wave, and I want to use variables specifying the amplitude of the wave, the frequency, and the resolution, where frequency is the number of full sine waves per second, like electrical AC frequency.
I'm trying to do something like this:
```
#!/usr/bin/python
from time import sleep
from math import sin
amplitude=100
frequency=0.01
resolution=0.01
while True:
y = <Sine wave math>
print str(y)
sleep(resolution)
```
I need help with the math and getting the resolution part right.
|
2018/02/10
|
[
"https://Stackoverflow.com/questions/48716989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6058228/"
] |
How about something like this:
```
DELETE FROM inventory
WHERE updated NOT IN (
SELECT updated FROM (
SELECT MAX(updated) updated
FROM inventory
GROUP BY DATE(updated)
) i
)
```
This would work well if you have `updated` indexed (ordered).
Basically, the subquery gets the max updated date for each day, and the `DELETE` statement excludes those rows (`NOT IN`), deleting everything else.
|
Get the most recent time in a subquery, join that with the table, and delete.
```
DELETE i1
FROM inventory AS i1
JOIN (SELECT DATE(updated) AS date, MAX(updated) AS latest
FROM inventory
WHERE itemname = '24T7351'
GROUP BY date) AS i2 ON DATE(i1.updated) = i2.date AND i1.updated != i2.latest
WHERE itemname = '24T7351'
```
| 10,122
|
62,892,652
|
I have a class that gets the data from the form, makes some changes and saves it to the database.
I want to have several methods inside:
* Get
* Post
* And some other method that will make some changes to the data from the form
I want the post method to save the data from the form to the database and pass the instance variable to the next method. The next method should make some changes, save them to the database and return a redirect.
But I get an error: 'Site' object has no attribute 'get'.
Here is my code:
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
return site_instanse
return render(request, self.template_name, { 'form': form })
def get_robots_link(self, *args, **kwargs):
# Set veriable to the Robot Model
robots = Robot.objects.get(site=site_instanse)
# Robobts Link
robots_url = Robots(site_url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, )
```
I need to pass site\_instanse from def post to def get\_robots\_link
Here is the traceback:
```
Internal Server Error: /add/
Traceback (most recent call last):
File "/home/atom/.local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/atom/.local/lib/python3.8/site-packages/django/utils/deprecation.py", line 96, in __call__
response = self.process_response(request, response)
File "/home/atom/.local/lib/python3.8/site-packages/django/middleware/clickjacking.py", line 26, in process_response
if response.get('X-Frame-Options') is not None:
AttributeError: 'Site' object has no attribute 'get'
[14/Jul/2020 10:36:27] "POST /add/ HTTP/1.1" 500 61371
```
**Here is where the problem is:**
If I use redirect inside the post method, everything works fine, like so:
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
# Set veriable to the Robot Model
robots = Robot.objects.get(site=site_instanse)
# Robobts Link
robots_url = Robots(site_url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, ) ## HERE
return render(request, self.template_name, { 'form': form })
```
But if I remove the line `return redirect('checks:robots', robots.id, )` from the post method, put `return self.site_instance` there, and add `def get_robots_link`, it gives the error: `'Site' object has no attribute 'get'`
|
2020/07/14
|
[
"https://Stackoverflow.com/questions/62892652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13197641/"
] |
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
self.site_instance = site_instanse #See this
return site_instanse
return render(request, self.template_name, { 'form': form })
def get_robots_link(self, *args, **kwargs):
# Set veriable to the Robot Model
robots = Robot.objects.get(site=self.site_instance)
# Robobts Link
robots_url = Robots(site_url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, )
```
Use `self` to store the data on the object, so that other methods of the view can access it.
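A minimal sketch of wiring the two methods together under this approach, assuming `get_robots_link` is meant to be called at the end of `post` (which must itself return an HTTP response):
```
def post(self, request, *args, **kwargs):
    form = self.form_class(request.POST)
    if form.is_valid():
        site_url = request.POST.get('url')
        try:
            site_instanse = Site.objects.get(url=site_url)
        except ObjectDoesNotExist:
            site_instanse = form.save()
        self.site_instance = site_instanse
        self.site_url = site_url  # get_robots_link needs the URL too
        return self.get_robots_link()  # returns the redirect response
    return render(request, self.template_name, {'form': form})
```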
|
One must return a response from the `post` method. This code returns a `Site` instance on these lines. I'm not sure what the intended behavior is, but either a `redirect` or a `render` should be used.
```
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
return site_instanse
```
| 10,125
|
36,682,832
|
Exactly how should python models be exported for use in c++?
I'm trying to do something similar to this tutorial:
<https://www.tensorflow.org/versions/r0.8/tutorials/image_recognition/index.html>
I'm trying to import my own TF model into the C++ API instead of the Inception one. I adjusted the input size and the paths, but strange errors keep popping up. I spent all day reading Stack Overflow and other forums, but to no avail.
I've tried two methods for exporting the graph.
Method 1: metagraph.
```
...loading inputs, setting up the model, etc....
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
for i in range(num_steps):
x_batch, y_batch = batch(50)
if i%10 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:x_batch, y_: y_batch, keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: features_test, y_: labels_test, keep_prob: 1.0}))
saver = tf.train.Saver(tf.all_variables())
checkpoint =
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/model.ckpt'
saver.save(sess, checkpoint)
tf.train.export_meta_graph(filename=
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/cat_graph.pb',
meta_info_def=None,
graph_def=sess.graph_def,
saver_def=saver.restore(sess, checkpoint),
collection_list=None, as_text=False)
```
Method 1 yields the following error when trying to run the program:
```
[libprotobuf ERROR
google/protobuf/src/google/protobuf/wire_format_lite.cc:532] String field
'tensorflow.NodeDef.op' contains invalid UTF-8 data when parsing a protocol
buffer. Use the 'bytes' type if you intend to send raw bytes.
E tensorflow/examples/cat_face/main.cc:281] Not found: Failed to load
compute graph at 'tensorflow/examples/cat_face/data/cat_graph.pb'
```
I also tried another method of exporting the graph:
Method 2: write\_graph:
```
tf.train.write_graph(sess.graph_def,
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/',
'cat_graph.pb', as_text=False)
```
This version actually seems to load something, but I get an error about variables not being initialized:
```
Running model failed: Failed precondition: Attempting to use uninitialized
value weight1
[[Node: weight1/read = Identity[T=DT_FLOAT, _class=["loc:@weight1"],
_device="/job:localhost/replica:0/task:0/cpu:0"](weight1)]]
```
|
2016/04/17
|
[
"https://Stackoverflow.com/questions/36682832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5452997/"
] |
First, you need to write the graph definition to a file using the following:
```
with tf.Session() as sess:
    # build the network here
tf.train.write_graph(sess.graph.as_graph_def(), "C:\\output\\", "mymodel.pb")
```
Then, save your model using a saver:
```
saver = tf.train.Saver(tf.global_variables())
saver.save(sess, "C:\\output\\mymodel.ckpt")
```
Then you will have 2 files in your output directory: mymodel.ckpt and mymodel.pb.
Download freeze\_graph.py from [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) and run the following command in C:\output\. Change the output node name if it is different for you.
>
> python freeze\_graph.py --input\_graph mymodel.pb --input\_checkpoint mymodel.ckpt --output\_node\_names softmax/Reshape\_1 --output\_graph mymodelforc.pb
>
>
>
You can then use mymodelforc.pb directly from C++.
You can use the following C++ code to load the proto file:
```
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/cc/ops/image_ops.h"
Session* session;
NewSession(SessionOptions(), &session);
GraphDef graph_def;
ReadBinaryProto(Env::Default(), "C:\\output\\mymodelforc.pb", &graph_def);
session->Create(graph_def);
```
Now you can use session for inference.
You can supply the inference parameters as follows:
```
// Same dimension and type as input of your network
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ 1, height, width, channel }));
std::vector<tensorflow::Tensor> finalOutput;
// Fill input tensor with your input data
std::string InputName = "input"; // Your input placeholder's name
std::string OutputName = "softmax/Reshape_1"; // Your output placeholder's name
session->Run({ { InputName, input_tensor } }, { OutputName }, {}, &finalOutput);
// finalOutput will contain the inference output that you search for
```
|
You can try this (modify name of output layer):
```
import os
import tensorflow as tf
from tensorflow.python.framework import graph_util
def load_graph_def(model_path, sess=None):
sess = sess if sess is not None else tf.get_default_session()
saver = tf.train.import_meta_graph(model_path + '.meta')
saver.restore(sess, model_path)
def freeze_graph(sess, output_layer_name, output_graph):
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
# Exporting the graph
print("Exporting graph...")
output_graph_def = graph_util.convert_variables_to_constants(
sess,
input_graph_def,
output_layer_name.split(","))
with tf.gfile.GFile(output_graph, "wb") as f:
f.write(output_graph_def.SerializeToString())
def freeze_from_checkpoint(checkpoint_file, output_layer_name):
model_folder = os.path.basename(checkpoint_file)
output_graph = os.path.join(model_folder, checkpoint_file + '.pb')
with tf.Session() as sess:
load_graph_def(checkpoint_file)
freeze_graph(sess, output_layer_name, output_graph)
if __name__ == '__main__':
freeze_from_checkpoint(
checkpoint_file='/home/sander/tensorflow/tensorflow/examples/cat_face/data/model.ckpt',
output_layer_name='???')
```
| 10,126
|
61,578,697
|
I am trying to wrap my head around python. Basically I am trying to remove a duplicate string (a date to be more precise) in some data. So for example:
```
2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31
```
Notice 2020-03-31 is duplicated. I would like to find the duplicated date and rename it as `last quarter`;
conversely, if the data has no duplicates I want to leave it as is.
I currently have some working code that checks whether there are duplicates, and that is it. I just need some guidance in the right direction as to how to rename the duplicate.
```
def checkForDuplicates(listOfElems):
if len(listOfElems) == len(set(listOfElems)):
return
else:
return print("You have a duplicate")
```
|
2020/05/03
|
[
"https://Stackoverflow.com/questions/61578697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461709/"
] |
Use a [`set`](https://docs.python.org/3/library/stdtypes.html#set) to keep track of items you've seen. If any items are already in the set, append the desired string (use `= "last quarter"` if you want a full rename; it's unclear).
```
data = """2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31""".split("\n")
seen = set()
for i, e in enumerate(data):
if e in seen:
data[i] += " last quarter"
seen.add(e)
print(data)
```
Output:
```
['2019-03-31', '2019-06-30', '2019-09-30',
'2019-12-31', '2020-03-31', '2020-03-31 last quarter']
```
|
For your function, if you have a list of elements then your function will be:
```
def checkForDuplicates(listOfElems):
    check = []
    for i in range(len(listOfElems)):
        if listOfElems[i] in check:
            # then it is a duplicate and you can rename it
            listOfElems[i] = 'last quarter'
        else:
            check.append(listOfElems[i])
```
This will rename all the duplicate entries to `last quarter`.
| 10,128
|
44,610,150
|
I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?
I tried to install ibm\_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.
By the way, I installed the latest Platform SDK as instructed (it is the predecessor of the Windows 10 SDK, so I had to install the Windows 10 SDK instead). I also installed .NET SDK v1.1, which is said to include Visual C++ 2003 (Visual C++ 2003 is not available on its own today). I considered installing VS2017, but because it was too big (12-some GB) I passed on that option.
I am stuck and can't proceed, since I don't know what has changed since the installation instructions were written and what else I need to do. How can I install Python with the ibm\_db package on Windows 10?
|
2017/06/17
|
[
"https://Stackoverflow.com/questions/44610150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8176681/"
] |
Timeout is for raising a timeout error if an event isn't emitted within a certain time period. You probably want Observable.interval:
```
return Observable.interval(1000)
  .mergeMap(t => this.http.get(matchUrl))
  .toPromise()
  .then(response => response.json().participants as Match[]);
```
If you want to serialize the requests so that each one runs 1 second after the other, use concatMap instead:
```
return Observable.interval(1000)
  .concatMap(t => this.http.get(matchUrl))
  .toPromise()
  .then(response => response.json().participants as Match[]);
```
|
Use `debounceTime` operator as below
```
getMatch(matchId: number): Promise<Match[]> {
  let matchUrl: string = 'https://br1.api.riotgames.com/lol/match/v3/matches/' + matchId + '?api_key=';
  return this.http.get(matchUrl)
    .debounceTime(1000)
    .toPromise()
    .then(response => response.json().participants as Match[]);
}
```
| 10,131
|
5,395,782
|
In a python/google app engine app, I've got a choice between storing some static data (couple KB in size) in a local json/xml file or putting it into the datastore and querying it from there. The data is created by me, so there's no issues with badly formed data. In specific terms such as saving quota, less resource usage and app speed, which is the better method for this case?
I'd guess using simplejson to read from a JSON file would be better, since this method doesn't require a datastore query whilst still being reasonably quick.
Taking this further, the app doesn't require a large data store (currently ~400KB), so would it be worthwhile moving all the data to a json file to get around the quota restrictions?
|
2011/03/22
|
[
"https://Stackoverflow.com/questions/5395782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614453/"
] |
If your data is small, static and infrequently changed, you'll get the best performance by just writing your data as a `dict` in its own module and `import`ing it where you need it. This takes advantage of the fact that Python caches your modules on import.
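A minimal sketch, with the module and key names being arbitrary examples:
```
# staticdata.py -- lives alongside your application code
DATA = {
    "setting_a": 1,
    "setting_b": [2, 3, 4],
}
```
Then `from staticdata import DATA` wherever it's needed; Python parses the module once and serves later imports from its module cache.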
|
It is faster to keep your data in a static file instead of the datastore. As you said, this saves on datastore quota, and also saves time in round-trips to the datastore.
However, any **data you store in static files is static and cannot be changed by your application** (see the ["Sandbox" section here](http://code.google.com/appengine/docs/whatisgoogleappengine.html)). You would need to re-deploy the file each time you wanted to make changes (i.e. none of the changes could be made through a web interface). If this is acceptable, then use this method, since it's simple and fast.
| 10,132
|
32,977,076
|
Related: [ImportError: No module named bootstrap3 even while using virtualenv](https://stackoverflow.com/questions/29781872/importerror-no-module-named-bootstrap3-even-while-using-virtualenv)
Every time I attempt to use manage.py (startapp, shell, etc) or load my page (using Apache), I get the error below. I'm running Django 1.8 inside a virtual environment, and already installed django-bootstrap-toolkit (and tried with django-bootstrap as well although I'm not sure what the difference is). The instructions on github said to add 'bootstrap3' to INSTALLED\_APPS, which I did, and now get the following error:
```
...
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named bootstrap3
```
My Versions:
```
Django 1.8
Python 2.7.5
django-bootstrap-toolkit 2.15.0
```
from settings.py:
```
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'bootstrap3',
'django_cron',
```
What am I missing here? Thanks for your time.
Edit: Full pip freeze output:
```
(venv)[root@ myhost path]# pip freeze
IPy==0.75
PyOpenGL==3.0.1
SSSDConfig==1.12.2
backports.ssl-match-hostname==3.4.0.2
blivet==0.61.0.27
chardet==2.2.1
configobj==4.7.2
configshell-fb==1.1.14
coverage==3.6b3
cupshelpers==1.0
decorator==3.4.0
di==0.3
django-bootstrap-toolkit==2.15.0
dnspython==1.11.1
ethtool==0.8
firstboot==19.5
freeipa==2.0.0.alpha.0
fros==1.0
glusterfs-api==3.6.0.29
iniparse==0.4
initial-setup==0.3.9.23
iotop==0.6
ipaplatform==4.1.0
ipapython==4.1.0
javapackages==1.0.0
kerberos==1.1
kitchen==1.1.1
kmod==0.1
langtable==0.0.13
lxml==3.2.1
meld==3.11.0
netaddr==0.7.5
nose==1.3.0
numpy==1.7.1
pcp==1.0
policycoreutils-default-encoding==0.1
psutil==1.2.1
psycopg2==2.5.1
pyOpenSSL==0.13.1
pyasn1==0.1.6
pycups==1.9.63
pycurl==7.19.0
pygobject==3.8.2
pygpgme==0.3
pyinotify==0.9.4
pykickstart==1.99.43.17
pyliblzma==0.5.3
pyodbc==3.0.0-unsupported
pyparsing==1.5.6
pyparted==3.9
python-augeas==0.4.1
python-dateutil==1.5
python-default-encoding==0.1
python-dmidecode==3.10.13
python-ldap==2.4.15
python-meh==0.25.2
python-nss==0.16.0
python-yubico==1.2.1
pytz==2012d
pyudev==0.15
pyusb==1.0.0b1
pyxattr==0.5.1
qrcode==5.0.1
rtslib-fb==2.1.50
scdate==1.10.6
seobject==0.1
sepolicy==1.1
setroubleshoot==1.1
six==1.3.0
slip==0.4.0
slip.dbus==0.4.0
stevedore==0.14
targetcli-fb==2.1.fb37
urlgrabber==3.10
urwid==1.1.1
virtualenv==1.10.1
virtualenv-clone==0.2.4
virtualenvwrapper==4.3.1
wsgiref==0.1.2
yum-langpacks==0.4.2
yum-metadata-parser==1.1.4
```
|
2015/10/06
|
[
"https://Stackoverflow.com/questions/32977076",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1303827/"
] |
As the docs say, you need the `django-bootstrap3` package (`pip install django-bootstrap3`) to use `bootstrap3`. Here is the [link](https://github.com/dyve/django-bootstrap3).
|
I defer to the answer above. Just pointing out that the django-bootstrap-toolkit, which is for v2 of Bootstrap, could be removed. Thanks for using these libraries!
| 10,135
|
5,880,781
|
Can anybody tell me what is wrong in this program? I face
```
syntaxerror unexpected character after line continuation character
```
when I run this program:
```
f = open(D\\python\\HW\\2_1 - Copy.cp,"r");
lines = f.readlines();
for i in lines:
thisline = i.split(" ");
```
|
2011/05/04
|
[
"https://Stackoverflow.com/questions/5880781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/642564/"
] |
You need to quote that filename:
```
f = open("D\\python\\HW\\2_1 - Copy.cp", "r")
```
Otherwise the bare backslash after the D is interpreted as a line-continuation character, and should be followed by a newline. This is used to extend long expressions over multiple lines, for readability:
```
print "This is a long",\
"line of text",\
"that I'm printing."
```
Also, you shouldn't have semicolons (`;`) at the end of your statements in Python.
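A raw string is another common way to avoid doubling every backslash (note the drive colon, which the original line was also missing):
```
f = open(r"D:\python\HW\2_1 - Copy.cp", "r")
```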
|
Replace
`f = open(D\\python\\HW\\2_1 - Copy.cp,"r");`
by
`f = open("D:\\python\\HW\\2_1 - Copy.cp", "r")`
1. File path needs to be a string (constant)
2. need colon in Windows file path
3. space after comma for better style
4. ; after statement is allowed but fugly.
What tutorial are you using?
| 10,136
|
16,297,892
|
Upgrading to 13.04 has totally messed my system up.
I am having this issue when running
```
./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
File "/home/rats/rats/local/lib/python2.7/site-packages/django/core/management
/__init__.py", line 4, in <module>
from optparse import OptionParser, NO_DEFAULT
File "/usr/lib/python2.7/optparse.py", line 77, in <module>
import textwrap
File "/usr/lib/python2.7/textwrap.py", line 10, in <module>
import string, re
File "/usr/lib/python2.7/string.py", line 83, in <module>
import re as _re
File "/home/rats/rats/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/home/rats/rats/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/home/rats/rats/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/home/rats/rats/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
```
This is happening in both the real environment and the virtual environment.
I tried removing Python with
```
sudo apt-get remove python
```
and sadly it removed everything.
Now Google Chrome does not show any fonts.
I am trying to get things back to a working state.
Help with configuring it properly again is needed.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16297892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1080407/"
] |
If you are using virtualenvwrapper then you can recreate the virtualenv on top of the existing one (with no environment currently active):
`mkvirtualenv <existing name>`
which should pull in the latest (upgraded) python version from the system and fix any mismatch errors.
|
I have just solved that problem on my machine.
The problem was that Ubuntu 13.04 uses Python 2.7.4, which conflicts with the Python version inside the `virtualenv`.
What I did was re-create the `virtualenv` with the new version of Python. I think it's the simplest way, but you could try to upgrade the Python version without re-creating the whole `virtualenv`.
| 10,139
|
51,531,429
|
I have an ndarray of N 1x3 arrays that I'd like to dot-multiply with a 3x3 matrix. I can't seem to figure out an efficient way to do this, as all the multi\_dot, tensordot, etc. methods seem to recursively sum or multiply the results of each operation. I simply want to apply a dot product the same way you can apply a scalar. I can do this with a for loop or a list comprehension, but it is much too slow for my application.
```
N = np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9], ...])
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
```
I'd like to perform something such as this but without any python loops:
```
np.asarray([np.dot(m, a) for a in N])
```
so that it simply returns `[m * N[0], m * N[1], m * N[2], ...]`
What's the most efficient way to do this? And is there a way to do this so that if N is just a single 1x3 matrix, it will just output the same as np.dot(m, N)?
|
2018/07/26
|
[
"https://Stackoverflow.com/questions/51531429",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587755/"
] |
Try This:
```
import numpy as np
N = np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [4, 5, 6]])
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
re0 = np.asarray([np.dot(m, a) for a in N]) # original
re1 = np.dot(m, N.T).T # efficient
print("result0:\n{}".format(re0))
print("result1:\n{}".format(re1))
print("Is result0 == result1? {}".format(np.array_equal(re0, re1)))
```
Output:
```
result0:
[[ 140 320 500]
[ 320 770 1220]
[ 500 1220 1940]
[ 140 320 500]
[ 320 770 1220]]
result1:
[[ 140 320 500]
[ 320 770 1220]
[ 500 1220 1940]
[ 140 320 500]
[ 320 770 1220]]
Is result0 == result1? True
```
Time cost:
```
import timeit
setup = '''
import numpy as np
N = np.random.random((1, 3))
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 790]])
'''
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
0.295798063278
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.10135102272
# N = np.random.random((10, 3))
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
1.7417007659969386
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.1587108800013084
# N = np.random.random((100, 3))
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
11.6454949379
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.180465936661
```
|
First, regarding your last question. There's a difference between a (3,) `N` and (1,3):
```
In [171]: np.dot(m,[1,2,3])
Out[171]: array([140, 320, 500]) # (3,) result
In [172]: np.dot(m,[[1,2,3]])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-172-e8006b318a32> in <module>()
----> 1 np.dot(m,[[1,2,3]])
ValueError: shapes (3,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)
```
Your iterative version produces a (1,3) result:
```
In [174]: np.array([np.dot(m,a) for a in [[1,2,3]]])
Out[174]: array([[140, 320, 500]])
```
Make `N` a (4,3) array (this helps keep the first dim of N distinct):
```
In [176]: N = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10,11,12]])
In [177]: N.shape
Out[177]: (4, 3)
In [178]: np.array([np.dot(m,a) for a in N])
Out[178]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
Result is (4,3).
A simple `dot` doesn't work (same as in the (1,3) case):
```
In [179]: np.dot(m,N)
...
ValueError: shapes (3,3) and (4,3) not aligned: 3 (dim 1) != 4 (dim 0)
In [180]: np.dot(m,N.T) # (3,3) dot with (3,4) -> (3,4)
Out[180]:
array([[ 140, 320, 500, 680],
[ 320, 770, 1220, 1670],
[ 500, 1220, 1940, 2660]])
```
So this needs another transpose to match your iterative result.
The explicit indices of `einsum` can also take care of these transpose:
```
In [181]: np.einsum('ij,kj->ki',m,N)
Out[181]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
Also works with the (1,3) case (but not with the (3,) case):
```
In [182]: np.einsum('ij,kj->ki',m,[[1,2,3]])
Out[182]: array([[140, 320, 500]])
```
`matmul`, `@` is also designed to calculate repeated dots - if the inputs are 3d (or broadcastable to that):
```
In [184]: (m@N[:,:,None]).shape
Out[184]: (4, 3, 1)
In [185]: (m@N[:,:,None])[:,:,0] # to squeeze out that last dimension
Out[185]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
`dot` and `matmul` describe what happens with 1, 2 and 3d inputs. It can take some time, and experimentation, to get a feel for what is happening. The basic rule is last of A with 2nd to the last of B.
Your `N` is actually (n,3), `n` `(3,)` arrays. Here's what 4 (1,3) arrays looks like:
```
In [186]: N1 = N[:,None,:]
In [187]: N1.shape
Out[187]: (4, 1, 3)
In [188]: N1
Out[188]:
array([[[ 1, 2, 3]],
[[ 4, 5, 6]],
[[ 7, 8, 9]],
[[10, 11, 12]]])
```
and the dot as before (4,1,3) dot (3,3).T -> (4,1,3) -> (4,3)
```
In [190]: N1.dot(m.T).squeeze()
Out[190]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
and n of those:
```
In [191]: np.array([np.dot(a,m.T).squeeze() for a in N1])
Out[191]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
| 10,142
|
25,570,507
|
Still working with LDAP...
The problem I submit today is this: I'm creating a posixGroup on an LDAP server using a custom method developed in Python with the Django framework. I attach the method's code below.
The main issue is that the attribute **gidNumber is compulsory for the posixGroup class**, but it is usually not required when using graphical LDAP clients like phpLDAPadmin, since they fill this field in automatically, like an auto-incrementing integer.
Here is the question: **is gidNumber an auto-incrementing attribute by default, or only when using clients like the one quoted above? Must I specify it when creating the posixGroup entry?**
```
def ldap_cn_entry(self):
import ldap.modlist as modlist
dn = u"cn=myGroupName,ou=plant,dc=ldap,dc=dem2m,dc=it"
# A dict to help build the "body" of the object
attrs = {}
attrs['objectclass'] = ['posixGroup']
attrs['cn'] = 'myGroupName'
attrs['gidNumber'] = '508'
# Convert our dict to nice syntax for the add-function using modlist-module
ldif = modlist.addModlist(attrs)
# Do the actual synchronous add-operation to the ldapserver
self.connector.add_s(dn, ldif)
```
* connector is first instantiated in the constructor of the class this method belongs to. The constructor also handles the LDAP initialization and binding. The connection is then closed by the destructor.
* To use the method I begin by instantiating the class it belongs to, which also connects to the LDAP server. Then I use the method, and finally I destroy the object I instantiated before, to close the connection. **Indeed, everything works if I use this procedure to create a different kind of entry, or if I specify the gidNumber manually.**
The fact is I CAN'T specify the gidNumber every time I want to create a group, which defeats my purpose. I should let it be filled in automatically (if that's possible) or think of another way to complete it.
I'm not posting more of the class code so as not to clutter the page.
I'll provide more information if needed. Thank you.
|
2014/08/29
|
[
"https://Stackoverflow.com/questions/25570507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3963403/"
] |
See paragraph a "Note on Minification" in <https://docs.angularjs.org/tutorial/step_05>
It is used to keep a string reference of your injections of dependencies after minifications :
>
> Since Angular infers the controller's dependencies from the names of
> arguments to the controller's constructor function, if you were to
> minify the JavaScript code for PhoneListCtrl controller, all of its
> function arguments would be minified as well, and the dependency
> injector would not be able to identify services correctly.
>
>
>
|
Controllers are callables, and their arguments must be injected with existing/valid/registered dependencies. Angular supports three ways:
1. If the passed controller (this also applies to providers) is an array, the last item is the controller, and the preceding items are expected to be strings with the names of the dependencies to inject. The number of names and the function's arity must match.
```
//parameter names are what I want.
c = mymodule.controller('MyController', ['$scope', '$http', function($s, $h) {}]);
```
2. Otherwise, if the passed controller has an `$inject` property, it is expected that such property is an array of strings being the names of the dependencies. The length of the array and the arity must match.
```
con = function($s, $h) {};
con.$inject = ['$scope', '$http'];
c = mymodule.controller('MyController', con);
```
3. Otherwise, the array of names to inject is taken from the parameter list, so they **must be named accordingly**.
```
c = mymodule.controller('MyController', function($scope, $http) {});
//one typo, one minification, and you're screwed
```
You should **never** expect the controller to work if you don't explicitly set the names of the dependencies to inject. Relying on implicit parameter names is a **bad** practice because:
1. The parameter names **will** change if you minify (and you **will** -some day- minify your script).
2. A typo in your parameter name, and you'll spend hours looking for the error.
Suggestion: **always** use an explicit notation (way 1 or 2).
| 10,143
|
8,127,648
|
I have two threads in python (2.7).
I start them at the beginning of my program. While they execute, my program reaches its end and exits, killing both of my threads before they have finished.
I'm trying to figure out how to wait for both threads to finish before exiting.
```
def connect_cam(ip, execute_lock):
try:
conn = TelnetConnection.TelnetClient(ip)
execute_lock.acquire()
ExecuteUpdate(conn, ip)
execute_lock.release()
except ValueError:
pass
execute_lock = thread.allocate_lock()
thread.start_new_thread(connect_cam, ( headset_ip, execute_lock ) )
thread.start_new_thread(connect_cam, ( handcam_ip, execute_lock ) )
```
In .NET I would use something like WaitAll(), but I haven't found the equivalent in Python. In my scenario, TelnetClient is a long operation which may fail after a timeout.
|
2011/11/14
|
[
"https://Stackoverflow.com/questions/8127648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6367/"
] |
The `thread` module is meant as a lower-level primitive interface to Python's threading machinery - use [`threading`](http://docs.python.org/library/threading.html#thread-objects) instead. Then, you can use the thread object's `join()` method to synchronize threads.
>
> Other threads can call a thread’s join() method. This blocks the
> calling thread until the thread whose join() method is called is
> terminated.
>
>
>
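For example, a minimal sketch adapting the code from the question (assuming `connect_cam`, `headset_ip` and `handcam_ip` are defined as in the question):

```
import threading

execute_lock = threading.Lock()

t1 = threading.Thread(target=connect_cam, args=(headset_ip, execute_lock))
t2 = threading.Thread(target=connect_cam, args=(handcam_ip, execute_lock))
t1.start()
t2.start()

# join() blocks the main thread until each worker thread has terminated
t1.join()
t2.join()
```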
|
First, you ought to be using the [threading](http://docs.python.org/library/threading.html#module-threading) module, not the thread module. Next, have your main thread [join()](http://docs.python.org/library/threading.html#threading.Thread.join) the other threads.
| 10,144
|
43,603,199
|
I am using Docker for a Python Flask webapp, but I get an error when I try to run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is within the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
`$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)`
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
ps -a
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
I think your code is OK.
You need to close the ResultSet, Statement and Connection in a finally block.
|
Though your code works, I strongly suggest **refactoring it for better maintainability and readability**, as shown below. Also, ensure that the resources are closed properly:
```
public void dashboardReports() {
handleTotalStocks();
handleTotalSales();
handleTotalPurchages();
//Add others
}
```
**handleTotalStocks() method:**
```
private void handleTotalStocks() {
    String total_stock_value = "select sum(price*closingstock) as tsv from purchase_table";
    // try-with-resources closes the Statement and ResultSet automatically
    try (Statement ps_tsv = connection.createStatement();
         ResultSet set_tsv = ps_tsv.executeQuery(total_stock_value)) {
        if (set_tsv.next()) {
            total_stock.setText(set_tsv.getString("tsv"));
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
//add other methods
```
| 10,146
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/except block which is triggered on purpose (for testing) by querying a column that doesn't exist in my database:
```
import traceback

try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is raised, as can be seen from the output in my notebook:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
If **spark-shell** doesn't show this line on start:
>
> Spark context available as 'sc' (master = local[\*], app id = local-XXX).
>
>
>
Run
```
val sc = SparkContext.getOrCreate()
```
|
The issue is that you created `sc` of type `SparkConf`, not `SparkContext` (the names look similar).
---
To use the `parallelize` method in Spark 2.0 or any other version, `sc` should be a `SparkContext`, not a `SparkConf`. The correct code should look like this:
```
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sparkConf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(sparkConf)
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
This will give you the desired result.
| 10,156
|
16,913,086
|
I want to run a third-party tool written in Python on my Ubuntu machine ([corgy tool](https://github.com/pkerpedjiev/corgy)).
However, I don't know how to add additional modules to the Python path.
```
cat doc/download.rst
There is currently no setup.py, so you need to manually add
the download directory to your PYTHON_PATH environment variable.
```
How can I add a directory to PYTHON\_PATH?
**I have tried:**
`export PYTHON_PATH=/home/user/directory:$PYTHON_PATH && source .bashrc`
`export PATH=/home/user/directory:$PATH && source .bashrc`
```
python
import sys
sys.path.append("/home/user/directory/")
```
But when I try to run this tool I get:
```
Traceback (most recent call last):
File "examples/dotbracket_to_bulge_graph.py", line 4, in <module>
import corgy.graph.bulge_graph as cgb
ImportError: No module named corgy.graph.bulge_graph
```
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16913086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1286528/"
] |
Create a `.bash_profile` in your home directory. Then, add the line
```
PYTHONPATH=$PYTHONPATH:new_dir
EXPORT $PYTHONPATH
```
Or even better:
```
if [ -d "new_dir" ] ; then
PYTHONPATH="$PYTHONPATH:new_dir"
fi
EXPORT $PYTHONPATH
```
The `.bash_profile` properties are loaded every time you log in.
The `source` command is useful if you don't want to log in again.
|
[@fedorqui](https://stackoverflow.com/users/1983854/fedorqui-so-stop-harming)'s answer above was almost good for me, but there is at least one mistake (I am not sure about the `export` statement in all caps, I am a complete newbie).
There should not be a `$` sign preceding PYTHONPATH in the export statement. So the options would be:
>
> Create a .bash\_profile in your home directory. Then, add the line
>
>
>
> ```
> PYTHONPATH=$PYTHONPATH:new_dir
> export PYTHONPATH
>
> ```
>
> Or even better:
>
>
>
> ```
> if [ -d "new_dir" ] ; then
> PYTHONPATH="$PYTHONPATH:new_dir"
> fi
> export PYTHONPATH
>
> ```
>
>
| 10,163
|
54,503,298
|
I have a list of lists (all of the same size) in Python, like this:
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
```
I want to remove some columns (the i-th element of every inner list).
Is there any way to do this without `for` statements?
|
2019/02/03
|
[
"https://Stackoverflow.com/questions/54503298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7789910/"
] |
As mentioned, you can't do this without a loop. However, using built-in functions, here's a functional approach that doesn't explicitly use any loop:
```
In [24]: from operator import itemgetter
In [25]: def remove_col(arr, ith):
...: itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
...: return list(map(list, map(itg, arr)))
...:
```
Demo:
```
In [26]: remove_col(A, 1)
Out[26]: [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
In [27]: remove_col(A, 3)
Out[27]: [[1, 2, 3], ['a', 'b', 'c'], [12, 13, 14]]
```
Note that instead of `list(map(list, map(itg, arr)))`, if you only return `map(itg, arr)` it will give you the expected result, but as an iterator of tuples instead of a list of lists. That will be more optimized in terms of both memory and run-time in this case.
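That lazy variant would look like this (same body as above, minus the list conversions, reusing the `itemgetter` import):

```
def remove_col_lazy(arr, ith):
    itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
    return map(itg, arr)  # nothing is materialized until you iterate
```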
Also, using loops here's the way I'd do this:
```
In [31]: def remove_col(arr, ith):
...: return [[j for i,j in enumerate(sub) if i != ith] for sub in arr]
```
Surprisingly (or not, if you believe in the power of C :)), the functional approach is even faster for large arrays.
```
In [41]: arr = A * 10000
In [42]: %timeit remove_col_functional(arr, 2)
8.42 ms ± 37.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit remove_col_list_com(arr, 2)
23.7 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# And if in functional approach you just return map(itg, arr)
In [47]: %timeit remove_col_functional_iterator(arr, 2)
1.48 µs ± 4.71 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
|
You could easily use [list comprehension](https://www.pythonforbeginners.com/basics/list-comprehensions-in-python) and [slices](https://www.pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python/):
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
k = 1
B = [l[:k]+l[k+1:] for l in A]
print(B) # >> returns [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
```
| 10,164
|
60,171,622
|
I'm working with large data sets. I'm trying to use the NumPy library where I can, or plain Python features (e.g. list comprehensions), to process the data sets efficiently.
First I find the relevant indexes:
```
dt_temp_idx = np.where(dt_diff > dt_temp_th)
```
Then I want to create a mask containing for each index a sequence starting from the index to a stop value, I tried:
```
mask_dt_temp = [np.arange(idx, idx+dt_temp_step) for idx in dt_temp_idx]
```
and:
```
mask_dt_temp = [idxs for idx in dt_temp_idx for idxs in np.arange(idx, idx+dt_temp_step)]
```
but it gives me the exception:
```
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Example input:
```
indexes = [0, 100, 1000]
```
Example output, where each sequence stops 10 integers after its index:
```
list = [0, 1, ..., 10, 100, 101, ..., 110, 1000, 1001, ..., 1010]
```
1) How can I solve it?
2) Is this the best practice for doing it?
|
2020/02/11
|
[
"https://Stackoverflow.com/questions/60171622",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5080562/"
] |
Using masks (boolean arrays) is efficient - memory-efficient and performant too. We will make use of [`SciPy's binary-dilation`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html) to extend the thresholded mask.
Here's a step-by-step setup and solution run-
```
In [42]: # Random data setup
...: np.random.seed(0)
...: dt_diff = np.random.rand(20)
...: dt_temp_th = 0.9
In [43]: # Get mask of threshold crossings
...: mask = dt_diff > dt_temp_th
In [44]: mask
Out[44]:
array([False, False, False, False, False, False, False, False, True,
False, False, False, False, True, False, False, False, False,
False, False])
In [45]: W = 3 # window size for extension (edit it according to your use-case)
In [46]: from scipy.ndimage.morphology import binary_dilation
In [47]: extm = binary_dilation(mask, np.ones(W, dtype=bool), origin=-(W//2))
In [48]: mask
Out[48]:
array([False, False, False, False, False, False, False, False, True,
False, False, False, False, True, False, False, False, False,
False, False])
In [49]: extm
Out[49]:
array([False, False, False, False, False, False, False, False, True,
True, True, False, False, True, True, True, False, False,
False, False])
```
Compare `mask` against `extm` to see how the extension takes place.
As we can see, the thresholded `mask` is extended by window-size `W` on the right side, giving the expected output mask `extm`. This can be used to mask out elements of the input array: `dt_diff[~extm]` simulates deleting/dropping those elements via [`boolean-indexing`](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing), or inversely `dt_diff[extm]` selects them.
### Alternatives with NumPy based functions
**Alternative #1**
```
extm = np.convolve(mask, np.ones(W, dtype=int))[:len(dt_diff)]>0
```
**Alternative #2**
```
idx = np.flatnonzero(mask)
ext_idx = (idx[:,None]+ np.arange(W)).ravel()
ext_mask = np.ones(len(dt_diff), dtype=bool)
ext_mask[ext_idx[ext_idx<len(dt_diff)]] = False
# Get filtered o/p
out = dt_diff[ext_mask]
```
|
`dt_temp_idx` is a numpy array, but it is still a Python iterable, so you can use a good old Python list comprehension:
```
lst = [ i for j in dt_temp_idx for i in range(j, j+11)]
```
If you want to cope with sequence overlaps and turn the result back into an np.array, just do:
```
result = np.array(sorted({i for j in dt_temp_idx for i in range(j, j+11)}))
```
But beware: the set guarantees no repetition, but it can be more expensive than a simple list (and, being unordered, it needs the `sorted` call before conversion to an array).
| 10,168
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder of my Exchange/Outlook profile, so I can manipulate the data. However, I'm having trouble finding much information about Python and Exchange/Outlook integration; a lot of it is either very old, has no docs, or isn't explained. I've tried several snippets but keep getting the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted but that didn't work either (same error). Also tried both with Outlook open and closed and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I had the same problem you did - didn't find much that worked. The following code, however, works like a charm.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6) # "6" refers to the index of a folder - in this case,
# the inbox. You can change that number to reference
# any other folder
messages = inbox.Items
message = messages.GetLast()
body_content = message.body
print body_content
```
|
I have created my own iterator to iterate over Outlook objects via Python. The issue is that Python tries to iterate starting at index 0, but Outlook expects the first item at index 1... To make it simpler (Ruby-style), the helper class Oli below provides the following
methods:
.items() - yields a tuple (index, Item)
.prop() - helps to introspect an Outlook object by exposing its available properties (methods and attributes)
```
from win32com.client import constants
from win32com.client.gencache import EnsureDispatch as Dispatch
outlook = Dispatch("Outlook.Application")
mapi = outlook.GetNamespace("MAPI")
class Oli():
def __init__(self, outlook_object):
self._obj = outlook_object
def items(self):
array_size = self._obj.Count
for item_index in xrange(1,array_size+1):
yield (item_index, self._obj[item_index])
def prop(self):
return sorted( self._obj._prop_map_get_.keys() )
for inx, folder in Oli(mapi.Folders).items():
# iterate all Outlook folders (top level)
print "-"*70
print folder.Name
for inx,subfolder in Oli(folder.Folders).items():
print "(%i)" % inx, subfolder.Name,"=> ", subfolder
```
| 10,169
|
32,150,849
|
I'm writing a simple Flask app, with the sole purpose of learning Python and MongoDB.
I've managed to reach the point where all the collections are defined and CRUD operations work in general. Now, one thing that I really want to understand is how to refresh the collection after updating its structure. For example, say that I have the following `model`:
user.py
=======
```
class User(db.Document, UserMixin):
email = db.StringField(required=True, unique=True)
password = db.StringField(required=True)
active = db.BooleanField()
first_name = db.StringField(max_length=64, required=True)
last_name = db.StringField(max_length=64, required=True)
registered_at = db.DateTimeField(default=datetime.datetime.utcnow())
confirmed = db.BooleanField()
confirmed_at = db.DateTimeField()
last_login_at = db.DateTimeField()
current_login_at = db.DateTimeField()
last_login_ip = db.StringField(max_length=45)
current_login_ip = db.StringField(max_length=45)
login_count = db.IntField()
companies = db.ListField(db.ReferenceField('Company'), default=[])
roles = db.ListField(db.ReferenceField(Role), default=[])
meta = {
'indexes': [
{'fields': ['email'], 'unique': True}
]
}
```
Now, I already have entries in my `user` collection, but I want to change `companies` to:
```
company = db.ReferenceField('Company')
```
How can I refresh the collection's structure, without having to bring the whole database down?
----------------------------------------------------------------------------------------------
I do have a `manage.py` script that helps me and also provides a shell:
```
#!/usr/bin/python
from flask.ext.script import Manager
from flask.ext.script.commands import Shell
from app import factory
app = factory.create_app()
manager = Manager(app)
manager.add_command("shell", Shell(use_ipython=True))
# manager.add_command('run_tests', RunTests())
if __name__ == "__main__":
manager.run()
```
and I have tried a couple of commands, from information that I could recompile and out of my basic knowledge:
```
>>> from app.models import db, User
>>> import mongoengine
>>> mongoengine.Document(User)
field = iter(self._fields_ordered)
AttributeError: 'Document' object has no attribute '_fields_ordered'
>>> mongoengine.Document(User).modify() # well, same result as above
```
Any pointers on how to achieve this?
Update
------
I am asking all of this because I have updated my `user.py` to match my new requirements, but any time I interact with the db itself, since the collection's structure was not refreshed, I get the following error:
>
> FieldDoesNotExist: The field 'companies' does not exist on the
> document 'User', referer: <http://local.faqcolab.com/company>
>
>
>
|
2015/08/21
|
[
"https://Stackoverflow.com/questions/32150849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971392/"
] |
The solution is easier than I expected:
```
db.getCollection('user').update(
// query
{},
// update
{
$rename: {
'companies': 'company'
}
},
// options
{
"multi" : true, // update all documents
"upsert" : false // insert a new document, if no existing document match the query
}
);
```
Explanation for each of the `{}`:
* First is empty because I want to update all documents in `user` collection.
* Second contains [`$rename`](http://docs.mongodb.org/manual/reference/operator/update/rename/) which is the invoking action to rename the fields I want.
* Last contains aditional settings for the query to be executed.
|
>
> I have updated my `user.py` to match my new requests, but anytime I interact with the db its self, since the table's structure was not refreshed, I get the following error
>
>
>
MongoDB does not have a "table structure" like relational databases do. After a document has been inserted, you can't change its schema by changing the document model.
I don't want to sound like I'm telling you that the answer is to use different tools, but seeing things like `db.ListField(db.ReferenceField('Company'))` makes me think you'd be much better off with a relational database (Postgres is well supported in the Flask ecosystem).
Mongo works best for storing schema-less documents (you don't know before hand how your data is structured, or it varies significantly between documents). Unless you have data like that, it's worth looking at other options. Especially since you're just getting started with Python and Flask, there's no point in making things harder than they are.
| 10,174
|
66,684,265
|
The case is: I want to reverse-slice a Python list down to `n`, like this:
```
n = 3
l = [1,2,3,4,5,6]
s = l[5:n:-1] # s is [6, 5]
```
OK, it works, but how can I set `n`'s value to select the whole list?
Let's see this example; what I expect for the first line is `[5, 4, 3, 2, 1]`:
```
[40]: for i in range(-1, 5):
...: print(l[4:i:-1])
...:
[]
[5, 4, 3, 2]
[5, 4, 3]
[5, 4]
[5]
[]
```
If the upper bound n is set to 0, the result will miss index `0`; but if n is `-1`, the result is empty because `-1` means "the last one".
The only way I can do is:
```
if n < 0:
s = l[5::-1]
else:
s = l[5:n:-1]
```
a bit confusing.
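The closest thing to a one-liner I've found is exploiting the fact that `None` as a slice bound means "no bound" (a sketch equivalent to the if/else above):

```
s = l[5:(n if n >= 0 else None):-1]
# n = -1 -> l[5:None:-1] -> [6, 5, 4, 3, 2, 1]
```

but that's hardly less confusing.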
|
2021/03/18
|
[
"https://Stackoverflow.com/questions/66684265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2955827/"
] |
I was led to the answer by <https://github.com/JeffreyWay/laravel-mix/issues/2896>;
they seem to have upgraded the syntax. The documentation can be found here:
<https://github.com/JeffreyWay/laravel-mix/blob/467f0c9b01b7da71c519619ba8b310422321e0d6/UPGRADE.md#vue-configuration>
|
When I tried your solution, it did not work.
Here is how I managed to fix the issue, using the new/changed property `additionalData` (in previous versions this property was called `data` or `prependData`):
```
mix.webpackConfig({
module: {
rules: [
{
test: /\.scss$/,
use: [
{
loader: "sass-loader",
options: {
additionalData: `@import "@/_variables.scss";
@import "@/_mixins.scss";
@import "@/_extends.scss";
@import "@/_general.scss";`
},
},
],
}
]
},
resolve: {
alias: {
'@': path.resolve('resources/sass'),
'~': path.resolve('/resources/js'),
'Vue': 'vue/dist/vue.esm-bundler.js',
}
},
output: {
chunkFilename: '[name].js?id=[chunkhash]',
},
});
mix.js('resources/js/app.js', 'public/js').vue()
.sass('resources/sass/app.scss', 'public/css')
.copyDirectory('resources/images', 'public/images');
if (mix.inProduction()) {
mix.version();
}
```
Note that the alias for the path 'resources/sass' is "@".
I hope this saved someone some time :)
| 10,175
|
67,792,538
|
I'm writing a Python script which reads emails from Outlook and then extracts the body.
The problem is that when it reads a reply, the body contains the previous emails.
Is there a way to avoid that and extract only the body of the current email?
This is a part of my code :
```
import requests
import json
import base64
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders.Item("UGAP-AMS-L2")
inbox = folder.Folders.Item("Inbox")
mails = inbox.Items
mails.Sort("[ReceivedTime]", False)
for mail in mails:
if mail.UnRead == True :
print(" mail.Body")
```
This is what I get:
*-Email body of the current email-*
*De : "tracker@gmail.fr" tracker@gmail.fr*
*Date : vendredi 21 mai 2021 à 08:44*
*À : Me Me@outlook.com*
*Objet : object*
*-body of previous email-*
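One workaround I'm considering is to cut the body at the first quoted-reply header - a fragile sketch (the "De :" marker is locale-dependent, taken from the output above):

```
body = mail.Body
current_part = body.split("\nDe : ")[0]  # keep only the text before the quoted header
print(current_part)
```

but I'd prefer a cleaner way, if one exists.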
|
2021/06/01
|
[
"https://Stackoverflow.com/questions/67792538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15268168/"
] |
Try this solution that uses `zoo` (and `dplyr`, which I'm inferring you're already using):
```r
library(dplyr)
eg <- expand.grid(Sample.Type = unique(dat$Sample.Type),
date = seq(min(dat$date), max(dat$date), by = "day"),
stringsAsFactors = FALSE)
dat %>%
mutate(a=TRUE) %>%
full_join(eg, by = c("Sample.Type", "date")) %>%
mutate(a=!is.na(a)) %>%
arrange(date) %>%
group_by(Sample.Type) %>%
mutate(last7 = zoo::rollapplyr(a, 7, any, partial = TRUE)) %>%
select(-a) %>%
ungroup() %>%
print(n=99)
# # A tibble: 29 x 3
# Sample.Type date last7
# <chr> <date> <lgl>
# 1 A 2020-10-05 TRUE
# 2 B 2020-10-05 TRUE
# 3 A 2020-10-06 TRUE
# 4 B 2020-10-06 TRUE
# 5 B 2020-10-06 TRUE
# 6 B 2020-10-06 TRUE
# 7 A 2020-10-07 TRUE
# 8 B 2020-10-07 TRUE
# 9 A 2020-10-08 TRUE
# 10 B 2020-10-08 TRUE
# 11 A 2020-10-09 TRUE
# 12 B 2020-10-09 TRUE
# 13 A 2020-10-10 TRUE
# 14 B 2020-10-10 TRUE
# 15 A 2020-10-11 TRUE
# 16 A 2020-10-11 TRUE
# 17 B 2020-10-11 TRUE
# 18 A 2020-10-12 TRUE
# 19 B 2020-10-12 TRUE
# 20 A 2020-10-13 TRUE
# 21 B 2020-10-13 FALSE
# 22 A 2020-10-14 TRUE
# 23 B 2020-10-14 FALSE
# 24 A 2020-10-15 TRUE
# 25 B 2020-10-15 FALSE
# 26 A 2020-10-16 TRUE
# 27 B 2020-10-16 FALSE
# 28 A 2020-10-17 TRUE
# 29 B 2020-10-17 FALSE
```
Data
```r
dat <- structure(list(Sample.Type = c("A", "B", "A", "B", "B", "B", "A", "A", "A", "A", "A", "A"), date = structure(c(18540, 18540, 18541, 18541, 18541, 18541, 18545, 18546, 18546, 18550, 18551, 18552), class = "Date")), row.names = c(NA, -12L), class = "data.frame")
```
|
You just need to `lag()` while grouping by `Sample.Type`.
1. Toy dataset. I just added a third Sample.Type
```r
library(dplyr)
library(lubridate)
typeday <- tibble(
Sample.Type = c("A", "B", "A", "B", "A", "A","B", "C", "C"),
date = as.Date(c("2020-10-05", "2020-10-05", "2020-10-06",
"2020-10-06", "2020-10-11", "2020-10-17",
"2020-10-17", "2020-10-17", "2020-10-18"))
)
typeday
#> # A tibble: 9 x 2
#> Sample.Type date
#> <chr> <date>
#> 1 A 2020-10-05
#> 2 B 2020-10-05
#> 3 A 2020-10-06
#> 4 B 2020-10-06
#> 5 A 2020-10-11
#> 6 A 2020-10-17
#> 7 B 2020-10-17
#> 8 C 2020-10-17
#> 9 C 2020-10-18
```
2. Then, make sure the types and dates are in the right order. Once you group by Sample.Type, check whether the previous date (`lag(date)`) falls within seven days before the current one. From there it is just cleaning up the `sampled` column. You can also arrange by date only, after ungrouping.
```r
typeday %>%
arrange(Sample.Type, date) %>%
group_by(Sample.Type) %>%
mutate(
sampled = lag(date) >= date - days(7),
sampled = case_when(
sampled ~ "yes",
!sampled | is.na(sampled) ~ "no"
)
) %>%
ungroup() %>%
arrange(date)
#> # A tibble: 9 x 3
#> Sample.Type date sampled
#> <chr> <date> <chr>
#> 1 A 2020-10-05 no
#> 2 B 2020-10-05 no
#> 3 A 2020-10-06 yes
#> 4 B 2020-10-06 yes
#> 5 A 2020-10-11 yes
#> 6 A 2020-10-17 yes
#> 7 B 2020-10-17 no
#> 8 C 2020-10-17 no
#> 9 C 2020-10-18 yes
```
Created on 2021-06-01 by the [reprex package](https://reprex.tidyverse.org) (v2.0.0)
| 10,176
|
65,888,118
|
I deployed a Flask app to IIS using FastCGI and a WSGI handler. The steps that I followed are:
1. Created a virtual environment for Python and installed all packages including wfastCGI.
2. Set the Handler mappings and included the FastCGI settings.
3. Assigned the necessary permissions for the folders by adding IIS\_IUSRS and IUSR.
Below is the medium link that I followed in terms of the steps.
<https://medium.com/@dpralay07/deploy-a-python-flask-application-in-iis-server-and-run-on-machine-ip-address-ddb81df8edf3>
The folder structure for the code is as shown below with (checkin\_env) being the virtual environment.
[](https://i.stack.imgur.com/PZUH0.png)
[](https://i.stack.imgur.com/Mv3iH.png)
Fast CGI settings are shown as below with WSGI Handler being `checkFlask.app`
[](https://i.stack.imgur.com/Qqxir.png)
The web.config file which was generated is here.
[](https://i.stack.imgur.com/j7kVM.png)
When I tried running on ports 80 and 5000, I received a permission error related to System32, which I am totally confused about. Any thoughts or inputs are highly appreciated. Thank you.
[](https://i.stack.imgur.com/T0G7e.png)
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65888118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6831630/"
] |
You're using the wrong CSS.
`translateX` describes a state, not an animation. Use `animation` instead.
```css
@keyframes menu-open {
from {width: 0px;}
to {width: 220px;}
}
.open {
animation-name: menu-open;
animation-duration: 1s;
animation-fill-mode: forwards;
}
@keyframes menu-close {
from {width: 220px;}
to {width: 0px;}
}
.close {
animation-name: menu-close;
animation-duration: 1s;
animation-fill-mode: forwards;
}
.header__menu {
position: absolute;
top: 0;
right: 0;
background-color: black;
outline: 1px solid var(--border-accent);
padding: 64px 12px 0 12px;
height: 100vh;
z-index: 9;
}
```
```
import cs from 'classnames';
import React from "react";
import "./styles.css";
export default function App() {
const [showMenu, setShowMenu] = React.useState(false);
const onClick = React.useCallback(() => {
setShowMenu(!showMenu);
}, [showMenu]);
const classnames = cs('header__menu', {
open: showMenu,
close: !showMenu,
})
return (
<div className="App">
<button onClick={onClick}>{showMenu ? "close" : "open"}</button>
{<div className={classnames} />}
</div>
);
}
```
Here is the working [codesandbox](https://codesandbox.io/s/magical-pasteur-y1223?file=/src/App.js:51-414)
|
You should specify both transform properties in the style attribute, like so:
```
style={{transform: this.showMenu ? "translateX(0%)" : "translateX(100%)"}}
```
| 10,177
|
63,899,935
|
I'm writing code that takes a rain time series and saves hourly files for each day in order to feed a hydrological model. Basically, I need to save each file with the hour of the day as two digits, like this:
```
rain_20200101_0000.txt
rain_20200101_0100.txt
...
rain_20200101_0900.txt
rain_20200101_1000.txt
..
rain_20200101_2300.txt
```
But Python doesn't put a zero before numbers between 0-9, so if I use `range(24)` to do this, it will save the first 10 hours like "rain\_20200101\_100.txt".
The solution I found was to put an `if` for x<10 and x>=10 inside the loop over `range(24)` and insert a "0" before the hour in the first case, but that feels crude, and there should be a more efficient way. Could you help me with a simpler solution?
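For reference, this is roughly the crude version I have now (a sketch of the if/else workaround described above):

```
for x in range(24):
    if x < 10:
        hour = "0" + str(x)  # manually pad with a leading zero
    else:
        hour = str(x)
    filename = "rain_20200101_%s00.txt" % hour
```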
|
2020/09/15
|
[
"https://Stackoverflow.com/questions/63899935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335001/"
] |
I experienced the same case, for me, what worked was what has been commented before:
Change the value of 'cwd' property in launch.json to the value of the project directory.
{
...
"cwd": "${workspaceFolder}"
}
to
{
...
"cwd": "${workspaceFolder}/SATestUtils.API"
}
All the credits to Bemm...
[](https://i.stack.imgur.com/HYorb.png)
|
Did you try to specify the asp net environment at the launch.json?
Something like this:
[](https://i.stack.imgur.com/gwMzt.png)
| 10,178
|
60,836,709
|
So after doing some web scraping and turning data frames into lists, I want to compare one list to another that I have created myself. If one list doesn't have a value from the other, I want it added in the exact position it has in the list I'm comparing against.
For example, say I'm comparing one list of snacks with another.
I want to make sure List2 ends up looking exactly like List1. Another thing I want to do is add the price of the missing item to ListPrices as "Nan", since I don't have an existing value, and I want it all done in one for/if statement.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
for i in range(List1):
if List1[i] != List2[i]:
List2.insert([i-1], List1[i])
ListPrices.insert([i-1], Nan)
print(List2)
```
Above is the Python script, but I seem to be stuck at a dead end as to how to do this without generating errors. I'm hoping to get the output looking exactly like this:
```
List2 = ['apples', 'bananas', 'cookies', 'soda']
ListPrices = [1, Nan, 3, 1]
```
Any suggestions are truly appreciated.
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60836709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11583556/"
] |
Assuming that prices in `ListPrices` are consistent, i.e. if `List2` had two bananas, both would have the same price, you can create a `dict` by `zip`ing `List2` and `ListPrices`, and then look up the price for each item of `List1` in that dict, using `nan` as the default.
```
prices = dict(zip(List2, ListPrices))
# {'apples': 1, 'cookies': 3, 'soda': 1}
List2 = List1[:] # copy, so List2 matches List1
ListPrices = [prices.get(x, float("nan")) for x in List1]
# [1, nan, 3, 1]
```
|
You can convert it to a dictionary for a simple look up
```
d = dict(zip(List2,ListPrices))
[d.get(i,None) for i in List1]
```
| 10,179
|
17,980,691
|
I'm having difficulty getting my sizers to work properly in wxPython. I am trying to build a simple layout: one horizontal bar at the top (with text in it) and two vertical boxes below (with a grid sizer inside each - the left one should only be 2 columns!). I also want everything in the image to stretch and fit my panel (with the ability to add padding to the sides and top/bottom). 
I have two main issues:
1. I can't get the text in the horizontal bar to be in the middle (it goes to the left)
2. I would like the two vertical boxes to span AND fit the page appropriately (and would also like the grids to span better).
Here is my code (with some parts omitted):
```
self.LeagueInfoU = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoL = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoR = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
vbox2a = wx.GridSizer(12,2,0,0)
vbox3a = wx.GridSizer(10,3,0,0)
hbox1a = wx.BoxSizer(wx.VERTICAL)
vbox2 = wx.BoxSizer(wx.VERTICAL)
vbox3 = wx.BoxSizer(wx.VERTICAL)
hbox1.Add(self.LeagueInfoU, 1, wx.EXPAND | wx.ALL, 3)
vbox2.Add(self.LeagueInfoL, 1, wx.EXPAND | wx.ALL, 3)
vbox3.Add(self.LeagueInfoR, 1, wx.EXPAND | wx.ALL, 3)
vbox2a.AddMany([this is all correct])
self.LeagueInfoL.SetSizer(vbox2a)
vbox3a.AddMany([this is all correct])
self.LeagueInfoR.SetSizer(vbox3a)
font = wx.Font(20, wx.DEFAULT, wx.NORMAL, wx.BOLD)
self.Big_Header = wx.StaticText(self.LeagueInfoU, -1, 'Testing This')
self.Big_Header.SetFont(font)
hbox1a.Add(self.Big_Header, 0, wx.ALIGN_CENTER|wx.ALIGN_CENTER_VERTICAL)
self.LeagueInfoU.SetSizer(hbox1a)
hbox2.Add(vbox2, 0, wx.EXPAND)
hbox2.Add(vbox3, 0, wx.EXPAND)
vbox.Add(hbox1, 0, wx.EXPAND)
vbox.Add(hbox2, 1, wx.EXPAND)
self.LeagueInfo.SetSizer(vbox)
```
|
2013/07/31
|
[
"https://Stackoverflow.com/questions/17980691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936911/"
] |
Is this what you're after?

```
import wx
class Frame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
self.panel = wx.Panel(self)
main_sizer = wx.BoxSizer(wx.VERTICAL)
# Title
self.centred_text = wx.StaticText(self.panel, label="Title")
main_sizer.Add(self.centred_text, 0, wx.ALIGN_CENTRE | wx.ALL, 3)
# Grids
content_sizer = wx.BoxSizer(wx.HORIZONTAL)
grid_1 = wx.GridSizer(12, 2, 0, 0)
grid_1.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(24))
content_sizer.Add(grid_1, 1, wx.EXPAND | wx.ALL, 3)
grid_2 = wx.GridSizer(10, 3, 0, 0)
grid_2.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(30))
content_sizer.Add(grid_2, 1, wx.EXPAND | wx.ALL, 3)
main_sizer.Add(content_sizer, 1, wx.EXPAND)
self.panel.SetSizer(main_sizer)
self.Show()
if __name__ == "__main__":
app = wx.App(False)
Frame(None)
app.MainLoop()
```
|
something like this??
```
import wx
class MyFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self,None,-1,"Test Stretching!!")
p1 = wx.Panel(self,-1,size=(500,100))
p1.SetMinSize((500,100))
p1.SetBackgroundColour(wx.GREEN)
hsz = wx.BoxSizer(wx.HORIZONTAL)
p2 = wx.Panel(self,-1,size=(200,400))
p2.SetMinSize((200,400))
p2.SetBackgroundColour(wx.RED)
p3 = wx.Panel(self,-1,size=(300,400))
p3.SetMinSize((300,400))
p3.SetBackgroundColour(wx.BLUE)
hsz.Add(p2,1,wx.EXPAND)
hsz.Add(p3,1,wx.EXPAND)
sz = wx.BoxSizer(wx.VERTICAL)
sz.Add(p1,0,wx.EXPAND)
sz.Add(hsz,1,wx.EXPAND)
self.SetSizer(sz)
self.Layout()
self.Fit()
a = wx.App(redirect=False)
f = MyFrame()
f.Show()
a.MainLoop()
```
| 10,182
|
45,876,059
|
I have this server
<https://github.com/crossbario/autobahn-python/blob/master/examples/twisted/websocket/echo_tls/server.py>
And I want to connect to the server with this code:
```
ws = create_connection("wss://127.0.0.1:9000")
```
What options do I need to add to `create_connection`? Adding `sslopt={"cert_reqs": ssl.CERT_NONE}` does not work:
```
websocket._exceptions.WebSocketBadStatusException: Handshake status 400
```
|
2017/08/25
|
[
"https://Stackoverflow.com/questions/45876059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1238675/"
] |
This works
```
import asyncio
import websockets
import ssl
async def hello():
async with websockets.connect('wss://127.0.0.1:9000',ssl=ssl.SSLContext(protocol=ssl.PROTOCOL_TLS)) as websocket:
data = 'hi'
await websocket.send(data)
print("> {}".format(data))
response = await websocket.recv()
print("< {}".format(response))
asyncio.get_event_loop().run_until_complete(hello())
```
|
For me the option from the question seems to work:
```
from websocket import create_connection
import ssl
ws = create_connection("wss://echo.websocket.org", sslopt={"cert_reqs": ssl.CERT_NONE})
ws.send("python hello!")
print (ws.recv())
ws.close()
```
See also here:
<https://github.com/websocket-client/websocket-client#faq>
Note: I'm using Win7 with Python 3.6.5 and the following packages installed (pip):
* simple-websocket-server==0.4.0
* websocket-client==0.53.0
* websockets==6.0
| 10,184
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this, but can anybody give me a more Pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
```
|
```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
```
| 10,185
|
45,045,147
|
I'm trying to migrate a table with SQLAlchemy Migrate, but I'm getting this error:
```
sqlalchemy.exc.UnboundExecutionError: Table object 'responsibles' is not bound to an Engine or Connection. Execution can not proceed without a database to execute against.
```
When I run:
```
python manage.py test
```
This is my migration file:
```
from sqlalchemy import *
from migrate import *
meta = MetaData()
responsibles = Table(
'responsibles', meta,
Column('id', Integer, primary_key=True),
Column('breakdown_type', String(255)),
Column('breakdown_name', String(500)),
Column('email', String(255)),
Column('name', String(255)),
)
def upgrade(migrate_engine):
# Upgrade operations go here. Don't create your own engine; bind
# migrate_engine to your metadata
responsibles.create()
def downgrade(migrate_engine):
# Operations to reverse the above upgrade go here.
responsibles.drop()
```
|
2017/07/11
|
[
"https://Stackoverflow.com/questions/45045147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1934510/"
] |
Did you create your engine? Like this:
```
engine = create_engine('sqlite:///:memory:')
```
and then do:
```
meta.bind = engine
meta.create_all(engine)
```
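Note that in a sqlalchemy-migrate script you already receive the engine as `migrate_engine`, so a minimal sketch (binding the module-level `meta` from the question) would be:

```
def upgrade(migrate_engine):
    meta.bind = migrate_engine  # bind the engine migrate hands you, don't create your own
    responsibles.create()

def downgrade(migrate_engine):
    meta.bind = migrate_engine
    responsibles.drop()
```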
|
You need to supply `engine` or `connection`
[`sqlalchemy.schema.MetaData.bind`](http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.bind)
For example:
```
engine = create_engine("someurl://")
metadata.bind = engine
```
| 10,192
|
70,596,608
|
In the office we have a file watcher that converts point clouds to .laz files.
We just started working with Revit, but came to the conclusion that it is not possible to import .laz into Revit.
So I googled and found a solution, except it is written in Python and our watcher is in C#.
Below is the Python script.
`<location>/decap.exe –importWithLicence E:\decap text\Building 1\ Building 1`
Is there a way to convert this Python script to C#, or is there maybe another way?
Please let me know.
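For clarity, the "script" above boils down to launching an executable with arguments; in Python that would be something like this sketch (the exact argument splitting is a guess, and `<location>` stays a placeholder):

```
import subprocess

# flag and paths copied from the command above; the long dash may be a
# copy artifact (decap may actually expect "-importWithLicence")
subprocess.call([
    r"<location>/decap.exe",
    "-importWithLicence",
    r"E:\decap text\Building 1",
    "Building 1",
])
```

So the C# side would only need the equivalent process launch.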
|
2022/01/05
|
[
"https://Stackoverflow.com/questions/70596608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10411687/"
] |
Convert the columns to `Date` class and use `difftime`
```
df1$Difference <- with(df1, as.numeric(difftime(as.Date(DeliveryDate),
as.Date(ExpectedDate), units = "days")))
```
---
Or using `tidyverse`
```
library(dplyr)
library(lubridate)
df1 %>%
mutate(Difference = as.numeric(difftime(ymd(DeliveryDate),
ymd(ExpectedDate), units = "days")))
DeliveryDate ExpectedDate Difference
1 2022-01-05 2022-01-07 -2
```
|
First change your dates to lubridate dates; 2022-01-05 would be:
```
library(lubridate)

date1 <- ymd("2022-01-05")
date2 <- ymd("2022-01-07")
diff_days <- difftime(date2, date1, units="days")
```
| 10,193
|
49,154,899
|
I want to create a virtual environment using conda and yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Hitting ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How to solve this problem?
Well... Stack Overflow prompted me to add more details, but I think I've described things clearly; it's a pity Stack Overflow does not support uploading attachments...
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz posted an ideal solution: use the `--no-builds` flag with `conda env export`.
```
conda env export --no-builds > environment.yml
```
However, even after removing build numbers, some packages may still have different version numbers on different OSes. The best way, I think, is to create a separate env yml file for each OS.
Hope this helps.
|
I had a similar issue and was able to work around it. My issue wasn't related to pip but rather because the export platform wasn't the same as the import platform (Ref: nehaljwani's November 2018 answer on <https://github.com/conda/conda/issues/7311>).
@Shixiang Wang's answer above points towards part of the solution. The no-builds argument allows for more flexibility, but some components are still specific to the platform or OS.
Using the no-build export, I was able to identify (from the error message at import time) which libraries were problematic and simply removed them from the YML file. This may not be flawless, but saves a lot of time compared to starting from scratch.
NOTE: I got a `Pip subprocess error` which interrupted the installation at a given library; that could be overcome via a simple `conda install <library>`. From there I could relaunch the import from the YML file.
| 10,194
|
65,822,290
|
I made two components using Python functions and I am trying to pass data between them using files, but I am unable to do so. I want to calculate the sum and then send the answer to the other component using a file. Below is the partial code (the code works without the file passing). Please assist.
```
# Define your components code as standalone python functions:======================
def add(a: float, b: float, f: comp.OutputTextFile(float)) -> NamedTuple(
'AddOutput',
[
('out', comp.OutputTextFile(float))
]):
'''Calculates sum of two arguments'''
sum = a+b
f.write(sum)
from collections import namedtuple
addOutput = namedtuple(
'AddOutput',
['out'])
return addOutput(f) # the metrics will be uploaded to the cloud
def multiply(c:float, d:float, f: comp.InputTextFile(float) ):
'''Calculates the product'''
product = c * d
print(f.read())
add_op = comp.func_to_container_op(add, output_component_file='add_component.yaml')
product_op = comp.create_component_from_func(multiply,
output_component_file='multiple_component.yaml')
@dsl.pipeline(
name='Addition-pipeline',
description='An example pipeline that performs addition calculations.'
)
def my_pipeline(a, b='7', c='4', d='1'):
add_op = pl_comp_list[0]
product_op = pl_comp_list[1]
first_add_task = add_op(a, 4)
second_add_task = product_op(c, d, first_add_task.outputs['out'])
```
|
2021/01/21
|
[
"https://Stackoverflow.com/questions/65822290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13435688/"
] |
Here is a slightly simplified version of your pipeline that I tested and which works.
It doesn't matter what class type you pass to `OutputTextFile` and `InputTextFile`. It'll be read and written as `str`. So this is what you should change:
* While writing to `OutputTextFile`: cast `sum_` from `float to str`
* While reading from `InputTextFile`: cast `f.read()` value from `str to float`
```
import kfp
from kfp import dsl
from kfp import components as comp
def add(a: float, b: float, f: comp.OutputTextFile()):
'''Calculates sum of two arguments'''
sum_ = a + b
f.write(str(sum_)) # cast to str
return sum_
def multiply(c: float, d: float, f: comp.InputTextFile()):
'''Calculates the product'''
in_ = float(f.read()) # cast to float
product = c * d * in_
print(product)
return product
add_op = comp.func_to_container_op(add,
output_component_file='add_component.yaml')
product_op = comp.create_component_from_func(
multiply, output_component_file='multiple_component.yaml')
@dsl.pipeline(
name='Addition-pipeline',
description='An example pipeline that performs addition calculations.')
def my_pipeline(a, b='7', c='4', d='1'):
first_add_task = add_op(a, b)
second_add_task = product_op(c, d, first_add_task.output)
if __name__ == "__main__":
compiled_name = __file__ + ".yaml"
kfp.compiler.Compiler().compile(my_pipeline, compiled_name)
```
|
`('out', comp.OutputTextFile(float))`
This is not really valid. The `OutputTextFile` annotation (and other similar annotations) can only be used in the function parameters. The function return value is only for outputs that you want to output as values (not as files).
Since you already have `f: comp.OutputTextFile(float)`, you can just remove the function's return value altogether. Then you pass the `f` output to the downstream component: `product_op(c, d, first_add_task.outputs['f'])`.
| 10,204
|
41,006,153
|
Using python 3.4.3 or python 3.5.1 I'm surprised to see that:
```
from decimal import Decimal
Decimal('0') * Decimal('123.456789123456')
```
returns:
```
Decimal('0E-12')
```
The worse part is that this specific use case works with float.
Is there anything I can do to make sure the maths work and 0 multiplied by anything returns 0?
|
2016/12/06
|
[
"https://Stackoverflow.com/questions/41006153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6451083/"
] |
`0E-12` actually is `0` (it's short for `0 * 10 ** -12`; since the coefficient is 0, that's still 0), `Decimal` just provides the `E-12` bit to indicate the "confidence level" of the `0`. What you've got will still behave like zero (it's falsy, additive identity, etc.), the only quirk is in how it prints.
If you need formatting to match, you can use the formatting mini-language, or you can call `.normalize()` on the result, which will turn `Decimal('0E-12')` into `Decimal('0')` (the method's purpose is to strip trailing zeroes from the result to produce a minimal canonical form that represents all equal numbers the same way).
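For example (a quick interpreter check):

```
>>> from decimal import Decimal
>>> Decimal('0E-12') == 0
True
>>> (Decimal('0') * Decimal('123.456789123456')).normalize()
Decimal('0')
```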
|
Do the multiplication with floats and convert to decimal afterwards.
```
x = 0
y = 123.456789123456
Decimal(x*y)
```
returns (on Python 3.5.2):
```
Decimal('0')
```
| 10,205
|
32,407,824
|
I am new to Python programming. I need to read the contents of a CSV file and print values based on a matching criterion. The file contains columns like this:
abc, A, xyz, W
gfk, B, abc, Y, xyz, F
I want to print the contents of the adjacent column based on the matching input string. For example, for the first row, the string abc should print A, xyz should print W, and gfk should print "no match". This should be executed for each row until the end of the file.
I have the following code. However, I don't know how to select the adjacent column.
```
import csv

c = ['abc','xyz','gfk']
with open('filer.csv', 'rt') as csvfile:
my_file = csv.reader(csvfile, delimiter=',')
for row in my_file:
for i in c:
if i in row:
            # print the contents of the adjacent cell  <-- this is the part I'm missing
```
I would appreciate any help in completing this script.
Thank you
|
2015/09/04
|
[
"https://Stackoverflow.com/questions/32407824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298602/"
] |
Your approach made it more difficult to print adjacent values, because even if you used `enumerate` to get the indices, you would have to search the row again, after finding each pattern (after `if i in row:` you wouldn't immediately know where it was in the row). By structuring the data in a dictionary it becomes simpler:
```
import csv

patterns = ['abc','xyz','gfk']
with open('filer.csv') as fh:
    reader = csv.reader(fh, skipinitialspace=True)
    for line in reader:
        print '---'
        content = dict(zip(line[::2], line[1::2]))
        for pattern in patterns:
            print "{}: {}".format(pattern, content.get(pattern, 'no match'))
```
`zip(line[::2], line[1::2])` creates a list of tuples from the adjacent elements of the list, which can be turned into a dictionary, where the patterns you are looking for are keys and the corresponding letters are values.
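For example, with the first row of the file:
```
row = ['abc', 'A', 'xyz', 'W']
print(dict(zip(row[::2], row[1::2])))  # {'abc': 'A', 'xyz': 'W'}
```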
|
You can adapt this for the CSV, but it basically does what you ask for:
```
csv = [['abc', 'A', 'xyz', 'W'],
       ['gfk', 'B', 'abc', 'Y', 'xyz', 'F']]

match_string = ['abc','xyz','gfk']

for row in csv:
    for i, column_content in enumerate(row):
        if column_content in match_string:
            print row[i + 1]  # + 1 for the following column_content 'A', 'W', etc.
```
You should read about slicing : [Explain Python's slice notation](https://stackoverflow.com/questions/509211/explain-pythons-slice-notation)
| 10,211
|
45,622,443
|
I started using Retrofit in my android app consuming RESTful web services.
I managed to get it working with a simple object, but when trying with a more complex object it remains desperately empty. I put OkHttp in debug mode, and the JSON I'm expecting is present in the response, so I think there is a problem during the creation (or the filling) of the object. So what is the best way to debug such a situation? I tried setting breakpoints (not sure where to put them), but did not find anything.
The interesting part of the code is here
```
ClientRequest clientRequest = new ClientRequest(sessionId, 1);
apiInterface = APIClient.getClient().create(APIInterface.class);
Call userCall = apiInterface.getUser(clientRequest);
userCall.enqueue(new Callback() {
    @Override
    public void onResponse(Call call, Response response) {
        if (response.isSuccessful()) {
            Client client = (Client) response.body();
            Log.d("TAG Request", call.request().toString() + "");
            Log.d("TAG Response Code ", response.code() + "");
            Log.d("TAG Response Body ", response.body() + "");
            Log.d("TAG Response Message ", response.message() + "");
            Log.d("TAG Id ", client.getClientId() + "");
            Log.d("TAG Nom de l'Entreprise", client.getCompanyName() + "");
            Log.d("TAG Nom du Contact", client.getContactName() + "");
            Log.d("TAG Numéro client", client.getCustomerNo() + "");
            Log.d("TAG Nom d'utilisateur", client.getUsername() + "");
            Log.d("TAG Ville", client.getCity() + "");
            Log.d("TAG Pays", client.getCountry() + "");
        } else {
            Log.d("error message", response.errorBody().toString());
        }
    }

    @Override
    public void onFailure(Call call, Throwable t) {
        call.cancel();
    }
});
```
The API interface (the login works)
```
public interface APIInterface {

    //@FormUrlEncoded
    @POST("/remote/json.php?login")
    Call<Token> doLogin(@Body Credential credential);

    @POST("/remote/json.php?client_get")
    Call<Client> getUser(@Body ClientRequest clientRequest);
}
```
The client class generated by <http://www.jsonschema2pojo.org/>
```
public class Client {
@SerializedName("client_id")
@Expose
public String clientId;
@SerializedName("sys_userid")
@Expose
public String sysUserid;
@SerializedName("sys_groupid")
@Expose
public String sysGroupid;
@SerializedName("sys_perm_user")
@Expose
public String sysPermUser;
@SerializedName("sys_perm_group")
@Expose
public String sysPermGroup;
@SerializedName("sys_perm_other")
@Expose
public String sysPermOther;
@SerializedName("company_name")
@Expose
public String companyName;
@SerializedName("company_id")
@Expose
public String companyId;
@SerializedName("gender")
@Expose
public String gender;
@SerializedName("contact_firstname")
@Expose
public String contactFirstname;
@SerializedName("contact_name")
@Expose
public String contactName;
@SerializedName("customer_no")
@Expose
public String customerNo;
@SerializedName("vat_id")
@Expose
public String vatId;
@SerializedName("street")
@Expose
public String street;
@SerializedName("zip")
@Expose
public String zip;
@SerializedName("city")
@Expose
public String city;
@SerializedName("state")
@Expose
public String state;
@SerializedName("country")
@Expose
public String country;
@SerializedName("telephone")
@Expose
public String telephone;
@SerializedName("mobile")
@Expose
public String mobile;
@SerializedName("fax")
@Expose
public String fax;
@SerializedName("email")
@Expose
public String email;
@SerializedName("internet")
@Expose
public String internet;
@SerializedName("icq")
@Expose
public String icq;
@SerializedName("notes")
@Expose
public String notes;
@SerializedName("bank_account_owner")
@Expose
public String bankAccountOwner;
@SerializedName("bank_account_number")
@Expose
public String bankAccountNumber;
@SerializedName("bank_code")
@Expose
public String bankCode;
@SerializedName("bank_name")
@Expose
public String bankName;
@SerializedName("bank_account_iban")
@Expose
public String bankAccountIban;
@SerializedName("bank_account_swift")
@Expose
public String bankAccountSwift;
@SerializedName("paypal_email")
@Expose
public String paypalEmail;
@SerializedName("default_mailserver")
@Expose
public String defaultMailserver;
@SerializedName("mail_servers")
@Expose
public String mailServers;
@SerializedName("limit_maildomain")
@Expose
public String limitMaildomain;
@SerializedName("limit_mailbox")
@Expose
public String limitMailbox;
@SerializedName("limit_mailalias")
@Expose
public String limitMailalias;
@SerializedName("limit_mailaliasdomain")
@Expose
public String limitMailaliasdomain;
@SerializedName("limit_mailforward")
@Expose
public String limitMailforward;
@SerializedName("limit_mailcatchall")
@Expose
public String limitMailcatchall;
@SerializedName("limit_mailrouting")
@Expose
public String limitMailrouting;
@SerializedName("limit_mailfilter")
@Expose
public String limitMailfilter;
@SerializedName("limit_fetchmail")
@Expose
public String limitFetchmail;
@SerializedName("limit_mailquota")
@Expose
public String limitMailquota;
@SerializedName("limit_spamfilter_wblist")
@Expose
public String limitSpamfilterWblist;
@SerializedName("limit_spamfilter_user")
@Expose
public String limitSpamfilterUser;
@SerializedName("limit_spamfilter_policy")
@Expose
public String limitSpamfilterPolicy;
@SerializedName("default_webserver")
@Expose
public String defaultWebserver;
@SerializedName("web_servers")
@Expose
public String webServers;
@SerializedName("limit_web_ip")
@Expose
public String limitWebIp;
@SerializedName("limit_web_domain")
@Expose
public String limitWebDomain;
@SerializedName("limit_web_quota")
@Expose
public String limitWebQuota;
@SerializedName("web_php_options")
@Expose
public String webPhpOptions;
@SerializedName("limit_cgi")
@Expose
public String limitCgi;
@SerializedName("limit_ssi")
@Expose
public String limitSsi;
@SerializedName("limit_perl")
@Expose
public String limitPerl;
@SerializedName("limit_ruby")
@Expose
public String limitRuby;
@SerializedName("limit_python")
@Expose
public String limitPython;
@SerializedName("force_suexec")
@Expose
public String forceSuexec;
@SerializedName("limit_hterror")
@Expose
public String limitHterror;
@SerializedName("limit_wildcard")
@Expose
public String limitWildcard;
@SerializedName("limit_ssl")
@Expose
public String limitSsl;
@SerializedName("limit_ssl_letsencrypt")
@Expose
public String limitSslLetsencrypt;
@SerializedName("limit_web_subdomain")
@Expose
public String limitWebSubdomain;
@SerializedName("limit_web_aliasdomain")
@Expose
public String limitWebAliasdomain;
@SerializedName("limit_ftp_user")
@Expose
public String limitFtpUser;
@SerializedName("limit_shell_user")
@Expose
public String limitShellUser;
@SerializedName("ssh_chroot")
@Expose
public String sshChroot;
@SerializedName("limit_webdav_user")
@Expose
public String limitWebdavUser;
@SerializedName("limit_backup")
@Expose
public String limitBackup;
@SerializedName("limit_directive_snippets")
@Expose
public String limitDirectiveSnippets;
@SerializedName("limit_aps")
@Expose
public String limitAps;
@SerializedName("default_dnsserver")
@Expose
public String defaultDnsserver;
@SerializedName("db_servers")
@Expose
public String dbServers;
@SerializedName("limit_dns_zone")
@Expose
public String limitDnsZone;
@SerializedName("default_slave_dnsserver")
@Expose
public String defaultSlaveDnsserver;
@SerializedName("limit_dns_slave_zone")
@Expose
public String limitDnsSlaveZone;
@SerializedName("limit_dns_record")
@Expose
public String limitDnsRecord;
@SerializedName("default_dbserver")
@Expose
public String defaultDbserver;
@SerializedName("dns_servers")
@Expose
public String dnsServers;
@SerializedName("limit_database")
@Expose
public String limitDatabase;
@SerializedName("limit_database_user")
@Expose
public String limitDatabaseUser;
@SerializedName("limit_database_quota")
@Expose
public String limitDatabaseQuota;
@SerializedName("limit_cron")
@Expose
public String limitCron;
@SerializedName("limit_cron_type")
@Expose
public String limitCronType;
@SerializedName("limit_cron_frequency")
@Expose
public String limitCronFrequency;
@SerializedName("limit_traffic_quota")
@Expose
public String limitTrafficQuota;
@SerializedName("limit_client")
@Expose
public String limitClient;
@SerializedName("limit_domainmodule")
@Expose
public String limitDomainmodule;
@SerializedName("limit_mailmailinglist")
@Expose
public String limitMailmailinglist;
@SerializedName("limit_openvz_vm")
@Expose
public String limitOpenvzVm;
@SerializedName("limit_openvz_vm_template_id")
@Expose
public String limitOpenvzVmTemplateId;
@SerializedName("parent_client_id")
@Expose
public String parentClientId;
@SerializedName("username")
@Expose
public String username;
@SerializedName("password")
@Expose
public String password;
@SerializedName("language")
@Expose
public String language;
@SerializedName("usertheme")
@Expose
public String usertheme;
@SerializedName("template_master")
@Expose
public String templateMaster;
@SerializedName("template_additional")
@Expose
public String templateAdditional;
@SerializedName("created_at")
@Expose
public String createdAt;
@SerializedName("locked")
@Expose
public String locked;
@SerializedName("canceled")
@Expose
public String canceled;
@SerializedName("can_use_api")
@Expose
public String canUseApi;
@SerializedName("tmp_data")
@Expose
public String tmpData;
@SerializedName("id_rsa")
@Expose
public String idRsa;
@SerializedName("ssh_rsa")
@Expose
public String sshRsa;
@SerializedName("customer_no_template")
@Expose
public String customerNoTemplate;
@SerializedName("customer_no_start")
@Expose
public String customerNoStart;
@SerializedName("customer_no_counter")
@Expose
public String customerNoCounter;
@SerializedName("added_date")
@Expose
public String addedDate;
@SerializedName("added_by")
@Expose
public String addedBy;
@SerializedName("default_xmppserver")
@Expose
public String defaultXmppserver;
@SerializedName("xmpp_servers")
@Expose
public String xmppServers;
@SerializedName("limit_xmpp_domain")
@Expose
public String limitXmppDomain;
@SerializedName("limit_xmpp_user")
@Expose
public String limitXmppUser;
@SerializedName("limit_xmpp_muc")
@Expose
public String limitXmppMuc;
@SerializedName("limit_xmpp_anon")
@Expose
public String limitXmppAnon;
@SerializedName("limit_xmpp_auth_options")
@Expose
public String limitXmppAuthOptions;
@SerializedName("limit_xmpp_vjud")
@Expose
public String limitXmppVjud;
@SerializedName("limit_xmpp_proxy")
@Expose
public String limitXmppProxy;
@SerializedName("limit_xmpp_status")
@Expose
public String limitXmppStatus;
@SerializedName("limit_xmpp_pastebin")
@Expose
public String limitXmppPastebin;
@SerializedName("limit_xmpp_httparchive")
@Expose
public String limitXmppHttparchive;
public Client() {
Log.d("TAG","Default Constructor");
}
public String getClientId() {
return clientId;
}
public String getSysUserid() {
return sysUserid;
}
public String getSysGroupid() {
return sysGroupid;
}
public String getSysPermUser() {
return sysPermUser;
}
public String getSysPermGroup() {
return sysPermGroup;
}
public String getSysPermOther() {
return sysPermOther;
}
public String getCompanyName() {
return companyName;
}
public String getCompanyId() {
return companyId;
}
public String getGender() {
return gender;
}
public String getContactFirstname() {
return contactFirstname;
}
public String getContactName() {
return contactName;
}
public String getCustomerNo() {
return customerNo;
}
public String getVatId() {
return vatId;
}
public String getStreet() {
return street;
}
public String getZip() {
return zip;
}
public String getCity() {
return city;
}
public String getState() {
return state;
}
public String getCountry() {
return country;
}
public String getTelephone() {
return telephone;
}
public String getMobile() {
return mobile;
}
public String getFax() {
return fax;
}
public String getEmail() {
return email;
}
public String getInternet() {
return internet;
}
public String getIcq() {
return icq;
}
public String getNotes() {
return notes;
}
public String getBankAccountOwner() {
return bankAccountOwner;
}
public String getBankAccountNumber() {
return bankAccountNumber;
}
public String getBankCode() {
return bankCode;
}
public String getBankName() {
return bankName;
}
public String getBankAccountIban() {
return bankAccountIban;
}
public String getBankAccountSwift() {
return bankAccountSwift;
}
public String getPaypalEmail() {
return paypalEmail;
}
public String getDefaultMailserver() {
return defaultMailserver;
}
public String getMailServers() {
return mailServers;
}
public String getLimitMaildomain() {
return limitMaildomain;
}
public String getLimitMailbox() {
return limitMailbox;
}
public String getLimitMailalias() {
return limitMailalias;
}
public String getLimitMailaliasdomain() {
return limitMailaliasdomain;
}
public String getLimitMailforward() {
return limitMailforward;
}
public String getLimitMailcatchall() {
return limitMailcatchall;
}
public String getLimitMailrouting() {
return limitMailrouting;
}
public String getLimitMailfilter() {
return limitMailfilter;
}
public String getLimitFetchmail() {
return limitFetchmail;
}
public String getLimitMailquota() {
return limitMailquota;
}
public String getLimitSpamfilterWblist() {
return limitSpamfilterWblist;
}
public String getLimitSpamfilterUser() {
return limitSpamfilterUser;
}
public String getLimitSpamfilterPolicy() {
return limitSpamfilterPolicy;
}
public String getDefaultWebserver() {
return defaultWebserver;
}
public String getWebServers() {
return webServers;
}
public String getLimitWebIp() {
return limitWebIp;
}
public String getLimitWebDomain() {
return limitWebDomain;
}
public String getLimitWebQuota() {
return limitWebQuota;
}
public String getWebPhpOptions() {
return webPhpOptions;
}
public String getLimitCgi() {
return limitCgi;
}
public String getLimitSsi() {
return limitSsi;
}
public String getLimitPerl() {
return limitPerl;
}
public String getLimitRuby() {
return limitRuby;
}
public String getLimitPython() {
return limitPython;
}
public String getForceSuexec() {
return forceSuexec;
}
public String getLimitHterror() {
return limitHterror;
}
public String getLimitWildcard() {
return limitWildcard;
}
public String getLimitSsl() {
return limitSsl;
}
public String getLimitSslLetsencrypt() {
return limitSslLetsencrypt;
}
public String getLimitWebSubdomain() {
return limitWebSubdomain;
}
public String getLimitWebAliasdomain() {
return limitWebAliasdomain;
}
public String getLimitFtpUser() {
return limitFtpUser;
}
public String getLimitShellUser() {
return limitShellUser;
}
public String getSshChroot() {
return sshChroot;
}
public String getLimitWebdavUser() {
return limitWebdavUser;
}
public String getLimitBackup() {
return limitBackup;
}
public String getLimitDirectiveSnippets() {
return limitDirectiveSnippets;
}
public String getLimitAps() {
return limitAps;
}
public String getDefaultDnsserver() {
return defaultDnsserver;
}
public String getDbServers() {
return dbServers;
}
public String getLimitDnsZone() {
return limitDnsZone;
}
public String getDefaultSlaveDnsserver() {
return defaultSlaveDnsserver;
}
public String getLimitDnsSlaveZone() {
return limitDnsSlaveZone;
}
public String getLimitDnsRecord() {
return limitDnsRecord;
}
public String getDefaultDbserver() {
return defaultDbserver;
}
public String getDnsServers() {
return dnsServers;
}
public String getLimitDatabase() {
return limitDatabase;
}
public String getLimitDatabaseUser() {
return limitDatabaseUser;
}
public String getLimitDatabaseQuota() {
return limitDatabaseQuota;
}
public String getLimitCron() {
return limitCron;
}
public String getLimitCronType() {
return limitCronType;
}
public String getLimitCronFrequency() {
return limitCronFrequency;
}
public String getLimitTrafficQuota() {
return limitTrafficQuota;
}
public String getLimitClient() {
return limitClient;
}
public String getLimitDomainmodule() {
return limitDomainmodule;
}
public String getLimitMailmailinglist() {
return limitMailmailinglist;
}
public String getLimitOpenvzVm() {
return limitOpenvzVm;
}
public String getLimitOpenvzVmTemplateId() {
return limitOpenvzVmTemplateId;
}
public String getParentClientId() {
return parentClientId;
}
public String getUsername() {
return username;
}
public String getPassword() {
return password;
}
public String getLanguage() {
return language;
}
public String getUsertheme() {
return usertheme;
}
public String getTemplateMaster() {
return templateMaster;
}
public String getTemplateAdditional() {
return templateAdditional;
}
public String getCreatedAt() {
return createdAt;
}
public String getLocked() {
return locked;
}
public String getCanceled() {
return canceled;
}
public String getCanUseApi() {
return canUseApi;
}
public String getTmpData() {
return tmpData;
}
public String getIdRsa() {
return idRsa;
}
public String getSshRsa() {
return sshRsa;
}
public String getCustomerNoTemplate() {
return customerNoTemplate;
}
public String getCustomerNoStart() {
return customerNoStart;
}
public String getCustomerNoCounter() {
return customerNoCounter;
}
public String getAddedDate() {
return addedDate;
}
public String getAddedBy() {
return addedBy;
}
public String getDefaultXmppserver() {
return defaultXmppserver;
}
public String getXmppServers() {
return xmppServers;
}
public String getLimitXmppDomain() {
return limitXmppDomain;
}
public String getLimitXmppUser() {
return limitXmppUser;
}
public String getLimitXmppMuc() {
return limitXmppMuc;
}
public String getLimitXmppAnon() {
return limitXmppAnon;
}
public String getLimitXmppAuthOptions() {
return limitXmppAuthOptions;
}
public String getLimitXmppVjud() {
return limitXmppVjud;
}
public String getLimitXmppProxy() {
return limitXmppProxy;
}
public String getLimitXmppStatus() {
return limitXmppStatus;
}
public String getLimitXmppPastebin() {
return limitXmppPastebin;
}
public String getLimitXmppHttparchive() {
return limitXmppHttparchive;
}
}
```
|
2017/08/10
|
[
"https://Stackoverflow.com/questions/45622443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4208537/"
] |
First try sending data to your web services using a client like [Postman](https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop); it is a very useful tool. You can check what your web service is sending to your Android app.
|
I faced the same problem using Retrofit with Kotlin.
It was:
```
data class MtsApprovalResponseData(
    @field:Element(name = "MessageId", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var messageId: String? = null,

    @field:Element(name = "Version", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var version: String? = null
)
```
and I fixed the emptiness by switching to lower-case element names:
```
data class MtsApprovalResponseData(
    @field:Element(name = "messageId", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var messageId: String? = null,

    @field:Element(name = "version", required = false)
    @Namespace(reference = WORKAROUND_NAMESPACE)
    var version: String? = null
)
```
By the way, my Retrofit bean is:
```
@Bean
fun soapRetrofit(): Retrofit {
    val logging = HttpLoggingInterceptor()
    logging.level = HttpLoggingInterceptor.Level.BODY
    val httpClient = OkHttpClient.Builder().addInterceptor(logging).build()
    return Retrofit.Builder()
        .addConverterFactory(SimpleXmlConverterFactory.create())
        .baseUrl(profile.serviceApiUrl)
        .client(httpClient)
        .build()
}
```
| 10,212
|
72,944,672
|
In one directory there are several folders that their names are as follows: 301, 302, ..., 600.
Each of these folders contains two folders with the names `A` and `B`. I need to copy all the image files from the `A` folder of each parent folder into that parent folder itself (e.g. copying the image files from 600>A to the 600 folder) and afterwards remove the `A` and `B` folders of each parent folder. I found the solution from [this post](https://stackoverflow.com/questions/71269014/copy-files-from-a-folder-and-paste-to-another-directory-sub-folders-in-python) but I don't know how to copy the files into the parent folders instead of a sub-folder, how to delete the sub-folders after copying, or how to do this for several folders.
```
import shutil
import os, sys
exepath = sys.argv[0]
directory = os.path.dirname(os.path.abspath(exepath))+"\\Files\\"
credit_folder = os.path.dirname(os.path.abspath(exepath))+"\\Credits\\"
os.chdir(credit_folder)
os.chdir(directory)
Source = credit_folder
Target = directory
files = os.listdir(Source)
folders = os.listdir(Target)
for file in files:
    SourceCredits = os.path.join(Source, file)
    for folder in folders:
        TargetFolder = os.path.join(Target, folder)
        shutil.copy2(SourceCredits, TargetFolder)
print(" \n ===> Credits Copy & Paste Sucessfully <=== \n ")
```
|
2022/07/11
|
[
"https://Stackoverflow.com/questions/72944672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11140344/"
] |
I recommend using [pathlib](https://docs.python.org/3/library/pathlib.html).
```
from pathlib import Path
import shutil
from tqdm import tqdm
folder_to_be_sorted = Path("/your/path/to/the/folder")
for folder_named_number_i in tqdm(list(folder_to_be_sorted.iterdir())):
    # folder_named_number_i is 301, 302, ..., 600
    A_folder = folder_named_number_i / "A"
    B_folder = folder_named_number_i / "B"
    # move files
    for image_i in A_folder.iterdir():
        shutil.move(str(image_i), folder_named_number_i)
    # remove directories
    shutil.rmtree(str(A_folder))
    shutil.rmtree(str(B_folder))
```
`os.path` is a lower-level module. I post another version here since you are using the `os` module in your question.
```
import shutil
import os
from tqdm import tqdm
folder_to_be_sorted = "/your/path/to/the/folder"
for folder_named_number_name in tqdm(os.listdir(folder_to_be_sorted)):
    folder_named_number_i = os.path.join(folder_to_be_sorted, folder_named_number_name)
    # folder_named_number_i is 301, 302, ..., 600
    A_folder = os.path.join(folder_named_number_i, "A")
    B_folder = os.path.join(folder_named_number_i, "B")
    # move files
    for image_i_name in os.listdir(A_folder):
        image_i = os.path.join(A_folder, image_i_name)
        shutil.move(str(image_i), folder_named_number_i)
    # remove directories
    shutil.rmtree(str(A_folder))
    shutil.rmtree(str(B_folder))
```
The code above assumes you want to transform
```
# /your/path/to/the/folder
# │
# └───301
# │ │
# │ └───A
# │ │ └───image_301_A_1.png
# │ │ └───image_301_A_2.png
# │ │ └───image_301_A_3.png
# │ │ └───...(other images)
# │ │
# │ └───B
# │ └───image_301_B_1.png
# │ └───image_301_B_2.png
# │ └───image_301_B_3.png
# │ └───...(other images)
# │
# └───302(like 301)
# :
# :
# └───600(like 301)
```
to:
```
# /your/path/to/the/folder
# │
# └───301
# │ │
# │ └───image_301_A_1.png
# │ └───image_301_A_2.png
# │ └───image_301_A_3.png
# │ └───...(other images in folder 301/A/)
# │
# └───302(like 301)
# :
# :
# └───600(like 301)
```
|
@hellohawii gave an excellent answer. The following code also works; you only need to change the value of *Source* when using it.
```
import shutil
import os, sys
from tqdm import tqdm

exepath = sys.argv[0]  # current path of the script
Source = os.path.dirname(os.path.abspath(exepath)) + "\\Credits\\"  # path containing the folders 301, 302, ..., 600
# Source = your_path_of_folders

def del_files0(dir_path):
    """delete a full folder"""
    shutil.rmtree(dir_path)

for folder_name in tqdm(os.listdir(Source)):  # 301, 302, ..., 600
    numbered_folder = os.path.join(Source, folder_name)
    if not os.path.isdir(numbered_folder):
        continue
    a_folder = os.path.join(numbered_folder, 'A')  # the folder whose images we keep
    b_folder = os.path.join(numbered_folder, 'B')
    for file_name in os.listdir(a_folder):
        # copy with the full source path, not just the bare file name
        shutil.copy(os.path.join(a_folder, file_name), numbered_folder)
    del_files0(a_folder)  # delete both sub-folders afterwards
    del_files0(b_folder)

print(" \n ===> Credits Copy & Paste & delete Successfully <=== \n ")
```
| 10,213
|
67,707,605
|
I am trying to use python to place orders through the TWS API. My problem is getting the next valid order ID.
Here is what I am using:
```
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.common import TickerId
from ibapi import contract, order, common
from threading import Thread

class ib_class(EWrapper, EClient):

    def __init__(self, addr, port, client_id):
        EClient.__init__(self, self)
        self.connect(addr, port, client_id)  # Connect to TWS
        thread = Thread(target=self.run, daemon=True)  # Launch the client thread
        thread.start()

    def error(self, reqId: TickerId, errorCode: int, errorString: str):
        if reqId > -1:
            print("Error. Id: ", reqId, " Code: ", errorCode, " Msg: ", errorString)

    def nextValidId(self, orderId: int):
        self.nextValidId = orderId

ib_api = ib_class("127.0.0.1", 7496, 1)
orderID = ib_api.nextValidId(0)
```
And this gives me :
```
TypeError: 'int' object is not callable
```
|
2021/05/26
|
[
"https://Stackoverflow.com/questions/67707605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15524510/"
] |
The `nextValidId` method is a wrapper method. From the client, you need to call `reqIds` to get an Order ID.
```
ib_api.reqIds(-1)
```
The parameter of `reqIds` doesn't matter. Also, it doesn't have a return value. Instead, `reqIds` sends a message to IB, and when the response is received, the wrapper's `nextValidId` method will be called.
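A minimal sketch of one way to capture the id, assuming the standard `ibapi` package: store it from the callback (instead of overwriting the method, as in the question) and block on an event until it arrives.
```
import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class IBApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.order_id_ready = threading.Event()
        self.next_order_id = None

    def nextValidId(self, orderId: int):
        self.next_order_id = orderId  # store the value; keep the method intact
        self.order_id_ready.set()

app = IBApp()
app.connect("127.0.0.1", 7496, clientId=1)
threading.Thread(target=app.run, daemon=True).start()

app.reqIds(-1)              # triggers the nextValidId callback
app.order_id_ready.wait(5)  # block for up to 5 seconds
print("next valid order id:", app.next_order_id)
```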
|
Add some delay after `reqIds`
-----------------------------
In the wrapper class that receives updates from `ibapi`,
override the `nextValidId` function; it is the callback through which `ib_api.reqIds` responds.
```
from ibapi.wrapper import iswrapper

@iswrapper
def nextValidId(self, orderId: int):
    super().nextValidId(orderId)
    print("setting nextValidOrderId: %d" % orderId)
    self.nextValidOrderId = orderId
```
Then add some delay after calling `reqIds`, since the TWS API might take some time to respond, and read the value via `ib_api.nextValidOrderId`:
```
ib_api.reqIds(ib_api.nextValidOrderId)
time.sleep(3)
print("Next Order ID: ", ib_api.nextValidOrderId)
```
Note: `ib_api` holds the object of the above wrapper class.
| 10,214
|
68,164,039
|
I'm a python beginner and I'm trying to solve a cubic equation with two independent variables and one dependent variable. Here is the equation, which I am trying to solve for v:
3pv^3−(p+8t)v^2+9v−3=0
If I set p and t as individual values, I can solve the equation with my current code. But what I would like to do is set a single t and solve for a range of p values. I have tried making p an array, but when I run my code, the only return I get is "[]". I believe this is an issue with the way I am trying to print the solutions, i.e. I'm not asking the program to print the solutions as an array, but I'm not sure how to do this.
Here's my code thus far:
```
# cubic equation for solution to reduced volume
## define variables ##
# gas: specify
tc = 300
t1 = 315
t = t1/tc
from sympy.solvers import solve
from sympy import Symbol
v = Symbol('v')
import numpy as np
p = np.array(range(10))
print(solve(3*p*(v**3)-(v**2)*(p+8*t)+9*v-3,v))
```
Any hints on how to get the solutions of v for various p (at constant t) printed out properly?
|
2021/06/28
|
[
"https://Stackoverflow.com/questions/68164039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16334402/"
] |
You can simply use a for loop I think. This is what seems to work:
```
for p in range(10):
    solution = solve(3*p*(v**3)-(v**2)*(p+8*t)+9*v-3, v)
    print(solution)
```
Also, note that range(10) goes from 0 to 9 (inclusive)
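If you want to keep the solutions instead of just printing them, one small extension (reusing `v` and `t` from the question):
```
solutions = {p: solve(3*p*(v**3) - (v**2)*(p + 8*t) + 9*v - 3, v)
             for p in range(10)}
print(solutions[3])  # the roots for p = 3
```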
|
Same idea as the previous answer, but I think it's generally good practice to keep imports at the top and to put the equation and the solution on separate lines, etc. It makes the code a bit easier to read.
```
from sympy.solvers import solve
from sympy import Symbol
import numpy as np
tc = 300
t1 = 315
t = t1/tc
v = Symbol('v') # Variable we want to solve
p = np.array(range(10)) # Range of p's to test
for p_i in p:  # Loop through each p, call them p_i
    print(f"p = {p_i}")
    equation = 3*p_i*(v**3)-(v**2)*(p_i+8*t)+9*v-3
    solutions = solve(equation, v)
    print(f"v = {solutions}\n")
```
| 10,215
|
54,853,332
|
I tried using pip install sendgrid, but got this error:
>
> Collecting sendgrid
> Using cached <https://files.pythonhosted.org/packages/24/21/9bea4c51f949497cdce11f46fd58f1a77c6fcccd926cc1bb4e14be39a5c0/sendgrid-5.6.0-py2.py3-none-any.whl>
> Requirement already satisfied: python-http-client>=3.0 in /home/avin/.local/lib/python2.7/site-packages (from sendgrid) (3.1.0)
> Installing collected packages: sendgrid
> Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/sendgrid-5.6.0.dist-info'
> Consider using the `--user` option or check the permissions.
>
>
>
I used the `--user` option as suggested and it ran OK:
>
> Collecting sendgrid
> Using cached <https://files.pythonhosted.org/packages/24/21/9bea4c51f949497cdce11f46fd58f1a77c6fcccd926cc1bb4e14be39a5c0/sendgrid-5.6.0-py2.py3-none-any.whl>
> Requirement already satisfied: python-http-client>=3.0 in /home/avin/.local/lib/python2.7/site-packages (from sendgrid) (3.1.0)
> Installing collected packages: sendgrid
> Successfully installed sendgrid-5.6.0
>
>
>
However, now, when running IPython, I can't `import sendgrid`...
>
> ImportError: No module named sendgrid
>
>
>
**pip -V = pip 19.0.3**
|
2019/02/24
|
[
"https://Stackoverflow.com/questions/54853332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3512538/"
] |
This is a very useful command: `pip install --ignore-installed <package>` (here, `pip install --ignore-installed sendgrid`).
It will make your life easy :)
|
Solved.
It required another package that I missed: `pip install python-HTTP-Client`.
After that I no longer needed the `--user` and the imports worked fine
| 10,216
|
18,564,642
|
So I have been trying to install SimpleCV for some time now. I was finally able to install pygame, but now I have ran into a new error. I have used pip, easy\_install, and cloned the SimpleCV github repository to try to install SimpleCV, but I get this error from all:
```
ImportError: No module named scipy.ndimage
```
If it is helpful, this is the whole error message:
```
Traceback (most recent call last):
  File "/usr/local/bin/simplecv", line 8, in <module>
    load_entry_point('SimpleCV==1.3', 'console_scripts', 'simplecv')()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 318, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2221, in load_entry_point
    return ep.load()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1954, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/Library/Python/2.7/site-packages/SimpleCV/__init__.py", line 3, in <module>
    from SimpleCV.base import *
  File "/Library/Python/2.7/site-packages/SimpleCV/base.py", line 22, in <module>
    import scipy.ndimage as ndimage
ImportError: No module named scipy.ndimage
```
I am sorry if there is a simple solution to this, I have been trying and searching for solutions for well over an hour with no luck.
|
2013/09/02
|
[
"https://Stackoverflow.com/questions/18564642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2364997/"
] |
From my understanding, what you are trying to do is show a custom dialog or alert view and only continue execution based on its return value.
So let's say you display a `UIAlertView` with a `UITextView` inside for adding more information; you should move your `sendEmail` method into the callback of the alert view's button. Similarly, if you go for some custom dialog, you should write a callback from your custom dialog to the main window, and in that callback write the code for sending the email.
I think a callback mechanism like this is the only solution for what you want, and if you want everything written in the same code block, you can make use of [blocks in Objective-C](https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithBlocks/WorkingwithBlocks.html).
|
The flow needs to be designed so that when the email is prompted, everything has been set up and a window with a button created; at that point your code has finished and is not doing anything. Then, when the button is pressed, the next function/method is called, using the data that was prepared beforehand.
| 10,217
|
7,561,640
|
Executive summary: a Python module is linked against a different version of `libstdc++.dylib` than the Python executable. The result is that calls to `iostream` from the module crash.
Backstory
---------
I'm creating a Python module using SWIG on an older computer (running 10.5.8). For various reasons, I am using GCC 4.5 (installed via MacPorts) to do this, using Python 2.7 (installed via MacPorts, compiled using the system-default GCC 4.0.1).
Observed Behavior
-----------------
To make a long story short: calling `str( myObject )` in Python causes the C++ code in turn to call `std::operator<< <std::char_traits<char> >`. This generates the following error:
```
Python(487) malloc: *** error for object 0x69548c: Non-aligned pointer being freed
*** set a breakpoint in malloc_error_break to debug
```
Setting a breakpoint and calling `backtrace` when it fails gives:
```
#0 0x9734de68 in malloc_error_break ()
#1 0x97348ad0 in szone_error ()
#2 0x97e6fdfc in std::string::_Rep::_M_destroy ()
#3 0x97e71388 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string ()
#4 0x97e6b748 in std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::overflow ()
#5 0x97e6e7a0 in std::basic_streambuf<char, std::char_traits<char> >::xsputn ()
#6 0x00641638 in std::__ostream_insert<char, std::char_traits<char> > ()
#7 0x006418d0 in std::operator<< <std::char_traits<char> > ()
#8 0x01083058 in meshLib::operator<< <tranSupport::Dimension<(unsigned short)1> > (os=@0xbfffc628, c=@0x5a3c50) at /Users/sethrj/_code/pytrt/meshlib/oned/Cell.cpp:21
#9 0x01008b14 in meshLib_Cell_Sl_tranSupport_Dimension_Sl_1u_Sg__Sg____str__ (self=0x5a3c50) at /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/mesh_onedPYTHON_wrap.cxx:4439
#10 0x0101d150 in _wrap_Cell_T___str__ (args=0x17eb470) at /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/mesh_onedPYTHON_wrap.cxx:8341
#11 0x002f2350 in PyEval_EvalFrameEx ()
#12 0x002f4bb4 in PyEval_EvalCodeEx ()
[snip]
```
Suspected issue
---------------
I believe the issue to be that my code links against a new version of libstdc++:
```
/opt/local/lib/gcc45/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.14.0)
```
whereas the Python binary has a very indirect dependence on the system `libstdc++`, which loads first (output from `info shared` in gdb):
```
1 dyld - 0x8fe00000 dyld Y Y /usr/lib/dyld at 0x8fe00000 (offset 0x0) with prefix "__dyld_"
2 Python - 0x1000 exec Y Y /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python (offset 0x0)
(objfile is) /opt/local/bin/python
3 Python F 0x219000 dyld Y Y /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Python at 0x219000 (offset 0x219000)
4 libSystem.B.dylib - 0x9723d000 dyld Y Y /usr/lib/libSystem.B.dylib at 0x9723d000 (offset -0x68dc3000)
(commpage objfile is) /usr/lib/libSystem.B.dylib[LC_SEGMENT.__DATA.__commpage]
5 CoreFoundation F 0x970b3000 dyld Y Y /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation at 0x970b3000 (offset -0x68f4d000)
6 libgcc_s.1.dylib - 0x923e6000 dyld Y Y /usr/lib/libgcc_s.1.dylib at 0x923e6000 (offset -0x6dc1a000)
7 libmathCommon.A.dylib - 0x94af5000 dyld Y Y /usr/lib/system/libmathCommon.A.dylib at 0x94af5000 (offset -0x6b50b000)
8 libicucore.A.dylib - 0x97cf4000 dyld Y Y /usr/lib/libicucore.A.dylib at 0x97cf4000 (offset -0x6830c000)
9 libobjc.A.dylib - 0x926f0000 dyld Y Y /usr/lib/libobjc.A.dylib at 0x926f0000 (offset -0x6d910000)
(commpage objfile is) /usr/lib/libobjc.A.dylib[LC_SEGMENT.__DATA.__commpage]
10 libauto.dylib - 0x95eac000 dyld Y Y /usr/lib/libauto.dylib at 0x95eac000 (offset -0x6a154000)
11 libstdc++.6.0.4.dylib - 0x97e3d000 dyld Y Y /usr/lib/libstdc++.6.0.4.dylib at 0x97e3d000 (offset -0x681c3000)
12 _mesh_oned.so - 0x1000000 dyld Y Y /Users/sethrj/_code/_build/pytrt-gcc45DEBUG/meshlib/swig/_mesh_oned.so at 0x1000000 (offset 0x1000000)
13 libhdf5.7.dylib - 0x122c000 dyld Y Y /opt/local/lib/libhdf5.7.dylib at 0x122c000 (offset 0x122c000)
14 libz.1.2.5.dylib - 0x133000 dyld Y Y /opt/local/lib/libz.1.2.5.dylib at 0x133000 (offset 0x133000)
15 libstdc++.6.dylib - 0x600000 dyld Y Y /opt/local/lib/gcc45/libstdc++.6.dylib at 0x600000 (offset 0x600000)
[snip]
```
Note that the `malloc` error occurs in the memory address for the system `libstdc++`, not the one the shared library is linked against.
Attempted resolutions
---------------------
I tried to force MacPorts to build Python using GCC 4.5 rather than the Apple compiler, but the install phase fails because it needs to create a Mac "Framework", which vanilla GCC apparently doesn't do.
Even with the `-static-libstdc++` compiler flag, `__ostream_insert` calls the `std::basic_streambuf` from the system-loaded shared library.
I tried modifying `DYLD_LIBRARY_PATH` by prepending `/opt/local/lib/gcc45/`, but to no avail.
What can I do to get this to work? I'm at my wit's end.
More information
----------------
This problem seems to be [common to mac os x](http://www.google.com/search?client=safari&rls=en&q=xsputn+__ostream_insert+malloc_error_break). Notice how in all of the debug outputs show, the address jumps between the calls to `std::__ostream_insert` and `std::basic_streambuf::xsputn`: it's leaving the new GCC 4.5 code and jumping into the older shared library code in `/usr/bin`. Now, to find a workaround...
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7561640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/118160/"
] |
Solved it. I discovered that this problem is not too uncommon when mixing GCC versions on the mac. After reading [this solution for mpich](http://www.mail-archive.com/libmesh-users@lists.sourceforge.net/msg02756.html) and checking the [mpich source code](http://anonscm.debian.org/gitweb/?p=debian-science/packages/mpich2.git;a=blob_plain;f=configure;hb=HEAD), I found that the solution is to add the following flag to gcc on mac systems:
```
-flat_namespace
```
I am so happy. I wish this hadn't taken me a week to figure out. :)
|
Run Python in GDB and set a breakpoint on `malloc_error_break`. That will show you what's being freed that wasn't allocated. I doubt that this is an ABI error between the versions of libstdc++.
| 10,219
|
70,190,565
|
Kinda long code by complete beginner ahead, please help out
I have a database with the following values:
| Sl.No | trips | sales | price |
| --- | --- | --- | --- |
| 1 | 5 | 20 | 220 |
| 2 | 8 | 30 | 330 |
| 3 | 9 | 45 | 440 |
| 4 | 3 | 38 | 880 |
I am trying to use mysql-connector and python to get the sum of the columns trips, sales and price as variables and use it to do some calculations in the python script.
This is what I have so far:
```
def sum_fun():
    try:
        con = mysql.connector.connect(host='localhost',
                                      database='twitterdb', user='root',
                                      password='mypasword', charset='utf8')
        if con.is_connected():
            cursor = con.cursor(buffered=True)

            def sumTrips():
                cursor.execute("SELECT SUM(trips) FROM table_name")
                sum1 = cursor.fetchall()[0][0]
                return int(sum1)

            def sumSales():
                cursor.execute("SELECT SUM(sales) FROM table_name")
                sum2 = cursor.fetchall()[0][0]
                return int(sum2)

            def sumPrice():
                cursor.execute("SELECT SUM(price) FROM table_name")
                sum3 = cursor.fetchall()[0][0]
                return int(sum3)

    except Error as e:
        print(e)

    cursor.close()
    con.close()
```
I would like to receive the the three sums as three variables `sum_trips`, `sum_sales` and `sum_price` and assign a point system for them such that:
```
trip_points=20*sum_trips
sales_points=30*sum_sales
price_points=40*sum_price
```
And then take the three variables `trip_points, sales_points, prices_points` and insert it into another table in the same database named `Points` with the column names same as variable names.
**I have been trying so hard to find an answer to this for so long. Any help or guidance would be much appreciated. I am a total beginner to most of this stuff so if there's a better way to achieve what I am looking for please do let me know. Thanks**
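For reference, a minimal sketch of one way to do this with `mysql-connector-python`, assuming a `Points` table with columns named after the three variables already exists (table and column names here are taken from the question):
```
import mysql.connector

con = mysql.connector.connect(host='localhost', database='twitterdb',
                              user='root', password='mypassword', charset='utf8')
cursor = con.cursor()

# one round trip for all three sums
cursor.execute("SELECT SUM(trips), SUM(sales), SUM(price) FROM table_name")
sum_trips, sum_sales, sum_price = cursor.fetchone()

trip_points = 20 * int(sum_trips)
sales_points = 30 * int(sum_sales)
price_points = 40 * int(sum_price)

cursor.execute(
    "INSERT INTO Points (trip_points, sales_points, price_points) VALUES (%s, %s, %s)",
    (trip_points, sales_points, price_points),
)
con.commit()
cursor.close()
con.close()
```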
|
2021/12/01
|
[
"https://Stackoverflow.com/questions/70190565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17184842/"
] |
It’s *possible*, but this doesn't look like great design. What happens when a new stage is added? I suspect your issues will disappear if you make a separate "Stage" table with two columns: Stage\_Number and Stage\_Value. Then finding the last filled in stage is a simple MAX query (and the rejection comes from adding one more to that value).
When you hard-wire the stages, you are left with a rather clumsy construct like
```
CASE WHEN stage1 IS NULL THEN 'stage1'
     WHEN stage2 IS NULL THEN 'stage2'
     -- etc.
END
```
which, as I say, is possible, but inelegant and inflexible.
|
You can convert the columns to a JSON structure, then unnest the keys, sort them and pick the first one:
```
select t.*,
       (select x.col
        from jsonb_each_text(jsonb_strip_nulls(to_jsonb(t) - 'id' - 'status')) as x(col, val)
        order by x.col desc
        limit 1) as final_stage
from the_table t
```
Note this won't work correctly if you have more than 9 columns: sorting by name puts `stage10` before `stage2`, so with the above descending sort `stage2` would be returned. If you need that, you have to convert the name to a proper number so that it sorts correctly:
```
order by regexp_replace(x.col, '[^0-9]', '', 'g')::int desc
```
| 10,220
|
24,175,446
|
I'd like to assign each pixel of a mat `matA` to some value according to values of `matB`, my code is a nested for-loop:
```
clock_t begint = clock();
for(size_t i = 0; i < depthImg.rows; i++){
    for(size_t j = 0; j < depthImg.cols; j++){
        datatype px = depthImg.at<datatype>(i, j);
        if(px == 0)
            depthImg.at<datatype>(i, j) = lastDepthImg.at<datatype>(i, j);
    }
}
cout << "~~~~~~~~time: " << clock() - begint << endl;
```
and it costs about 40~70ms for a mat of size 640\*480.
I could do this easily in python numpy using fancy indexing:
```
In [18]: b=np.vstack((np.ones(3), np.arange(3)))
In [19]: b
Out[19]:
array([[ 1., 1., 1.],
[ 0., 1., 2.]])
In [22]: a=np.vstack((np.arange(3), np.zeros(3)))
In [23]: a=np.tile(a, (320, 160))
In [24]: a.shape
Out[24]: (640, 480)
In [25]: b=np.tile(b, (320, 160))
In [26]: %timeit a[a==0]=b[a==0]
100 loops, best of 3: 2.81 ms per loop
```
and this is much faster than my hand writing for-loop.
So is there such operation in opencv c++ api?
|
2014/06/12
|
[
"https://Stackoverflow.com/questions/24175446",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1150712/"
] |
I am unable to replicate your timing results; your C++ code runs in under 1 ms on my machine. However, whenever you have slow iteration, `at<>()` should be immediately suspect. OpenCV has a [tutorial on iterating through images](http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#howtoscanimagesopencv), which I recommend.
However, for the operation you describe, there is a better way. `Mat::copyTo()` allows masked operations:
```
lastDepthImg.copyTo(depthImg, depthImg == 0);
```
This is both faster (about 2x as fast) and far more readable than your nested-loop solution. In addition, it may benefit from hardware optimizations like SSE.
|
In your C++ code, at every pixel you are making a function call, and passing in two indices which are getting converted into a flat index doing something like `i*depthImageCols + j`.
My C++ skills are mostly lacking, but using [this](http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-begin) as a template, I guess you could try something like the following, which should get rid of most of that overhead:
```
MatIterator_<datatype> it1 = depthImg.begin<datatype>(),
                       it1_end = depthImg.end<datatype>();
MatConstIterator_<datatype> it2 = lastDepthImg.begin<datatype>();

for(; it1 != it1_end; ++it1, ++it2) {
    if (*it1 == 0) {
        *it1 = *it2;
    }
}
```
| 10,221
|
19,872,942
|
Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
```
$ python manage.py runserver --settings=mysite.settings.local
```
However, I see this error when I try to do this:
```
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
```
This is how my directory is laid out:
```
├── manage.py
├── media
├── myapp
│ ├── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── mysite
├── __init__.py
├── __init__.pyc
├── settings
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── local.py
│ └── local.pyc
├── urls.py
└── wsgi.py
```
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
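For what it's worth, the error text (`No module named base`) suggests that `local.py` contains something like `from base import *`, while the listing shows no `base.py` in the `settings/` directory. A minimal sketch of the usual split-settings layout, assuming that is the cause:
```
# mysite/settings/base.py -- create this file; shared settings live here

# mysite/settings/local.py
from .base import *  # explicit relative import works on Python 2.7 and 3

DEBUG = True
```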
|
2013/11/09
|
[
"https://Stackoverflow.com/questions/19872942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2744684/"
] |
The underlying value is the same anyway. How you display it is a matter of **formatting**. Formatting in SQL is usually unnecessary. Formatting should be done on the client that receives the data from the database system.
Don't change anything in SQL. Simply format your grid to display the required number of digits.
|
Convert the number on the code side, for example:
```
string.Format("{0:0.##}", 256.583); // "256.58"
string.Format("{0:0.##}", 256.586); // "256.59"
string.Format("{0:0.##}", 256.58); // "256.58"
string.Format("{0:0.##}", 256.5); // "256.5"
string.Format("{0:0.##}", 256.0); // "256"
//===============================
string.Format("{0:0.00}", 256.583); // "256.58"
string.Format("{0:0.00}", 256.586); // "256.59"
string.Format("{0:0.00}", 256.58); // "256.58"
string.Format("{0:0.00}", 256.5); // "256.50"
string.Format("{0:0.00}", 256.0); // "256.00"
//===============================
string.Format("{0:00.000}", 1.2345); // "01.235"
string.Format("{0:000.000}", 12.345); // "012.345"
string.Format("{0:0000.000}", 123.456); // "0123.456"
```
In your case:
```
<asp:TemplateField HeaderText="VatAmount">
    <ItemTemplate>
        <%# string.Format("{0:0.00}", Convert.ToDecimal(Eval("VatAmount"))) %>
    </ItemTemplate>
</asp:TemplateField>
```
| 10,222
|
65,356,299
|
I am running VS Code (Version 1.52) with extensions Jupyter Notebook (2020.12) and Python (2020.12) on MacOS Catalina.
**Context:**
I have problems getting Intellisense to work properly in my Jupyter Notebooks in VS Code. Some have had some success with adding these config parameters to the global settings of VS Code:
```
"python.dataScience.runStartupCommands": [
"%config IPCompleter.greedy=True",
"%config IPCompleter.use_jedi = False"
]
```
I went ahead and added those as well but then had to realize that all settings under `python.dataScience` are `Unknown Configuration Setting`. Any idea why this is and how I could get this to work?
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65356299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5197386/"
] |
Since Nov 2020 the Jupyter extension is separated from the Python extension for VS Code, and the setting key has been renamed from `python.dataScience` to `jupyter` ([announcement](https://devblogs.microsoft.com/python/introducing-the-jupyter-extension-for-vs-code/)).
So in your case, rename `python.dataScience.runStartupCommands` to `jupyter.runStartupCommands`.
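A sketch of the renamed setting in `settings.json`, using the same startup commands as the question:
```
{
    "jupyter.runStartupCommands": [
        "%config IPCompleter.greedy=True",
        "%config IPCompleter.use_jedi = False"
    ]
}
```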
|
According to your description, you could refer to the following:
1. Whether in a "`.py`" file or an "`.ipynb`" file, we can use the shortcut `Ctrl+Space` to open the code suggestions:
[](https://i.stack.imgur.com/K3MlT.png)
2. It is recommended that you use the extension "Pylance", which provides outstanding language services for Python in VS Code; the related content will also be displayed in the Jupyter file:
[](https://i.stack.imgur.com/uU07N.png)
Combining these two methods:
[](https://i.stack.imgur.com/irIR3.png)
As for the setting "python.dataScience.runStartupCommands": since it shows "Unknown Configuration Setting", it is no longer used to configure Jupyter's IntelliSense in VS Code.
| 10,232
|
49,145,059
|
In a dynamic system my base values are all functions of time, `d(t)`. I create the variable `d` using `d = Function('d')(t)` where `t = S('t')`
Obviously it's very common to have derivatives of d (rates of change like velocity etc.). However the default printing of `diff(d(t))` gives:-
```
Derivative(d(t), t)
```
and using pretty printing in ipython (for e.g.) gives a better looking version of:-
```
d/dt (d(t))
```
The functions which include the derivatives of `d(t)` are fairly long in my problems however, and I'd like the printed representation to be something like `d'(t)` or `\dot(d)(t)` (Latex).
Is this possible in sympy? I can probably workaround this using `subs` but would prefer a generic **sympy\_print** function or something I could tweak.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49145059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4443898/"
] |
I do this by substitution. It is horribly stupid, but it works like a charm:
```
q = Function('q')(t)
q_d = Function('\\dot{q}')(t)
```
and then substitute with
```
alias = {q.diff(t):q_d, } # and higher derivatives etc..
hd = q.diff(t).subs(alias)
```
And the output `hd` has a pretty dot over its head!
As I said, this is a work-around, but it works; you just have to be careful to substitute correctly (also for `q_d.diff(t)`, which must become `q_d2` and so on). You can keep one big list with all replacements for printing and apply it after the relevant mathematical steps.
|
The [vector printing](http://docs.sympy.org/latest/modules/physics/vector/api/printing.html) module that you already found is the only place where such printing is implemented in SymPy.
```
from sympy.physics.vector import dynamicsymbols
from sympy.physics.vector.printing import vpprint, vlatex
d = dynamicsymbols('d')
vpprint(d.diff()) # ḋ
vlatex(d.diff()) # '\\dot{d}'
```
The regular printers (pretty, LaTeX, etc) do not support either prime or dot notation for derivatives. Their `_print_Derivative` methods are written so that they also work for multivariable expressions, where one has to specify a variable by using some sort of d/dx notation.
It would be nice to have an option for shorter derivative notation in general.
| 10,233
|
38,645,486
|
I'm sure that this is a pretty simple problem and that I am just missing something incredibly obvious, but the answer to this predicament has eluded me for several hours now.
My project directory structure looks like this:
```
-PhysicsMaterial
-Macros
__init__.py
Macros.py
-Modules
__init__.py
AvgAccel.py
AvgVelocity.py
-UnitTests
__init__.py
AvgAccelUnitTest.py
AvgVelocityUnitTest.py
__init__.py
```
Criticisms aside on my naming conventions and directory structure here, I cannot seem to use relative imports. I'm attempting a relative import of a Modules file to be tested in AvgAccelUnitTest.py:
```
from .Modules import AvgAccel as accel
```
However, I keep getting:
```
ValueError: Attempted relative import in non-package
```
Since I have all of my `__init__.py` files set up throughout my structure, and I also have the top directory added to my PYTHONPATH, I am stumped. Why is python not interpreting the package and importing the file correctly?
|
2016/07/28
|
[
"https://Stackoverflow.com/questions/38645486",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6649949/"
] |
This occurs because you're running the script as `__main__`. When you run a script like this:
```
python /path/to/package/module.py
```
That file is loaded as `__main__`, not as `package.module`, so it can't do relative imports because it isn't part of a package.
This can lead to strange errors where a class defined in your script gets defined twice, once as `__main__.Class` and again as `package.module.Class`, which can cause `isinstance` checks to fail and similar oddities. Because of this, you generally shouldn't run your modules directly.
For your tests, you can remove the `__init__.py` inside the tests directory and just use absolute instead of relative imports. In fact, your tests probably shouldn't be inside your package at all.
Alternatively, you could create a test runner script that imports your tests and runs them.
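A sketch of the module-run route, assuming you launch from the directory that *contains* `PhysicsMaterial` and fix the import to go up one package level (the original `.Modules` points inside `UnitTests`):
```
# PhysicsMaterial/UnitTests/AvgAccelUnitTest.py
from ..Modules import AvgAccel as accel  # two dots: up to PhysicsMaterial

# then run it as a module, not as a script:
#   python -m PhysicsMaterial.UnitTests.AvgAccelUnitTest
```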
|
[How to fix "Attempted relative import in non-package" even with \_\_init\_\_.py](https://stackoverflow.com/questions/11536764/attempted-relative-import-in-non-package-even-with-init-py)
Well, guess it's on to using sys.path.append now. Clap and a half to @BrenBarn, @fireant, and @Ignacio Vazquez-Abrams
| 10,234
|
70,922,321
|
I've been coding with R for quite a while but I want to start learning and using python more for its machine learning applications. However, I'm quite confused as to how to properly install packages and set up the whole working environment. Unlike R where I suppose most people just use RStudio and directly install packages with `install.packages()`, there seems to be a variety of ways this can be done in python, including `pip install` `conda install` and there is also the issue of doing it in the command prompt or one of the IDEs. I've downloaded python 3.8.5 and anaconda3 and some of my most burning questions right now are:
1. When to use which command for installing packages? (and also should I always do it in the command prompt aka cmd on windows instead of inside jupyter notebook)
2. How to navigate the cmd syntax/coding (for example the python documentation for installing packages has this piece of code: `py -m pip install "SomeProject"` but I am completely unfamiliar with this syntax and how to use it; so in the long run do I also have to learn what goes on in the command prompt, or do most of the operations occur in the IDE so that I mostly don't have to touch the cmd?)
3. How to set up a working directory of sorts (like `setwd()` in R) such that my `.ipynb` files can be saved to other directories or even better if I can just directly start my IDE from another file destination?
I've tried looking at some online resources but they mostly deal with coding basics and the python language instead of these technical aspects of the set up, so I would greatly appreciate some advice on how to navigate and set up the python working environment in general. Thanks a lot!
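On point 3, a minimal sketch using only the standard library (the path is hypothetical):
```
import os

print(os.getcwd())             # like getwd() in R
os.chdir(r"C:\Users\me\work")  # like setwd() in R
```
Jupyter saves `.ipynb` files relative to the directory the server was started from, so launching `jupyter notebook` from the desired folder (or with `--notebook-dir=...`) covers that case.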
|
2022/01/31
|
[
"https://Stackoverflow.com/questions/70922321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14540717/"
] |
I had to import the metadata of the `Table` I manually defined.
|
I came to this page because autogenerating migrations no longer worked with the upgrade to SQLAlchemy 1.4, as the metadata were no longer recognized and the automatically generated migration deleted every table (DROP table in upgrade, CREATE TABLE in downgrade).
I first tried to import the tables' metadata like this:
```
target_metadata = [orm.table_company.metadata, orm.table_user.metadata, orm.table_3.metadata, orm.table_4.metadata]
```
It resulted in the following error code:
```
alembic/autogenerate/api.py", line 462, in table_key_to_table
ValueError: Duplicate table keys across multiple MetaData objects: "tb_company", "tb_user", "tb_3", "tb_4"
```
I found that rather than importing one metadata object per table, you can access them all in a single pass with `target_metadata = orm.mapper_registry.metadata`:
SQLAlchemy 1.4
===============
adapters/orm.py
---------------
```py
import uuid

from myapp import domain
from sqlalchemy import Column, ForeignKey, Integer, String, Table, text
from sqlalchemy.dialects.postgresql import UUID  # assumes PostgreSQL, as uuid_generate_v4() suggests
from sqlalchemy.orm import registry, relationship
from sqlalchemy.schema import MetaData

metadata = MetaData()
mapper_registry = registry(metadata=metadata)
# define your tables here
table_user = Table(
"tb_user",
mapper_registry.metadata,
Column("id", Integer, primary_key=True, autoincrement=True),
Column(
"pk_user",
UUID(as_uuid=True),
primary_key=True,
server_default=text("uuid_generate_v4()"),
default=uuid.uuid4,
unique=True,
nullable=False,
),
Column(
"fk_company",
UUID(as_uuid=True),
ForeignKey("tb_company.pk_company"),
),
Column("first_name", String(255)),
Column("last_name", String(255)),
)
# map your domain objects to the tables
def start_mappers():
mapper_registry.map_imperatively(
        domain.User,
table_user,
properties={
"company": relationship(
domain.Company,
backref="users"
)
},
)
```
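A hedged usage note: `start_mappers()` is meant to run once at application start-up, before any ORM use (the entry-point location below is an assumption):
```py
# e.g. in the application's entry point or app factory (hypothetical placement)
from myapp.adapters import orm

orm.start_mappers()  # register the imperative mappings exactly once
```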
alembic/env.py
--------------
```py
from myapp.adapters import orm
# (...)
target_metadata = orm.mapper_registry.metadata
```
SQLAlchemy 1.3
===============
Using classical / imperative mapping, Alembic could generate migrations from SQLAlchemy 1.3 with the following syntax:
adapters/orm.py
---------------
```py
from myapp import domain
# (...)
from sqlalchemy import Table, Column, Integer
from sqlalchemy.orm import mapper, relationship
from sqlalchemy.schema import MetaData
metadata = MetaData()
# define your tables here
table_user = Table(
"tb_user",
metadata,
Column("id", Integer, primary_key=True, autoincrement=True),
# (...)
)
# map your domain objects to the tables
def start_mappers():
user_mapper = mapper(
domain.User,
        table_user,
properties={
"company": relationship(
domain.Company,
backref="users"
),
},
)
```
alembic/env.py
--------------
```py
from myapp.adapters import orm
# (...)
target_metadata = orm.metadata
```
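With `target_metadata` wired up as above (in either version), a typical autogenerate run from the command line looks like this (the revision message is illustrative):
```
alembic revision --autogenerate -m "sync tables"
alembic upgrade head
```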
| 10,235
|
72,640,228
|
I'm trying to replace a single character '°' with '?' in an [edf](https://www.edfplus.info/specs/edf.html) file with binary encoding ([File](https://github.com/warren-manuel/sleep-edf/tree/main/Files)). I need to change all occurrences of it in the first line.
I cannot open the file without specifying binary mode; the following fails with `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 3008: invalid start byte`:
```
with open('heartbeat-baseline-700001.edf') as fin:
lines = fin.readlines()
```
I ended up trying to replace it with this code:
```
with open("heartbeat-baseline-700001.edf", "rb") as text_file:
lines = text_file.readlines()
lines[1] = lines[1].replace(str.encode('°'), str.encode('?'))
for i,line in enumerate(lines):
with open('heartbeat-baseline-700001_python.edf', 'wb') as fout:
fout.write(line)
```
What I end up with is a file that is drastically smaller (7 KB vs. 79 MB) and does not work.
What seems to be the issue with this code? Is there a simpler way to replace the character?
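A hedged sketch of the likely fix: opening the output file with `'wb'` *inside* the loop truncates it on every iteration, so only the last line survives; open it once and write all lines. Note also that `str.encode('°')` uses UTF-8 and yields two bytes (`b'\xc2\xb0'`), while the `0xb0` in the decode error suggests the file stores `°` as a single Latin-1-style byte, so an explicit codec is needed (filenames and the line index are kept from the question):
```py
with open("heartbeat-baseline-700001.edf", "rb") as fin:
    lines = fin.readlines()

# Replace one byte with one byte so the fixed-width EDF header keeps its size
lines[1] = lines[1].replace('°'.encode('latin-1'), b'?')

with open("heartbeat-baseline-700001_python.edf", "wb") as fout:
    fout.writelines(lines)  # write every line, not just the last one
```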
|
2022/06/16
|
[
"https://Stackoverflow.com/questions/72640228",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4126542/"
] |
You need **Advanced Access** to the **business\_management** permission.
You can submit your app for review to request Advanced Access; if it is approved, you can call `/bm-id/adaccount` with the required parameters to create an ad account.
|
Also, you can't create an ad account for a real FB user. It works well with test users, but you need special permissions from FB to do this on a real user's account.
| 10,236
|
7,558,814
|
>
> **Possible Duplicate:**
>
> [Python replace multiple strings](https://stackoverflow.com/questions/6116978/python-replace-multiple-strings)
>
>
>
I am looking to replace `" "` (space), `"\r"`, `"\n"`, `"<"`, `">"`, `"'"` (single quote), and `'"'` (double quote) with `""` (empty). I'm also looking to replace `";"` and `"|"` with `","`.
Would this be handled by `re.search`, since I want to be able to search anywhere in the text, or would I use `re.sub`?
What would be the best way to handle this? I have found bits and pieces, but nothing that handles multiple regexes together.
|
2011/09/26
|
[
"https://Stackoverflow.com/questions/7558814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/454488/"
] |
If you want to remove all occurrences of those characters, just put them all in a character class and do [`re.sub()`](http://docs.python.org/library/re.html#re.sub)
```
your_str = re.sub(r'[ \r\n<>\'"]+', '', your_str)  # includes < and >, per the question
your_str = re.sub(r'[;|]', ',', your_str)
```
You have to call `re.sub()` for every replacement rule.
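When every rule involves single characters, as here, a one-pass alternative worth considering is `str.translate` (a sketch in Python 3; the sample string is illustrative):
```py
# In the table, None deletes a character and a string value replaces it
table = str.maketrans({' ': None, '\r': None, '\n': None, '<': None,
                       '>': None, "'": None, '"': None, ';': ',', '|': ','})
print('a b;c|d\n'.translate(table))  # -> ab,c,d
```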
|
```py
import re

# Group 1 catches runs to delete (now including < and >, per the question);
# group 2 catches runs to turn into a single comma
reg = re.compile('([ \r\n<>\'"]+)|([;|]+)')
ss = 'bo ba\rbu\nbe\'bi"by-ja;ju|jo'

def repl(mat, di={1: '', 2: ','}):
    # mat.lastindex is the number of the group that matched,
    # so a single sub() call applies both replacement rules
    return di[mat.lastindex]

print(reg.sub(repl, ss))  # -> bobabubebiby-ja,ju,jo
```
Note: `|` loses its special meaning inside brackets.
| 10,237