| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
48,402,276
|
I am taking a Udemy course. The problem I am working on is to take two strings and determine if they are 'one edit away' from each other. That means you can make a single change -- change one letter, add one letter, delete one letter -- from one string and have it become identical to the other.
Examples:
```
s1a = "abcde"
s1b = "abfde"
s2a = "abcde"
s2b = "abde"
s3a = "xyz"
s3b = "xyaz"
```
* `s1a` changes the `'c'` to an `'f'`.
* `s2a` deletes `'c'`.
* `s3a` adds `'a'`.
The instructor's solution (and test suite):
```
def is_one_away(s1, s2):
if len(s1) - len(s2) >= 2 or len(s2) - len(s1) >= 2:
return False
elif len(s1) == len(s2):
return is_one_away_same_length(s1, s2)
elif len(s1) > len(s2):
return is_one_away_diff_lengths(s1, s2)
else:
return is_one_away_diff_lengths(s2, s1)
def is_one_away_same_length(s1, s2):
count_diff = 0
for i in range(len(s1)):
if not s1[i] == s2[i]:
count_diff += 1
if count_diff > 1:
return False
return True
# Assumption: len(s1) == len(s2) + 1
def is_one_away_diff_lengths(s1, s2):
i = 0
count_diff = 0
while i < len(s2):
if s1[i + count_diff] == s2[i]:
i += 1
else:
count_diff += 1
if count_diff > 1:
return False
return True
# NOTE: The following input values will be used for testing your solution.
print(is_one_away("abcde", "abcd")) # should return True
print(is_one_away("abde", "abcde")) # should return True
print(is_one_away("a", "a")) # should return True
print(is_one_away("abcdef", "abqdef")) # should return True
print(is_one_away("abcdef", "abccef")) # should return True
print(is_one_away("abcdef", "abcde")) # should return True
print(is_one_away("aaa", "abc")) # should return False
print(is_one_away("abcde", "abc")) # should return False
print(is_one_away("abc", "abcde")) # should return False
print(is_one_away("abc", "bcc")) # should return False
```
When I saw the problem I decided to tackle it using `set()`.
I found this very informative:
[Opposite of set.intersection in python?](https://stackoverflow.com/questions/29947844/opposite-of-set-intersection-in-python)
This is my attempted solution:
```
def is_one_away(s1, s2):
if len(set(s1).symmetric_difference(s2)) <= 1:
return True
if len(set(s1).symmetric_difference(s2)) == 2:
if len(set(s1).difference(s2)) == len(set(s2).difference(s1)):
return True
return False
return False
```
When I run my solution online (you can test within the course itself), I fail on the last test-suite item:
```
False != True :
Input 1: abc
Input 2: bcc
Expected Result: False
Actual Result: True
```
I have tried and tried but I can't get the last test item to work (at least not without breaking a bunch of other stuff).
There is no guarantee that I can solve the full test suite with a `set()`-based solution, but since I am only one item away, I really wanted to see if I could get it done.
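For reference, here is what the failing pair looks like from both views: the set-based check sees only one differing character, while a positional comparison sees two (which is why the expected answer is False):
```
s1, s2 = "abc", "bcc"
print(set(s1).symmetric_difference(s2))     # {'a'} -- the set view suggests a single edit
print(sum(a != b for a, b in zip(s1, s2)))  # 2 -- but two positional substitutions are needed
```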
|
2018/01/23
|
[
"https://Stackoverflow.com/questions/48402276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7535419/"
] |
Here's a solution using differences found by list comprehension.
```
def one_away(s1, s2):
diff1 = [el for el in s1 if el not in s2]
diff2 = [el for el in s2 if el not in s1]
if len(diff1) < 2 and len(diff2) < 2:
return True
return False
```
Unlike a set-based solution, this one doesn't lose vital information about non-unique characters.
|
Here is a solution that is not purely set-based, but uses a set to find the unique character in the two given strings. Each string is treated as a stack: items are popped from both stacks and compared.
When the popped item from the longer string matches the unique character, the longer string is popped again before comparing.
For example, with "pales" and "pale": 's' is the unique character, so when 's' is popped we pop string1 again, and the pair is one away. If more than one popped pair fails to match, we return False.
```py
def one_away(string1: str, string2: str) -> bool:
    string1 = [char for char in string1.replace(" ", "").lower()]
    string2 = [char for char in string2.replace(" ", "").lower()]
if len(string2) > len(string1):
string1, string2 = string2, string1
len_s1, len_s2, unmatch_count = len(string1), len(string2), 0
if len_s1 == len_s2 or len_s1 - len_s2 == 1:
unique_char = list(set(string1) - set(string2)) or ["None"]
if unique_char[0] == "None" and len_s1 - len_s2 == 1:
return True # this is for "abcc" "abc"
while len(string1) > 0:
pop_1, pop_2 = string1.pop(), string2.pop()
if pop_1 == unique_char[0] and len_s1 - len_s2 == 1:
pop_1 = string1.pop()
if pop_1 != pop_2:
unmatch_count += 1
if unmatch_count > 1:
return False
return True
return False
```
Example test:
```py
strings = [("abc","ab"), ("pale", "bale"), ("ples", "pale"), ("palee", "pale"), ("pales", "pale"), ("pale", "ple"), ("abc", "cab"), ("pale", "bake")]
for string in strings:
print(f"{string} one_away: {one_away(string[0]), string[1]}")
```
```
('abc', 'ab') one_away: True
('pale', 'bale') one_away: True
('ples', 'pale') one_away: False
('palee', 'pale') one_away: True
('pales', 'pale') one_away: True
('pale', 'ple') one_away: True
('abc', 'cab') one_away: False
('pale', 'bake') one_away: False
```
|
54,938,607
|
I have already read the answer to this question [Image.open() cannot identify image file - Python?](https://stackoverflow.com/q/19230991/9235408); that question was solved by using `from PIL import Image`, but my situation is different. I am using `image_slicer`, and I am getting these errors:
```
Traceback (most recent call last):
File "image_slice.py", line 17, in <module>
j=image_slicer.slice('file_name' , n_k)
File "/home/user_name/.local/lib/python3.5/site-
packages/image_slicer/main.py", line 114, in slice
im = Image.open(filename)
File "/home/user_name/.local/lib/python3.5/site-packages/PIL/Image.py", line 2687, in open
% (filename if filename else fp))
OSError: cannot identify image file 'file_name'
```
The full code is:
```
import os
from PIL import Image
import image_slicer
import numpy as np
import nibabel as nib
img = nib.load('/home/user_name/volume-20.nii')
img.shape
epi_img_data = img.get_data()
#epi_img_data.shape
n_i, n_j, n_k = epi_img_data.shape
center_i = (n_i - 1) // 2
center_j = (n_j - 1) // 2
center_k = (n_k - 1) // 2
centers = [center_i, center_j, center_k]
print("Co-ordinates in the voxel array: ", centers)
#for i in range(n_k):
j=image_slicer.slice('/home/user_name/volume-20.nii' , n_k)
```
However, `nib.load()` works fine; it is only `image_slicer` that fails.
All the nii images are **3D images**.
|
2019/03/01
|
[
"https://Stackoverflow.com/questions/54938607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9235408/"
] |
[Image slicer](https://image-slicer.readthedocs.io/en/latest/) is not intended for reading `nii` format. Here is the [list](https://pillow.readthedocs.io/en/5.1.x/handbook/image-file-formats.html#image-file-formats) of supported formats.
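If you need to cut up 2D slices of the volume anyway, one possible workaround (a rough sketch, assuming you want a single axial slice; the path, slice index, and tile count are placeholders) is to save a slice as a PNG with Pillow first, then run `image_slicer` on that file:
```
import nibabel as nib
import numpy as np
from PIL import Image
import image_slicer

img = nib.load('/home/user_name/volume-20.nii')
data = img.get_data()                 # 3D voxel array, as in the question
mid = data.shape[2] // 2              # e.g. the middle slice along k
slice_2d = data[:, :, mid]

# scale to 0-255 so it can be saved as a greyscale PNG
rng = slice_2d.max() - slice_2d.min() or 1
norm = (255 * (slice_2d - slice_2d.min()) / rng).astype(np.uint8)
Image.fromarray(norm).save('slice.png')

image_slicer.slice('slice.png', 4)    # now a format image_slicer can read
```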
|
This error also occurs when the image file itself is corrupted. I once ran into it after accidentally starting to delete the image in question and cancelling mid-way through.
TL;DR: open the image file and check whether it is OK.
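A quick programmatic check along the same lines (the filename is a placeholder):
```
from PIL import Image

path = 'file_name'
try:
    with Image.open(path) as im:
        im.verify()   # raises an exception if the file is truncated or corrupted
    print("image looks OK")
except Exception as exc:
    print("bad image:", exc)
```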
|
3,561,221
|
This is similar to the question in [merge sort in python](https://stackoverflow.com/questions/3559807/merge-sort-in-python).
I'm restating because I don't think I explained the problem very well over there.
Basically, I have a series of about 1000 files, all containing domain names. Altogether the data is > 1 GB, so I'm trying to avoid loading it all into RAM. Each individual file has been sorted using `.sort(get_tld)`, which sorted the data according to its TLD (not according to its domain name): all the .com's together, the .orgs together, etc.
a typical file might look like
```
something.ca
somethingelse.ca
somethingnew.com
another.net
whatever.org
etc.org
```
but obviously longer.
I now want to merge all the files into one, maintaining the sort so that in the end the one large file will still have all the .coms together, .orgs together, etc.
What I want to do basically is
```
open all the files
loop:
read 1 line from each open file
put them all in a list and sort with .sort(get_tld)
write each item from the list to a new file
```
The problem I'm having is that I can't figure out how to loop over the files.
I can't use **with open() as** because I don't have one file to loop over; I have many. Also, they're all of variable length, so I have to make sure to get all the way through the longest one.
Any advice is much appreciated.
|
2010/08/24
|
[
"https://Stackoverflow.com/questions/3561221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/410296/"
] |
Whether you're able to keep 1000 files open at once is a separate issue and depends on your OS and its configuration; if not, you'll have to proceed in two steps -- merge groups of N files into temporary ones, then merge the temporary ones into the final-result file (two steps should suffice, as they let you merge a total of N squared files; as long as N is at least 32, merging 1000 files should therefore be possible). In any case, this is a separate issue from the "merge N input files into one output file" task (it's only a question of whether you call that function once or repeatedly).
The general idea for the function is to keep a priority queue (module `heapq` is good at that;-) with small lists containing the "sorting key" (the current TLD, in your case) followed by the last line read from the file, and finally the open file ready for reading the next line (and something distinct in between to ensure that the normal lexicographical order won't accidentally end up trying to compare two open files, which would fail). I think some code is probably the best way to explain the general idea, so next I'll edit this answer to supply the code (however I have no time to *test* it, so take it as pseudocode intended to communicate the idea;-).
```
import heapq
def merge(inputfiles, outputfile, key):
"""inputfiles: list of input, sorted files open for reading.
outputfile: output file open for writing.
key: callable supplying the "key" to use for each line.
"""
# prepare the heap: items are lists with [thekey, k, theline, thefile]
# where k is an arbitrary int guaranteed to be different for all items,
# theline is the last line read from thefile and not yet written out,
# (guaranteed to be a non-empty string), thekey is key(theline), and
# thefile is the open file
h = [(k, i.readline(), i) for k, i in enumerate(inputfiles)]
h = [[key(s), k, s, i] for k, s, i in h if s]
heapq.heapify(h)
while h:
# get and output the lowest available item (==available item w/lowest key)
item = heapq.heappop(h)
outputfile.write(item[2])
# replenish the item with the _next_ line from its file (if any)
item[2] = item[3].readline()
if not item[2]: continue # don't reinsert finished files
# compute the key, and re-insert the item appropriately
item[0] = key(item[2])
heapq.heappush(h, item)
```
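If your OS won't let you keep all 1000 files open, a hypothetical two-pass driver around this `merge()` could look like the following (same untested-pseudocode caveat as above; `batch` is an assumption):
```
import os
import tempfile

def merge_in_batches(paths, outpath, key, batch=32):
    # first pass: merge groups of `batch` files into temporary files
    tmp_paths = []
    for start in range(0, len(paths), batch):
        inputs = [open(p) for p in paths[start:start + batch]]
        with tempfile.NamedTemporaryFile('w', suffix='.merged',
                                         delete=False) as tmp:
            merge(inputs, tmp, key)    # merge() as defined above
            tmp_paths.append(tmp.name)
        for f in inputs:
            f.close()
    # second pass: merge the (at most len(paths)/batch) temporary files
    inputs = [open(p) for p in tmp_paths]
    with open(outpath, 'w') as out:
        merge(inputs, out, key)
    for f in inputs:
        f.close()
        os.remove(f.name)
```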
Of course, in your case, as the `key` function you'll want one that extracts the top-level domain given a line that's a domain name (with trailing newline) -- in a previous question you were already pointed to the urlparse module as preferable to string manipulation for this purpose. If you do insist on string manipulation,
```
def tld(domain):
return domain.rsplit('.', 1)[-1].strip()
```
or something along these lines is probably a reasonable approach under this constraint.
If you use Python 2.6 or better, [heapq.merge](http://docs.python.org/library/heapq.html#heapq.merge) is the obvious alternative, but in that case you need to prepare the iterators yourself (including ensuring that "open file objects" never end up being compared by accident...) with a similar "decorate / undecorate" approach to the one I use in the more portable code above.
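For illustration, a minimal sketch of that decorate / undecorate idea with `heapq.merge` (untested; note that with files sorted only by TLD, the output is guaranteed to be grouped by TLD, not fully sorted within each group):
```
import heapq

def merge_with_key(inputfiles, outputfile, key):
    # decorate each line with its key so heapq.merge only ever compares
    # (key, line) tuples -- never the open file objects themselves
    decorated = (((key(line), line) for line in f) for f in inputfiles)
    for _, line in heapq.merge(*decorated):
        outputfile.write(line)
```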
|
You want to use merge sort, e.g. `heapq.merge`. I'm not sure if your OS allows you to open 1000 files simultaneously. If not you may have to do it in 2 or more passes.
|
Why not divide the domains by first letter? You would just split the source files into 26 or more bucket files, named something like domains-a.dat, domains-b.dat, and so on. Then you can load each of these entirely into RAM, sort it, and write it out to a common file.
So:
the input files are split into 26+ bucket files;
the 26+ bucket files are loaded individually, sorted in RAM, and then written to the combined file.
If 26 files are not enough, I'm sure you could split into even more files... domains-ab.dat. The point is that files are cheap and easy to work with (in Python and many other languages), and you should use them to your advantage.
|
Your algorithm for merging sorted files is incorrect. What you should do is read one line from each file, find the lowest-ranked item among all the lines read, write it to the output file, and then replenish only that file by reading its next line. Repeat this process (ignoring any files that are at EOF) until the end of all files has been reached.
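A minimal sketch of that loop (assuming Python and a `key()` helper like the `tld()` function shown earlier in this thread; untested):
```
def naive_merge(paths, outpath, key):
    files = [open(p) for p in paths]
    heads = [f.readline() for f in files]        # one pending line per file
    with open(outpath, 'w') as out:
        while any(heads):
            # pick the pending line with the lowest key, skipping finished files
            i = min((j for j, line in enumerate(heads) if line),
                    key=lambda j: key(heads[j]))
            out.write(heads[i])
            heads[i] = files[i].readline()       # replenish from that file only
    for f in files:
        f.close()
```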
|
```
#! /usr/bin/env python
"""Usage: unconfuse.py file1 file2 ... fileN
Reads a list of domain names from each file, and writes them to standard output grouped by TLD.
"""
import sys, os
spools = {}
# pass 1: append every line to a per-TLD spool file on disk
for name in sys.argv[1:]:
    for line in file(name):
        if line == "\n": continue                # skip blank lines
        tld = line[line.rindex(".")+1:-1]        # text after the last dot, minus the newline
        spool = spools.get(tld, None)
        if spool is None:
            spool = file(tld + ".spool", "w+")
            spools[tld] = spool
        spool.write(line)
# pass 2: replay the spools in TLD order, then clean up
for tld in sorted(spools.iterkeys()):
    spool = spools[tld]
    spool.seek(0)
    for line in spool:
        sys.stdout.write(line)
    spool.close()
    os.remove(spool.name)
```
|
15,351,515
|
I wrote my own implementation of the `ISession` [interface](http://docs.pylonsproject.org/projects/pyramid/en/1.0-branch/_modules/pyramid/interfaces.html#ISession) of Pyramid which should store the Session in a database. Everything works real nice, but somehow `pyramid_tm` throws up on this. As soon as it is activated it says this:
```
DetachedInstanceError: Instance <Session at 0x38036d0> is not bound to a Session;
attribute refresh operation cannot proceed
```
(Don't get confused here: the `<Session ...>` is the class name of the model; the "... to a Session" part refers to SQLAlchemy's session, which I call `DBSession` to avoid confusion.)
I have looked through mailing lists and SO and it seems anytime someone has the problem, they are
* spawning a new thread or
* manually call `transaction.commit()`
I do neither of those things. However, the special thing here is that my session gets passed around by Pyramid a lot. First I do `DBSession.add(session)` and then `return session`. Afterwards I can work with the session, flash new messages, etc.
However, it seems once the request finishes, I get this exception. Here is the full traceback:
```
Traceback (most recent call last):
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/channel.py", line 329, in service
task.service()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 173, in service
self.execute()
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 380, in execute
app_iter = self.channel.server.application(env, start_response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 251, in __call__
response = self.invoke_subrequest(request, use_tweens=True)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 231, in invoke_subrequest
request._process_response_callbacks(response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/request.py", line 243, in _process_response_callbacks
callback(self, response)
File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/miniblog/miniblog/models.py", line 218, in _set_cookie
print("Setting cookie %s with value %s for session with id %s" % (self._cookie_name, self._cookie, self.id))
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 168, in __get__
return self.impl.get(instance_state(instance),dict_)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 451, in get
value = callable_(passive)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/state.py", line 285, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/mapper.py", line 1668, in _load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <Session at 0x7f4a1c04e710> is not bound to a Session; attribute refresh operation cannot proceed
```
For this case, I deactivated the debug toolbar. The error gets thrown from there once I activate it. It seems the problem here is accessing the object at any point.
I realize I could try to detach it somehow, but this doesn't seem like the right way, as the object could then not be modified without explicitly adding it to a session again.
So if I don't spawn new threads and I don't explicitly call commit, I guess the transaction is committed before the request is fully gone, and afterwards the object is accessed again. How do I handle this problem?
|
2013/03/12
|
[
"https://Stackoverflow.com/questions/15351515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326104/"
] |
I believe what you're seeing here is a quirk of the fact that response callbacks and finished callbacks are actually executed after tweens. They are positioned just between your app's egress and the middleware. `pyramid_tm`, being a tween, is committing the transaction before your response callback executes - causing the error upon later access.
Getting the order of these things correct is difficult. A possibility off the top of my head is to register your own tween **under** `pyramid_tm` that performs a flush on the session, grabs the id, and sets the cookie on the response.
I sympathize with this issue, as anything that happens after the transaction has been committed is a real gray area in Pyramid where it's not always clear that the session should not be touched. I'll make a note to continue thinking about how to improve this workflow for Pyramid in the future.
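A rough sketch of that tween idea (`DBSession`, the module path, and the cookie name are assumptions based on the question, not a drop-in implementation):
```
from myapp.models import DBSession  # hypothetical import for your scoped session

def session_cookie_tween_factory(handler, registry):
    def session_cookie_tween(request):
        response = handler(request)
        # still under pyramid_tm here, so the transaction is open and the
        # session object is attached; flushing assigns session.id
        DBSession.flush()
        response.set_cookie('session_id', str(request.session.id))
        return response
    return session_cookie_tween

# register it under pyramid_tm so it runs inside the transaction:
# config.add_tween('myapp.tweens.session_cookie_tween_factory',
#                  under='pyramid_tm.tm_tween_factory')
```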
|
I first tried registering a tween and it somehow worked, but the data did not get saved. I then stumbled upon the [SQLAlchemy Event System](http://docs.sqlalchemy.org/en/latest/core/event.html). I found the [after\_commit](http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.SessionEvents.after_commit) event. Using this, I could set up detaching the session object after the commit was done by `pyramid_tm`. I think this provides full flexibility and doesn't impose any requirements on the order.
My final solution:
```
from sqlalchemy.event import listen
from sqlalchemy.orm import Session as SASession
import logging

log = logging.getLogger(__name__)

def detach(db_session):
    from pyramid.threadlocal import get_current_request
    request = get_current_request()
    log.debug("Expunging (detaching) session for DBSession")
    db_session.expunge(request.session)

listen(SASession, 'after_commit', detach)
```
Only drawback: it requires calling [get\_current\_request()](http://docs.pylonsproject.org/projects/pyramid/en/latest/api/threadlocal.html#pyramid.threadlocal.get_current_request), which is discouraged. However, I saw no way of passing the session in any other way, as the event gets called by SQLAlchemy. I thought about some ugly wrapping, but I think that would have been way too risky and unstable.
|
51,118,801
|
I am very new to Python (and programming in general) and here is my issue: I would like to replace (or delete) part of a string in a txt file which contains hundreds or thousands of lines. Each line starts with the very same string, which I want to delete.
I have not found a method to delete it, so I tried to replace it with an empty string, but for some reason it doesn't work.
Here is what I have written:
```
file = "C:/Users/experimental/Desktop/testfile siera.txt"
siera_log = open(file)
text_to_replace = "Chart: Bar Backtest: NQU8-CME [CB] 1 Min #1 | Study: free dll = 0 |"
for each_line in siera_log:
new_line = each_line.replace("text_to_replace", " ")
print(new_line)
```
When I print the lines to check if it was done, I can see that they are as they were before; no change was made.
Can anyone help me find out why?
|
2018/06/30
|
[
"https://Stackoverflow.com/questions/51118801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8741601/"
] |
> Each line starts with the very same string, which I want to delete.
The problem is you're passing a string `"text_to_replace"` rather than the variable `text_to_replace`.
But, for this specific problem, you could just remove the first *n* characters from each line:
```
text_to_replace = "Chart: Bar Backtest: NQU8-CME [CB] 1 Min #1 | Study: free dll = 0 |"
n = len(text_to_replace)
for each_line in siera_log:
new_line = each_line[n:]
print(new_line)
```
|
If you quote a variable it becomes a string literal and won't be evaluated as a variable.
Change your line for replacement to:
```
new_line = each_line.replace(text_to_replace, " ")
```
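Also note that `replace` returns a new string and nothing in the loop writes the result back to disk. A minimal sketch of the full read-modify-write cycle (reusing the path and marker text from the question, and replacing with an empty string to actually delete the prefix):
```
file_path = "C:/Users/experimental/Desktop/testfile siera.txt"
text_to_replace = "Chart: Bar Backtest: NQU8-CME [CB] 1 Min #1 | Study: free dll = 0 |"

# read and transform every line, then write the result back
with open(file_path) as f:
    lines = [line.replace(text_to_replace, "") for line in f]

with open(file_path, "w") as f:
    f.writelines(lines)
```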
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there, I tried to isolate the root cause and came up with the smallest setup that still runs into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
On M1 MacBook Pro, I've had success using `docker run --platform linux/amd64`
**Example**
```
docker run --platform linux/amd64 node
```
|
With docker-compose you also have the `platform` option.
```
version: "2.4"
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.1.1
hostname: zookeeper
container_name: zookeeper
platform: linux/amd64
ports:
- "2181:2181"
```
|
If you're planning to run the image on your laptop, you need to build it for the CPU architecture of that particular machine. You can provide the `--platform` option to `docker build` (or even to `docker-compose`) to define the target platform you want to build the image for.
For example:
```
docker build --platform linux/amd64 .
```
|
You might need to run
```
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
in order to register foreign file formats with the kernel.
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else will happen anymore and the whole process is stuck, although the qemu-system-aarch64 is running on 100% CPU according to Activity Monitor until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with Intel-processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
On M1 MacBook Pro, I've had success using `docker run --platform linux/amd64`
**Example**
```
docker run --platform linux/amd64 node
```
|
You might need to run
```
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
in order to register foreign file formats with the kernel.
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else will happen anymore and the whole process is stuck, although the qemu-system-aarch64 is running on 100% CPU according to Activity Monitor until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
If you're planning to run the image on your laptop, you need to build it for the CPU architecture of that particular machine. You can provide the `--platform` option to `docker build` (or even to `docker-compose`) to define the target platform you want to build the image for.
For example:
```
docker build --platform linux/amd64 .
```
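With `docker-compose`, a minimal sketch (assuming a recent Docker CLI and Compose v2, which honour this variable) is to set `DOCKER_DEFAULT_PLATFORM` before building:
```
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker-compose up --build
```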
|
You need to have docker buildx installed. If you don't have Docker Desktop, you can download the buildx binary from GitHub: <https://github.com/docker/buildx/>
After installation you can build your image as Theofilos Papapanagiotou said:
`<downloaded_path>/buildx --platform linux/amd64 ...`
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
Build the image by passing a list of target architectures.
Try this:
```
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
```
Note: make sure to include the "." at the end.
|
You need to have docker buildx installed. If you don't have Docker Desktop, you can download the buildx binary from GitHub: <https://github.com/docker/buildx/>
After installation you can build your image as Theofilos Papapanagiotou said:
`<downloaded_path>/buildx --platform linux/amd64 ...`
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
With docker-compose you also have the `platform` option.
```
version: "2.4"
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.1.1
hostname: zookeeper
container_name: zookeeper
platform: linux/amd64
ports:
- "2181:2181"
```
|
You might need to run
```
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
in order to register foreign binary formats (binfmt_misc handlers) with the kernel.
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
If you're planning to run the image on your laptop, you need to build it for the CPU architecture of that particular machine. You can provide the `--platform` option to `docker build` (or even to `docker-compose`) to define the target platform you want to build the image for.
For example:
```
docker build --platform linux/amd64 .
```
|
Build the image by passing a list of target architectures.
Try this:
```
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
```
Note: make sure to include the "." at the end.
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
With docker-compose you also have the `platform` option.
```
version: "2.4"
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.1.1
hostname: zookeeper
container_name: zookeeper
platform: linux/amd64
ports:
- "2181:2181"
```
|
You need to have docker buildx installed. If you don't have Docker Desktop, you can download the buildx binary from GitHub: <https://github.com/docker/buildx/>
After installation you can build your image as Theofilos Papapanagiotou said:
`<downloaded_path>/buildx --platform linux/amd64 ...`
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
On an M1 MacBook Pro, I've had success using `docker run --platform linux/amd64`
**Example**
```
docker run --platform linux/amd64 node
```
|
Build the image by passing a list of target architectures.
Try this:
```
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
```
Note: make sure to include the "." at the end.
|
69,054,921
|
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error:
```
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
After this line nothing else happens and the whole process is stuck, although qemu-system-aarch64 runs at 100% CPU (according to Activity Monitor) until I press `CTRL`+`C`.
My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error.
This is the output of `docker-compose up --build`:
```
Building ganache
Sending build context to Docker daemon 196.6kB
Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1
---> 40b011a5f8e5
Step 2/17 : LABEL Unlock <ops@unlock-protocol.com>
---> Using cache
---> aad8a72dac4e
Step 3/17 : RUN apk add --no-cache git openssh bash
---> Using cache
---> 4ca6312438bd
Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv
---> Using cache
---> 0be290f541ed
Step 5/17 : RUN npm install -g npm@6.4.1
---> Using cache
---> d906d229a768
Step 6/17 : RUN npm install -g yarn
---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
---> Running in 991c1d804fdf
```
**docker-compose.yml:**
```
version: '3.2'
services:
ganache:
restart: always
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
ports:
- 8545:8545
ganache-standup:
image: ganache-standup
build:
context: ./development
dockerfile: ganache.dockerfile
env_file: ../.env.dev.local
entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js']
depends_on:
- ganache
```
**ganache.dockerfile:**
The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile).
Running the whole project on an older iMac with an Intel processor works fine.
|
2021/09/04
|
[
"https://Stackoverflow.com/questions/69054921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6727976/"
] |
With docker-compose you also have the `platform` option.
```
version: "2.4"
services:
zookeeper:
image: confluentinc/cp-zookeeper:7.1.1
hostname: zookeeper
container_name: zookeeper
platform: linux/amd64
ports:
- "2181:2181"
```
|
Build the image by passing a list of target architectures.
Try this:
```
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo:latest --push .
```
Note: make sure to include the "." at the end.
|
49,924,302
|
I have a couple of date strings with the following pattern: MM DD(st, nd, rd, th) YYYY HH:MM am. What is the most pythonic way to replace (st, nd, rd, th) with an empty string ''?
```
s = ['st', 'nd', 'rd', 'th']
string = 'Mar 1st 2017 00:00 am'
string = 'Mar 2nd 2017 00:00 am'
string = 'Mar 3rd 2017 00:00 am'
string = 'Mar 4th 2017 00:00 am'
for i in s:
a = string.replace(i, '')
a = [string.replace(i, '') for i in s][0]
print(a)
```
|
2018/04/19
|
[
"https://Stackoverflow.com/questions/49924302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6373357/"
] |
The most pythonic way is to use `dateutil`.
```
from dateutil.parser import parse
import datetime
t = parse("Mar 2nd 2017 00:00 am")
# you can access the month, hour, minute, etc:
t.hour # 0
t.minute # 0
t.month # 3
```
And then, you can use `t.strftime()`, where the formatting of the resulting string is any of these: <http://strftime.org/>
If you want a more *appropriate representation* of the time (for example, in your own locale), then you could do `t.strftime("%c")`, or you could easily format it into the answer you wanted above.
This is much safer than a regex match because `dateutil` is a mature, widely used third-party library (installable with `pip install python-dateutil`) that returns a concise `datetime` object.
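For example, to rebuild the string without the ordinal suffix (a small sketch; the `strftime` format below is just one reasonable choice):
```
from dateutil.parser import parse

t = parse("Mar 1st 2017 00:00 am")
# %d zero-pads the day, so this prints "Mar 01 2017 00:00 AM"
print(t.strftime("%b %d %Y %H:%M %p"))
```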
|
You could use a regular expression as follows:
```
import re
strings = ['Mar 1st 2017 00:00 am', 'Mar 2nd 2017 00:00 am', 'Mar 3rd 2017 00:00 am', 'Mar 4th 2017 00:00 am']
for string in strings:
print(re.sub('(.*? \d+)(.*?)( .*)', r'\1\3', string))
```
This would give you:
```none
Mar 1 2017 00:00 am
Mar 2 2017 00:00 am
Mar 3 2017 00:00 am
Mar 4 2017 00:00 am
```
If you want to restrict it do just `st` `nd` `rd` `th`:
```
print(re.sub('(.*? \d+)(st|nd|rd|th)( .*)', r'\1\3', string))
```
|
30,522,420
|
I'm going through the new book "Data Science from Scratch: First Principles with Python" and I think I've found an errata.
When I run the code I get `"TypeError: 'int' object has no attribute '__getitem__'".` I think this is because when I try to select `friend["friends"]`, `friend` is an integer that I can't subset. Is that correct? How can I continue the exercises so that I get the desired output? It should be a list of friend of friends (foaf). I know there's repetition problems but those are fixed later...
```
users = [
{"id": 0, "name": "Ashley"},
{"id": 1, "name": "Ben"},
{"id": 2, "name": "Conrad"},
{"id": 3, "name": "Doug"},
{"id": 4, "name": "Evin"},
{"id": 5, "name": "Florian"},
{"id": 6, "name": "Gerald"}
]
#create list of tuples where each tuple represents a friendships between ids
friendships = [(0,1), (0,2), (0,5), (1,2), (1,5), (2,3), (2,5), (3,4), (4,5), (4,6)]
#add friends key to each user
for user in users:
user["friends"] = []
#go through friendships and add each one to the friends key in users
for i, j in friendships:
users[i]["friends"].append(j)
users[j]["friends"].append(i)
def friends_of_friend_ids_bad(user):
#foaf is friend of friend
return [foaf["id"]
for friend in user["friends"]
for foaf in friend["friends"]]
print friends_of_friend_ids_bad(users[0])
```
Full traceback:
```
Traceback (most recent call last):
File "/Users/marlon/Desktop/test.py", line 57, in <module>
print friends_of_friend_ids_bad(users[0])
File "/Users/marlon/Desktop/test.py", line 55, in friends_of_friend_ids_bad
for foaf in friend["friends"]]
TypeError: 'int' object has no attribute '__getitem__'
[Finished in 0.6s with exit code 1]
[shell_cmd: python -u "/Users/marlon/Desktop/test.py"]
[dir: /Users/marlon/Desktop]
[path: /usr/bin:/bin:/usr/sbin:/sbin]
```
How I think it can be fixed:
I think you need `users` as a second argument and then do `for foaf in users[friend]["friends"]` instead of `for foaf in friend["friends"]`.
|
2015/05/29
|
[
"https://Stackoverflow.com/questions/30522420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2469211/"
] |
Yes, you've found an incorrect piece of code in the book.
Implementation for `friends_of_friend_ids_bad` function should be like this:
```
def friends_of_friend_ids_bad(user):
#foaf is friend of friend
return [users[foaf]["id"]
for friend in user["friends"]
for foaf in users[friend]["friends"]]
```
`user["friends"]` is a list of integers, thus `friend` is an integer and `friend["friends"]` will raise `TypeError` exception
---
**UPD**
It seems, that the problem in the book was not about `friends_of_friend_ids_bad` function but about populating `friends` lists.
Replace
```
for i, j in friendships:
users[i]["friends"].append(j)
users[j]["friends"].append(i)
```
with
```
for i, j in friendships:
users[i]["friends"].append(users[j])
users[j]["friends"].append(users[i])
```
Then `friends_of_friend_ids_bad` and `friends_of_friend_ids` will work as intended.
|
The error is on:
```
return [foaf["id"] for friend in user["friends"] for foaf in friend["friends"]]
```
In the second `for` loop, `friend` is an integer (for `users[0]` it takes the values 1, 2 and 5), and integers don't have `__getitem__`.
You're trying to collect `foaf["id"]` for each `friend` in `user["friends"]` and for each `foaf` in `friend["friends"]`. The problem is that `friend` is just one of the integer ids stored in the user's friends list (taken from the tuples in `friendships`), so `friend["friends"]` tries to call `__getitem__` on an integer value.
That's the exact cause of your problem.
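A quick interactive illustration of the diagnosis (a sketch, assuming the `users` list from the question has already been populated with the integer friend ids):
```
>>> users[0]["friends"]               # a list of ids, not of user dicts
[1, 2, 5]
>>> friend = users[0]["friends"][0]   # friend is the int 1
>>> friend["friends"]
TypeError: 'int' object has no attribute '__getitem__'
```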
|
65,154,521
|
When I try to click this button with Selenium, it gives me the error below.
This is my code:
```
#LOGIN IN WEBSITE
browser = webdriver.Firefox()
browser.get("http://class.apphafez.ir/")
username_input = browser.find_element_by_css_selector("input[name='UserName']")
password_input = browser.find_element_by_css_selector("input[name='Password']")
username_input.send_keys(username_entry.get())
password_input.send_keys(password_entry.get())
button_go = browser.find_element_by_xpath("//button[@type='submit']")
button_go.click()
#GO CLASS
wait = WebDriverWait(browser , 10)
go_to_class = wait.until(EC.element_to_be_clickable((By.XPATH , ("//div[@class='btn btn- palegreen enterClassBtn'"))))
go_to_class.click()
```
This is the site's HTML:
```
<div class="databox-row padding-10">
<button data-bind="attr: { 'data-weekscheduleId' : Id}" style="width:100%" class="btn btn-palegreen enterClassBtn" data-weekscheduleid="320">"i want to ckick here"</button>
```
This is the error my program produces:
```
File "hafezlearn.py", line 33, in login_use
go_to_class = wait.until(EC.element_to_be_clickable((By.XPATH , ("//div[@class='btn btn- palegreen enterClassBtn'"))))
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
```
|
2020/12/05
|
[
"https://Stackoverflow.com/questions/65154521",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13937766/"
] |
You were close enough. The value of the *class* attribute is **`btn btn-palegreen enterClassBtn`**, not `btn btn- palegreen enterClassBtn`; you can't add extra spaces within the attribute value.
---
Solution
--------
To click on the element you need to induce [WebDriverWait](https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336) for the `element_to_be_clickable()` and you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890):
* Using `CSS_SELECTOR`:
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.btn.btn-palegreen.enterClassBtn[data-bind*='data-weekscheduleId']"))).click()
```
* Using `XPATH`:
```
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@class='btn btn-palegreen enterClassBtn' and text()='i want to ckick here'][contains(@data-bind, 'data-weekscheduleId')]"))).click()
```
* **Note**: You have to add the following imports:
```
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
```
|
Multiple class names in a `class` attribute are tough to handle; usually the easiest way is to use a CSS selector:
```
button.btn.btn-palegreen.enterClassBtn
```
Specifically:
```
go_to_class = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR , ("button.btn.btn-palegreen.enterClassBtn"))))
```
See also [How to get elements with multiple classes](https://stackoverflow.com/questions/7184562/how-to-get-elements-with-multiple-classes/7184581)
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could reshape the array to a `10x10`, then use slicing to pick the first 4 elements of each row. Then flatten the reshaped, sliced array:
```
In [46]: print a
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
In [47]: print a.reshape((10,-1))[:,:4].flatten()
[ 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 50 51 52 53 60
61 62 63 70 71 72 73 80 81 82 83 90 91 92 93]
```
|
Use `% 10`:
```
print [i for i in range(100) if i % 10 in (0, 1, 2, 3)]
[0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31, 32, 33, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 62, 63, 70, 71, 72, 73, 80, 81, 82, 83, 90, 91, 92, 93]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could use [`NumPy slicing`](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#basic-slicing-and-indexing) to solve your case.
For a `1D` array case -
```
A.reshape(-1,10)[:,:4].reshape(-1)
```
This can be extended to a `2D` array case with the selection to be made along the first axis -
```
A.reshape(-1,10,A.shape[1])[:,:4].reshape(-1,A.shape[1])
```
|
Use `% 10`:
```
print [i for i in range(100) if i % 10 in (0, 1, 2, 3)]
[0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31, 32, 33, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 62, 63, 70, 71, 72, 73, 80, 81, 82, 83, 90, 91, 92, 93]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
Use `% 10`:
```
print [i for i in range(100) if i % 10 in (0, 1, 2, 3)]
[0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31, 32, 33, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 62, 63, 70, 71, 72, 73, 80, 81, 82, 83, 90, 91, 92, 93]
```
|
In the OP's example, the length of the input array is divisible by `n+m`. If it isn't, you could use the function `take_n_skip_m` below. It expands on @Divakar's answer by padding the input array to make it reshapeable into a proper 2D matrix, then slicing, flattening, and slicing again to get the desired outcome:
```
def take_n_skip_m(arr, n=4, m=6):
# in case len(arr) is not divisible by (n+m), get the remainder
remainder = len(arr) % (n+m)
# will pad arr with (n+m-remainder) 0s at the back
pad_size = (0, n+m-remainder)
# pad arr; reshape to create 2D array; take first n of each row; flatten 2D->1D
sliced_arr = np.pad(arr, pad_size).reshape(-1, n+m)[:, :n].flatten()
# remove any remaining padding constant if there is any (which depends on whether remainder >= n or not)
return sliced_arr if remainder >= n else sliced_arr[:remainder-n]
```
Examples:
```
>>> out = take_n_skip_m(np.arange(20), n=5, m=4)
>>> print(out)
[ 0 1 2 3 4 9 10 11 12 13 18 19]
>>> out = take_n_skip_m(np.arange(20), n=5, m=6)
>>> print(out)
[ 0 1 2 3 4 11 12 13 14 15]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could reshape the array to a `10x10`, then use slicing to pick the first 4 elements of each row. Then flatten the reshaped, sliced array:
```
In [46]: print a
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
In [47]: print a.reshape((10,-1))[:,:4].flatten()
[ 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 50 51 52 53 60
61 62 63 70 71 72 73 80 81 82 83 90 91 92 93]
```
|
```
shorter_arr = arr[np.arange(len(arr)) % 10 < 4]  # boolean mask: keep positions 0-3 of every block of 10
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could use [`NumPy slicing`](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#basic-slicing-and-indexing) to solve your case.
For a `1D` array case -
```
A.reshape(-1,10)[:,:4].reshape(-1)
```
This can be extended to a `2D` array case with the selection to be made along the first axis -
```
A.reshape(-1,10,A.shape[1])[:,:4].reshape(-1,A.shape[1])
```
|
```
shorter_arr = arr[np.arange(len(arr)) % 10 < 4]  # boolean mask: keep positions 0-3 of every block of 10
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
```
shorter_arr = arr[np.arange(len(arr)) % 10 < 4]  # boolean mask: keep positions 0-3 of every block of 10
```
|
In the OP's example, the length of the input array is divisible by `n+m`. If it isn't, you could use the function `take_n_skip_m` below. It expands on @Divakar's answer by padding the input array to make it reshapeable into a proper 2D matrix, then slicing, flattening, and slicing again to get the desired outcome:
```
def take_n_skip_m(arr, n=4, m=6):
# in case len(arr) is not divisible by (n+m), get the remainder
remainder = len(arr) % (n+m)
# will pad arr with (n+m-remainder) 0s at the back
pad_size = (0, n+m-remainder)
# pad arr; reshape to create 2D array; take first n of each row; flatten 2D->1D
sliced_arr = np.pad(arr, pad_size).reshape(-1, n+m)[:, :n].flatten()
# remove any remaining padding constant if there is any (which depends on whether remainder >= n or not)
return sliced_arr if remainder >= n else sliced_arr[:remainder-n]
```
Examples:
```
>>> out = take_n_skip_m(np.arange(20), n=5, m=4)
>>> print(out)
[ 0 1 2 3 4 9 10 11 12 13 18 19]
>>> out = take_n_skip_m(np.arange(20), n=5, m=6)
>>> print(out)
[ 0 1 2 3 4 11 12 13 14 15]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could use [`NumPy slicing`](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#basic-slicing-and-indexing) to solve your case.
For a `1D` array case -
```
A.reshape(-1,10)[:,:4].reshape(-1)
```
This can be extended to a `2D` array case with the selection to be made along the first axis -
```
A.reshape(-1,10,A.shape[1])[:,:4].reshape(-1,A.shape[1])
```
|
You could reshape the array to a `10x10`, then use slicing to pick the first 4 elements of each row. Then flatten the reshaped, sliced array:
```
In [46]: print a
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
In [47]: print a.reshape((10,-1))[:,:4].flatten()
[ 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 50 51 52 53 60
61 62 63 70 71 72 73 80 81 82 83 90 91 92 93]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could reshape the array to a `10x10`, then use slicing to pick the first 4 elements of each row. Then flatten the reshaped, sliced array:
```
In [46]: print a
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74
75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99]
In [47]: print a.reshape((10,-1))[:,:4].flatten()
[ 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 50 51 52 53 60
61 62 63 70 71 72 73 80 81 82 83 90 91 92 93]
```
|
In the OP's example, the length of the input array is divisible by `n+m`. If it isn't, you could use the function `take_n_skip_m` below. It expands on @Divakar's answer by padding the input array to make it reshapeable into a proper 2D matrix, then slicing, flattening, and slicing again to get the desired outcome:
```
def take_n_skip_m(arr, n=4, m=6):
# in case len(arr) is not divisible by (n+m), get the remainder
remainder = len(arr) % (n+m)
# will pad arr with (n+m-remainder) 0s at the back
pad_size = (0, n+m-remainder)
# pad arr; reshape to create 2D array; take first n of each row; flatten 2D->1D
sliced_arr = np.pad(arr, pad_size).reshape(-1, n+m)[:, :n].flatten()
# remove any remaining padding constant if there is any (which depends on whether remainder >= n or not)
return sliced_arr if remainder >= n else sliced_arr[:remainder-n]
```
Examples:
```
>>> out = take_n_skip_m(np.arange(20), n=5, m=4)
>>> print(out)
[ 0 1 2 3 4 9 10 11 12 13 18 19]
>>> out = take_n_skip_m(np.arange(20), n=5, m=6)
>>> print(out)
[ 0 1 2 3 4 11 12 13 14 15]
```
|
33,801,170
|
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6, and continue like this (in other words, select the first 4 elements of every 10).
I tried Python slicing with a step, but I think it doesn't work in my case. How can I do that? I'm using Pandas and NumPy; can they help? I searched around but found nothing like that kind of slicing. Thanks!
|
2015/11/19
|
[
"https://Stackoverflow.com/questions/33801170",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5580662/"
] |
You could use [`NumPy slicing`](http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#basic-slicing-and-indexing) to solve your case.
For a `1D` array case -
```
A.reshape(-1,10)[:,:4].reshape(-1)
```
This can be extended to a `2D` array case with the selection to be made along the first axis -
```
A.reshape(-1,10,A.shape[1])[:,:4].reshape(-1,A.shape[1])
```
|
In the OP's example, the length of the input array is divisible by `n+m`. If it isn't, you could use the function `take_n_skip_m` below. It expands on @Divakar's answer by padding the input array to make it reshapeable into a proper 2D matrix, then slicing, flattening, and slicing again to get the desired outcome:
```
def take_n_skip_m(arr, n=4, m=6):
# in case len(arr) is not divisible by (n+m), get the remainder
remainder = len(arr) % (n+m)
# will pad arr with (n+m-remainder) 0s at the back
pad_size = (0, n+m-remainder)
# pad arr; reshape to create 2D array; take first n of each row; flatten 2D->1D
sliced_arr = np.pad(arr, pad_size).reshape(-1, n+m)[:, :n].flatten()
# remove any remaining padding constant if there is any (which depends on whether remainder >= n or not)
return sliced_arr if remainder >= n else sliced_arr[:remainder-n]
```
Examples:
```
>>> out = take_n_skip_m(np.arange(20), n=5, m=4)
>>> print(out)
[ 0 1 2 3 4 9 10 11 12 13 18 19]
>>> out = take_n_skip_m(np.arange(20), n=5, m=6)
>>> print(out)
[ 0 1 2 3 4 11 12 13 14 15]
```
|
51,500,519
|
I can't use boto3 to connect to S3 with a role ARN provided 100% programmatically.
```python
session = boto3.Session(role_arn="arn:aws:iam::****:role/*****",
RoleSessionName="****")
s3_client = boto3.client('s3',
aws_access_key_id="****",
aws_secret_access_key="****")
for b in s3_client.list_buckets()["Buckets"]:
print (b["Name"])
```
I can't provide the ARN info to both the Session and the client, and there is no `assume_role()` on an S3 client.
I found a way with an STS temporary token, but I don't like it.
```python
sess = boto3.Session(aws_access_key_id="*****",
aws_secret_access_key="*****")
sts_connection = sess.client('sts')
assume_role_object = sts_connection.assume_role(RoleArn="arn:aws:iam::***:role/******",
RoleSessionName="**",
DurationSeconds=3600)
session = boto3.Session(
aws_access_key_id=assume_role_object['Credentials']['AccessKeyId'],
aws_secret_access_key=assume_role_object['Credentials']['SecretAccessKey'],
aws_session_token=assume_role_object['Credentials']['SessionToken'])
s3_client = session.client('s3')
for b in s3_client.list_buckets()["Buckets"]:
print (b["Name"])
```
Do you have any idea ?
|
2018/07/24
|
[
"https://Stackoverflow.com/questions/51500519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6227500/"
] |
You need to understand how temporary credentials are created.
First you need to create a client using your current access keys. These credentials are then used to verify that you have the permissions to call `assume_role` and have the rights to issue credentials from the IAM role.
If someone could do it your way, there would be a HUGE security hole with `assume_role`. Your rights must be validated first, then you can issue temporary credentials.
|
Firstly, *never* put an Access Key and Secret Key in your code. Always store credentials in a `~/.aws/credentials` file (eg via `aws configure`). This avoids embarrassing situations where your credentials are accidentally released to the world. Also, if you are running on an Amazon EC2 instance, then simply assign an IAM Role to the instance and it will automatically obtain credentials.
An easy way to assume a role in `boto3` is to store the role details in the credentials file with a separate **profile**. You can then reference the profile when creating a client and boto3 will automatically call `assume-role` on your behalf.
See: [boto3: Assume Role Provider](https://boto3.readthedocs.io/en/latest/guide/configuration.html#assume-role-provider)
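For example, a minimal sketch (the profile name, account id, and role name below are placeholders):
```
# ~/.aws/config
# [profile assumed-role]
# role_arn = arn:aws:iam::123456789012:role/my-role
# source_profile = default

import boto3

# boto3's assume-role provider calls sts:AssumeRole for you
# and refreshes the temporary credentials automatically
session = boto3.Session(profile_name='assumed-role')
s3_client = session.client('s3')

for b in s3_client.list_buckets()["Buckets"]:
    print(b["Name"])
```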
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
You should go one level up in your data abstraction. You are not trying to access the entries by their individual names -- you rather use `names` to denote the whole collection of values, so a simple list might be what you want.
If you want both, a name for the collection *and* names for the individual items, then a dictionary might be the way to go:
```
names = "a b c".split()
d = dict(zip(names, (1, 2, 3)))
d.update(zip(names, (3, 2, 1)))
```
If you need something like this repeatedly, you might want to define a class with the names as attributes:
```
class X(object):
    def __init__(self, a, b, c):
        self.update(a, b, c)

    def update(self, a, b, c):
        self.a, self.b, self.c = a, b, c

x = X(1, 2, 3)
x.update(3, 2, 1)
print x.a, x.b, x.c
```
This reflects that you want to block `a`, `b` and `c` to some common structure, but keep the option to access them individually by name.
|
You should use a [**`dict`**](http://docs.python.org/library/stdtypes.html#mapping-types-dict):
```
>>> d = {"a": 1, "b": 2, "c": 3}
>>> d.update({"a": 8})
>>> print(d)
{"a": 8, "c": 3, "b": 2}
```
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
You should go one level up in your data abstraction. You are not trying to access the entries by their individual names -- you rather use `names` to denote the whole collection of values, so a simple list might be what you want.
If you want both, a name for the collection *and* names for the individual items, then a dictionary might be the way to go:
```
names = "a b c".split()
d = dict(zip(names, (1, 2, 3)))
d.update(zip(names, (3, 2, 1)))
```
If you need something like this repeatedly, you might want to define a class with the names as attributes:
```
class X(object):
    def __init__(self, a, b, c):
        self.update(a, b, c)

    def update(self, a, b, c):
        self.a, self.b, self.c = a, b, c

x = X(1, 2, 3)
x.update(3, 2, 1)
print x.a, x.b, x.c
```
This reflects that you want to block `a`, `b` and `c` to some common structure, but keep the option to access them individually by name.
|
I've realised that "exotic" syntax is probably unnecessary. Instead the following achieves what I wanted: (1) to avoid repeating the names and (2) to capture them as a sequence:
```
sequence = (a,b,c) = (1,2,3)
```
Of course, this won't allow:
```
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
So, it won't facilitate repeated assignment to the same group of names without writing out those names repeatedly (except in a loop).
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
This?
```
>>> from collections import namedtuple
>>> names = namedtuple( 'names', ['a','b','c'] )
>>> thing = names(3,2,1)
>>> thing.a
3
>>> thing.b
2
>>> thing.c
1
```
|
I've realised that "exotic" syntax is probably unnecessary. Instead the following achieves what I wanted: (1) to avoid repeating the names and (2) to capture them as a sequence:
```
sequence = (a,b,c) = (1,2,3)
```
Of course, this won't allow:
```
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
So, it won't facilitate repeated assignment to the same group of names without writing out those names repeatedly (except in a loop).
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
You should use a [**`dict`**](http://docs.python.org/library/stdtypes.html#mapping-types-dict):
```
>>> d = {"a": 1, "b": 2, "c": 3}
>>> d.update({"a": 8})
>>> print(d)
{"a": 8, "c": 3, "b": 2}
```
|
Not sure whether this is what you want...
```
>>> a,b,c = (1,2,3)
>>> names = (a,b,c)
>>> names
(1, 2, 3)
>>> (a,b,c) == names
True
>>> (a,b,c) == (1,2,3)
True
```
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
This?
```
>>> from collections import namedtuple
>>> names = namedtuple( 'names', ['a','b','c'] )
>>> thing = names(3,2,1)
>>> thing.a
3
>>> thing.b
2
>>> thing.c
1
```
|
Not sure whether this is what you want...
```
>>> a,b,c = (1,2,3)
>>> names = (a,b,c)
>>> names
(1, 2, 3)
>>> (a,b,c) == names
True
>>> (a,b,c) == (1,2,3)
True
```
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
Python has such an elegant namespace system:
```
#!/usr/bin/env python
class GenericContainer(object):
def __init__(self, *args, **kwargs):
self._names = []
self._names.extend(args)
self.set(**kwargs)
def set(self, *args, **kwargs):
for i, value in enumerate(args):
self.__dict__[self._names[i]] = value
for name, value in kwargs.items():
if name not in self._names:
self._names.append(name)
self.__dict__[name] = value
def zip(self, names, values):
self.set(**dict(zip(names, values)))
def main():
x = GenericContainer('a', 'b', 'c')
x.set(1, 2, 3, d=4)
x.a = 10
print (x.a, x.b, x.c, x.d,)
y = GenericContainer(a=1, b=2, c=3)
y.set(3, 2, 1)
print (y.a, y.b, y.c,)
y.set(**dict(zip(('a', 'b', 'c'), (1, 2, 3))))
print (y.a, y.b, y.c,)
names = 'x', 'y', 'z'
y.zip(names, (4, 5, 6))
print (y.x, y.y, y.z,)
if __name__ == '__main__':
main()
```
Each instance of GenericContainer is an isolated namespace. IMHO it is better than messing with the local namespace even if you are programming under a pure procedural paradigm.
|
Not sure whether this is what you want...
```
>>> a,b,c = (1,2,3)
>>> names = (a,b,c)
>>> names
(1, 2, 3)
>>> (a,b,c) == names
True
>>> (a,b,c) == (1,2,3)
True
```
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
This?
```
>>> from collections import namedtuple
>>> names = namedtuple( 'names', ['a','b','c'] )
>>> thing = names(3,2,1)
>>> thing.a
3
>>> thing.b
2
>>> thing.c
1
```
|
Well, you shouldn't do this, since it's potentially unsafe, but you can [use the `exec` statement](http://docs.python.org/reference/simple_stmts.html#exec)
```
>>> names = "a, b, c"
>>> tup = 1,2,3
>>> exec names + "=" + repr(tup)
>>> a, b, c
(1, 2, 3)
```
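As an aside (not part of the original answer): in Python 3, `exec` is a function rather than a statement, so the module-level equivalent would be something like:
```
names = "a, b, c"
tup = (1, 2, 3)
# only reliable at module level; inside a function, exec cannot rebind locals
exec(names + " = " + repr(tup))
print(a, b, c)  # 1 2 3
```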
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
I've realised that "exotic" syntax is probably unnecessary. Instead the following achieves what I wanted: (1) to avoid repeating the names and (2) to capture them as a sequence:
```
sequence = (a,b,c) = (1,2,3)
```
Of course, this won't allow:
```
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
So, it won't facilitate repeated assignment to the same group of names without writing out those names repeatedly (except in a loop).
|
Python has such an elegant namespace system:
```
#!/usr/bin/env python
class GenericContainer(object):
def __init__(self, *args, **kwargs):
self._names = []
self._names.extend(args)
self.set(**kwargs)
def set(self, *args, **kwargs):
for i, value in enumerate(args):
self.__dict__[self._names[i]] = value
for name, value in kwargs.items():
if name not in self._names:
self._names.append(name)
self.__dict__[name] = value
def zip(self, names, values):
self.set(**dict(zip(names, values)))
def main():
x = GenericContainer('a', 'b', 'c')
x.set(1, 2, 3, d=4)
x.a = 10
print (x.a, x.b, x.c, x.d,)
y = GenericContainer(a=1, b=2, c=3)
y.set(3, 2, 1)
print (y.a, y.b, y.c,)
y.set(**dict(zip(('a', 'b', 'c'), (1, 2, 3))))
print (y.a, y.b, y.c,)
names = 'x', 'y', 'z'
y.zip(names, (4, 5, 6))
print (y.x, y.y, y.z,)
if __name__ == '__main__':
main()
```
Each instance of GenericContainer is an isolated namespace. IMHO it is better than messing with the local namespace even if you are programming under a pure procedural paradigm.
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
You should use a [**`dict`**](http://docs.python.org/library/stdtypes.html#mapping-types-dict):
```
>>> d = {"a": 1, "b": 2, "c": 3}
>>> d.update({"a": 8})
>>> print(d)
{"a": 8, "c": 3, "b": 2}
```
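If you still need the individual names afterwards, one option is to unpack them back out of the dict (the key order is spelled out explicitly):
```
>>> a, b, c = (d[k] for k in ("a", "b", "c"))
>>> a, b, c
(8, 2, 3)
```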
|
Python has such an elegant namespace system:
```
#!/usr/bin/env python
class GenericContainer(object):
def __init__(self, *args, **kwargs):
self._names = []
self._names.extend(args)
self.set(**kwargs)
def set(self, *args, **kwargs):
for i, value in enumerate(args):
self.__dict__[self._names[i]] = value
for name, value in kwargs.items():
if name not in self._names:
self._names.append(name)
self.__dict__[name] = value
def zip(self, names, values):
self.set(**dict(zip(names, values)))
def main():
x = GenericContainer('a', 'b', 'c')
x.set(1, 2, 3, d=4)
x.a = 10
print (x.a, x.b, x.c, x.d,)
y = GenericContainer(a=1, b=2, c=3)
y.set(3, 2, 1)
print (y.a, y.b, y.c,)
y.set(**dict(zip(('a', 'b', 'c'), (1, 2, 3))))
print (y.a, y.b, y.c,)
names = 'x', 'y', 'z'
y.zip(names, (4, 5, 6))
print (y.x, y.y, y.z,)
if __name__ == '__main__':
main()
```
Each instance of GenericContainer is an isolated namespace. IMHO it is better than messing with the local namespace even if you are programming under a pure procedural paradigm.
|
9,433,021
|
Is there a way to group names together in python, to repeatedly assign to them *en masse*?
While we can do:
```
a,b,c = (1,2,3)
```
I would like to be able to do something like:
```
names = a,b,c
*names = (3,2,1) # this syntax doesn't work
a,b,c == (3,2,1) #=> True
```
Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes?
The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like:
```
a = 1
b = 2
c = 3
sequence = (a,b,c)
```
|
2012/02/24
|
[
"https://Stackoverflow.com/questions/9433021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
You should go one level up in your data abstraction. You are not trying to access the entries by their individual names -- rather, you use `names` to denote the whole collection of values, so a simple list might be what you want.
If you want both, a name for the collection *and* names for the individual items, then a dictionary might be the way to go:
```
names = "a b c".split()
d = dict(zip(names, (1, 2, 3)))
d.update(zip(names, (3, 2, 1)))
```
If you need something like this repeatedly, you might want to define a class with the names as attributes:
```
class X(object):
def __init__(self, a, b, c):
self.update(a, b, c)
    def update(self, a, b, c):
self.a, self.b, self.c = a, b, c
x = X(1, 2, 3)
x.update(3, 2, 1)
print x.a, x.b, x.c
```
This reflects that you want to bind `a`, `b` and `c` into some common structure, but keep the option to access them individually by name.
|
Not sure whether this is what you want...
```
>>> a,b,c = (1,2,3)
>>> names = (a,b,c)
>>> names
(1, 2, 3)
>>> (a,b,c) == names
True
>>> (a,b,c) == (1,2,3)
True
```
|
40,367,569
|
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py:
>
> Traceback (most recent call last): File "setup.py", line 32, in <module>
>
>
> m.Extension.\_\_dict\_\_ = m.\_Extension.\_\_dict\_\_
>
>
> AttributeError: attribute '\_\_dict\_\_' of 'type' objects is not writable
>
>
>
This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>:
```
#Using Python 3.6 on Windows 10 (64-bit)
from setuptools import setup
#from distutils.extension import Extension
#^That line can be included or excluded without changing the error
import sys
if 'setuptools.extension' in sys.modules:
m = sys.modules['setuptools.extension']
m.Extension.__dict__ = m._Extension.__dict__
```
Other packages have had similar problems in the past (see arctic issue #17 on GitHub) and apparently fixed it by some Python magic which goes over my head (arctic's setup.py no longer includes the relevant lines).
Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
|
2016/11/01
|
[
"https://Stackoverflow.com/questions/40367569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2537443/"
] |
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)***
>
> NullPointerException is thrown when an application attempts to use an
> object reference that has the null value.
>
>
>
How can a NullPointerException occur only in a release APK?
------------------------------------------------------------
>
> When you prepare your application for release, you configure, build,
> and test a release version of your application. The configuration
> tasks are straightforward, involving basic code cleanup and code
> modification tasks that help optimize your application.
>
>
>
1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)**
2. set **`minifyEnabled false`**
You can customise your `build.gradle` like this
```
buildTypes {
debug {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
debuggable false
zipAlignEnabled true
jniDebuggable false
renderscriptDebuggable false
}
}
```
Make sure you are using a stable **support library and build tools** version.
```
compileSdkVersion 24
buildToolsVersion "24.0.2"
compile 'com.android.support:appcompat-v7:24.2.0'
compile 'com.android.support:design:24.2.0'
```
**Project Level**
```
classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2
```
**Then**
>
> On the main menu, choose File | Invalidate Caches/Restart. The
> Invalidate Caches message appears informing you that the caches will
> be invalidated and rebuilt on the next start. Use buttons in the
> dialog to invalidate caches, restart Android Studio .
>
>
>
**Note:** You can provide us your `build.gradle`. Also, disable the `"instant run"` facility.
|
```
android{
buildTypes{
release{
minifyEnabled false
}
}
}
```
Try this in your build.gradle.
Or
Try restarting your Android Studio as well as your computer. As is well known, Android Studio can occasionally misbehave.
|
40,367,569
|
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py:
>
> Traceback (most recent call last): File "setup.py", line 32, in <module>
>
>
> m.Extension.\_\_dict\_\_ = m.\_Extension.\_\_dict\_\_
>
>
> AttributeError: attribute '\_\_dict\_\_' of 'type' objects is not writable
>
>
>
This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>:
```
#Using Python 3.6 on Windows 10 (64-bit)
from setuptools import setup
#from distutils.extension import Extension
#^That line can be included or excluded without changing the error
import sys
if 'setuptools.extension' in sys.modules:
m = sys.modules['setuptools.extension']
m.Extension.__dict__ = m._Extension.__dict__
```
Other packages have had similar problems in the past (see arctic issue #17 on GitHub) and apparently fixed it by some Python magic which goes over my head (arctic's setup.py no longer includes the relevant lines).
Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
|
2016/11/01
|
[
"https://Stackoverflow.com/questions/40367569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2537443/"
] |
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)***
>
> NullPointerException is thrown when an application attempts to use an
> object reference that has the null value.
>
>
>
How can a NullPointerException occur only in a release APK?
------------------------------------------------------------
>
> When you prepare your application for release, you configure, build,
> and test a release version of your application. The configuration
> tasks are straightforward, involving basic code cleanup and code
> modification tasks that help optimize your application.
>
>
>
1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)**
2. set **`minifyEnabled false`**
You can customise your `build.gradle` like this
```
buildTypes {
debug {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
debuggable false
zipAlignEnabled true
jniDebuggable false
renderscriptDebuggable false
}
}
```
Make sure you are using a stable **support library and build tools** version.
```
compileSdkVersion 24
buildToolsVersion "24.0.2"
compile 'com.android.support:appcompat-v7:24.2.0'
compile 'com.android.support:design:24.2.0'
```
**Project Level**
```
classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2
```
**Then**
>
> On the main menu, choose File | Invalidate Caches/Restart. The
> Invalidate Caches message appears informing you that the caches will
> be invalidated and rebuilt on the next start. Use buttons in the
> dialog to invalidate caches, restart Android Studio .
>
>
>
**Note:** You can provide us your `build.gradle`. Also, disable the `"instant run"` facility.
|
If you are using ProGuard for the release build,
decrease your Gradle plugin version to 2.1.2:
```
classpath 'com.android.tools.build:gradle:2.1.2'
```
|
40,367,569
|
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py:
>
> Traceback (most recent call last): File "setup.py", line 32, in <module>
>
>
> m.Extension.\_\_dict\_\_ = m.\_Extension.\_\_dict\_\_
>
>
> AttributeError: attribute '\_\_dict\_\_' of 'type' objects is not writable
>
>
>
This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>:
```
#Using Python 3.6 on Windows 10 (64-bit)
from setuptools import setup
#from distutils.extension import Extension
#^That line can be included or excluded without changing the error
import sys
if 'setuptools.extension' in sys.modules:
m = sys.modules['setuptools.extension']
m.Extension.__dict__ = m._Extension.__dict__
```
Other packages have had similar problems in the past (see arctic issue #17 on GitHub) and apparently fixed it by some Python magic which goes over my head (arctic's setup.py no longer includes the relevant lines).
Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
|
2016/11/01
|
[
"https://Stackoverflow.com/questions/40367569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2537443/"
] |
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)***
>
> NullPointerException is thrown when an application attempts to use an
> object reference that has the null value.
>
>
>
How can a NullPointerException occur only in a release APK?
------------------------------------------------------------
>
> When you prepare your application for release, you configure, build,
> and test a release version of your application. The configuration
> tasks are straightforward, involving basic code cleanup and code
> modification tasks that help optimize your application.
>
>
>
1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)**
2. set **`minifyEnabled false`**
You can customise your `build.gradle` like this
```
buildTypes {
debug {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
debuggable false
zipAlignEnabled true
jniDebuggable false
renderscriptDebuggable false
}
}
```
Make sure you are using a stable **support library and build tools** version.
```
compileSdkVersion 24
buildToolsVersion "24.0.2"
compile 'com.android.support:appcompat-v7:24.2.0'
compile 'com.android.support:design:24.2.0'
```
**Project Level**
```
classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2
```
**Then**
>
> On the main menu, choose File | Invalidate Caches/Restart. The
> Invalidate Caches message appears informing you that the caches will
> be invalidated and rebuilt on the next start. Use buttons in the
> dialog to invalidate caches, restart Android Studio .
>
>
>
**Note:** You can provide us your `build.gradle`. Also, disable the `"instant run"` facility.
|
```
buildTypes {
release {
minifyEnabled false
shrinkResources false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
lintOptions {
abortOnError false
checkReleaseBuilds false
disable 'MissingTranslation'
}
```
Try this, or just clean and rebuild the project.
Or invalidate caches and restart from the File menu:
File >> Invalidate Caches / Restart
|
40,367,569
|
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py:
>
> Traceback (most recent call last): File "setup.py", line 32, in <module>
>
>
> m.Extension.\_\_dict\_\_ = m.\_Extension.\_\_dict\_\_
>
>
> AttributeError: attribute '\_\_dict\_\_' of 'type' objects is not writable
>
>
>
This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>:
```
#Using Python 3.6 on Windows 10 (64-bit)
from setuptools import setup
#from distutils.extension import Extension
#^That line can be included or excluded without changing the error
import sys
if 'setuptools.extension' in sys.modules:
m = sys.modules['setuptools.extension']
m.Extension.__dict__ = m._Extension.__dict__
```
Other packages have had similar problems in the past (see arctic issue #17 on GitHub) and apparently fixed it by some Python magic which goes over my head (arctic's setup.py no longer includes the relevant lines).
Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
|
2016/11/01
|
[
"https://Stackoverflow.com/questions/40367569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2537443/"
] |
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)***
>
> NullPointerException is thrown when an application attempts to use an
> object reference that has the null value.
>
>
>
How can a NullPointerException occur only in a release APK?
------------------------------------------------------------
>
> When you prepare your application for release, you configure, build,
> and test a release version of your application. The configuration
> tasks are straightforward, involving basic code cleanup and code
> modification tasks that help optimize your application.
>
>
>
1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)**
2. set **`minifyEnabled false`**
You can customise your `build.gradle` like this
```
buildTypes {
debug {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
debuggable false
zipAlignEnabled true
jniDebuggable false
renderscriptDebuggable false
}
}
```
Make sure you are using a stable **support library and build tools** version.
```
compileSdkVersion 24
buildToolsVersion "24.0.2"
compile 'com.android.support:appcompat-v7:24.2.0'
compile 'com.android.support:design:24.2.0'
```
**Project Level**
```
classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2
```
**Then**
>
> On the main menu, choose File | Invalidate Caches/Restart. The
> Invalidate Caches message appears informing you that the caches will
> be invalidated and rebuilt on the next start. Use buttons in the
> dialog to invalidate caches, restart Android Studio .
>
>
>
**Note:** You can provide us your `build.gradle`. Also, disable the `"instant run"` facility.
|
Here are some steps you can take to fix these types of errors and make sure your app doesn't crash on future platform updates:
* If your app uses private platform libraries, you should update it to include its own copy of those libraries or use the public NDK APIs.
* If your app uses a third-party library that accesses private symbols, contact the library author to update the library.
* Make sure you package all your non-NDK libraries with your APK.
* Use standard JNI functions instead of getJavaVM and getJNIEnv from libandroid\_runtime.so:
>
>
> ```
> AndroidRuntime::getJavaVM -> GetJavaVM from <jni.h>
> AndroidRuntime::getJNIEnv -> JavaVM::GetEnv or
> JavaVM::AttachCurrentThread from <jni.h>.
>
> ```
>
> Use \_\_system\_property\_get instead of the private property\_get symbol
> from libcutils.so. To do this, use \_\_system\_property\_get with the
> following include:
>
>
>
```
#include <sys/system_properties.h>
```
>
> Note: The availability and contents of system properties is not tested through CTS. A better fix would be to avoid using these properties altogether.
> Use a local version of the SSL\_ctrl symbol from libcrypto.so. For example, you should statically link libcrypto.a in your .so file, or
> include a dynamically linked version of libcrypto.so from
> BoringSSL/OpenSSL and package it in your APK.
>
>
>
|
40,367,569
|
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py:
>
> Traceback (most recent call last): File "setup.py", line 32, in <module>
>
>
> m.Extension.\_\_dict\_\_ = m.\_Extension.\_\_dict\_\_
>
>
> AttributeError: attribute '\_\_dict\_\_' of 'type' objects is not writable
>
>
>
This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>:
```
#Using Python 3.6 on Windows 10 (64-bit)
from setuptools import setup
#from distutils.extension import Extension
#^That line can be included or excluded without changing the error
import sys
if 'setuptools.extension' in sys.modules:
m = sys.modules['setuptools.extension']
m.Extension.__dict__ = m._Extension.__dict__
```
Other packages have had similar problems in the past (see arctic issue #17 on GitHub) and apparently fixed it by some Python magic which goes over my head (arctic's setup.py no longer includes the relevant lines).
Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
|
2016/11/01
|
[
"https://Stackoverflow.com/questions/40367569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2537443/"
] |
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)***
>
> NullPointerException is thrown when an application attempts to use an
> object reference that has the null value.
>
>
>
How can a NullPointerException occur only in a release APK?
------------------------------------------------------------
>
> When you prepare your application for release, you configure, build,
> and test a release version of your application. The configuration
> tasks are straightforward, involving basic code cleanup and code
> modification tasks that help optimize your application.
>
>
>
1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)**
2. set **`minifyEnabled false`**
You can customise your `build.gradle` like this
```
buildTypes {
debug {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
debuggable false
zipAlignEnabled true
jniDebuggable false
renderscriptDebuggable false
}
}
```
Make sure you are using a stable **support library and build tools** version.
```
compileSdkVersion 24
buildToolsVersion "24.0.2"
compile 'com.android.support:appcompat-v7:24.2.0'
compile 'com.android.support:design:24.2.0'
```
**Project Level**
```
classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2
```
**Then**
>
> On the main menu, choose File | Invalidate Caches/Restart. The
> Invalidate Caches message appears informing you that the caches will
> be invalidated and rebuilt on the next start. Use buttons in the
> dialog to invalidate caches, restart Android Studio .
>
>
>
**Note:** You can provide us your `build.gradle`. Also, disable the `"instant run"` facility.
|
According to the Android API reference - [Android Developer API Reference](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html):
***If your layout configures more than one drawer view per vertical edge of the window, an exception will be thrown at runtime.*** I suspect your drawer layout is incorrect. Also check that the layout gravity of the drawer is set to "start":
```
android:layout_gravity="start"
```
|
55,373,867
|
I have very basic producer-consumer code written with the pika framework in Python. The problem is that the consumer side runs too slowly on messages in the queue. I ran some tests and found out that I can speed up the workflow up to 27 times with multiprocessing. The problem is that I don't know the right way to add multiprocessing functionality to my code.
```py
import pika
import json
from datetime import datetime
from functions import download_xmls
def callback(ch, method, properties, body):
print('Got something')
body = json.loads(body)
type = body[-1]['Type']
print('Object type in work currently ' + type)
cnums = [x['cadnum'] for x in body[:-1]]
print('Got {} cnums to work with'.format(len(cnums)))
date_start = datetime.now()
download_xmls(type,cnums)
date_end = datetime.now()
ch.basic_ack(delivery_tag=method.delivery_tag)
print('Download complete in {} seconds'.format((date_end-date_start).total_seconds()))
def consume(queue_name = 'bot-test'):
parameters = pika.URLParameters('server@address')
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue=queue_name, durable=True)
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='bot-test')
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
```
How do I start with adding multiprocessing functionality from here?
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55373867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7047471/"
] |
Pika has extensive [example code](https://github.com/pika/pika/blob/0.13.1/examples/basic_consumer_threaded.py) that I recommend you check out. Note that this code is for **example** use only. In the case of doing work on threads, you will have to use a more intelligent way to manage your threads.
The goal is to not block the thread that runs Pika's IO loop, and to call back into the IO loop correctly from your worker threads. That's why `add_callback_threadsafe` exists and is used in that code.
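A trimmed sketch of that pattern, assuming pika 1.x and a `BlockingConnection` (the `do_work` name and the heavy-job placeholder are illustrative, not part of the question's code):
```py
import functools
import threading

def do_work(connection, channel, delivery_tag, body):
    # ... the long-running job goes here ...
    # The ack must happen on the connection's IO-loop thread, so schedule it:
    cb = functools.partial(channel.basic_ack, delivery_tag=delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(channel, method, properties, body, connection):
    # Hand the work to a thread so the IO loop keeps running.
    t = threading.Thread(target=do_work,
                         args=(connection, channel, method.delivery_tag, body))
    t.start()

# Bind the connection in before registering the consumer, e.g.:
# channel.basic_consume('bot-test', functools.partial(on_message, connection=connection))
```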
---
**NOTE:** the RabbitMQ team monitors the `rabbitmq-users` [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users) and only sometimes answers questions on StackOverflow.
|
```
import pika
import json
import multiprocessing
import concurrent.futures
from multiprocessing import Process
from datetime import datetime
from functions import download_xmls

def do_job(body):
    body = json.loads(body)
    obj_type = body[-1]['Type']
    print('Object type in work currently ' + obj_type)
    cnums = [x['cadnum'] for x in body[:-1]]
    print('Got {} cnums to work with'.format(len(cnums)))
    date_start = datetime.now()
    download_xmls(obj_type, cnums)
    date_end = datetime.now()
    print('Download complete in {} seconds'.format((date_end - date_start).total_seconds()))

def callback(ch, method, properties, body):
    print('Got something')
    p = Process(target=do_job, args=(body,))  # note the trailing comma: args must be a tuple
    p.start()
    p.join()
    # Ack from the consuming process once the worker has finished;
    # the channel object cannot be passed to a child process.
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume(queue_name='bot-test'):
    parameters = pika.URLParameters('server@address')
    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()
    channel.queue_declare(queue=queue_name, durable=True)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(callback, queue=queue_name)
    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

def get_workers():
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        return 4

workers = get_workers()
with concurrent.futures.ProcessPoolExecutor() as executor:
    for i in range(workers):
        executor.submit(consume)
```
The above is just a simple demo of how you can add multiprocessing execution here. I recommend going through the documentation to further optimise the code and achieve what you require.
<https://docs.python.org/3/library/multiprocessing.html#the-process-class>
|
42,740,284
|
I have a question that I am having a hard time understanding what the code might look like, so I will explain as best I can. I am trying to view and search for a NUL byte and replace it with another NUL-type byte, but the computer needs to be able to tell the difference between the different NUL bytes. An example would be: hex code 00 equals NUL and hex code 01 equals SOH. Let's say I wanted to create code to replace those with each other. Code example:
```
TextFile1 = Line.Replace('NUL','SOH')
TextFile2.write(TextFile1)
```
Yes, I have read a LOT of different posts just trying to understand how to put it into working code. The first problem is I can't just copy and paste the output of hex 00 into the Python module; it just won't paste. Reading on that shows 0x00-style formats are used to represent it, but I'm having issues finding the correct representation for Python 3.x.
```
Print (\x00)
output = nothing shows #I'm trying to get output of 'NUL' or as hex would show '.' either works fine --Edited
```
So how do I get the module to understand that I'm trying to represent hex 00 or 'NUL' and display it as '.', and do the same for SOH? This is not limited to just those NUL-type characters; I am using those as examples, but I want to use all 256 hex characters and be able to tell the difference when pasting into another program, just like a hex editor would. Maybe I need to get the two programs on the same encoding type, I'm not really sure. I just need a very simple example of how I would search for non-representable hexadecimal characters and replace them in Notepad or Notepad++. From what I have read, only Notepad++ has the ability to do so.
|
2017/03/11
|
[
"https://Stackoverflow.com/questions/42740284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7620511/"
] |
If you are on Python 3, you should really work with `bytes` objects. Python 3 strings are sequences of unicode code points. To work with byte-strings, use `bytes` (which is pretty much the same as a Python 2 string, which used the "sequence of bytes" model).
```
>>> bytes([97, 98, 99])
b'abc'
>>>
```
Note, to write a `bytes` literal, prepend a `b` before the opening quote in your string.
To answer your question, to find the representation of `0x00` and `0x01` just look at:
```
>>> bytes([0x00, 0x01])
b'\x00\x01'
```
Note, `0x00` and `0` are the same type, they are just different literal syntaxes (hex literal versus decimal literal).
```
>>> bytes([0, 1])
b'\x00\x01'
```
I have no idea what you mean with regards to Notepad++.
Here is an example, though, of replacing a null byte with something else:
```
>>> byte_string = bytes([97, 98, 0, 99])
>>> byte_string
b'ab\x00c'
>>> print(byte_string)
b'ab\x00c'
>>> byte_string.replace(b'\x00', b'NONE')
b'abNONEc'
>>> print(byte_string.replace(b'\x00', b'NONE'))
b'abNONEc'
```
|
Another equivalent way to get the value of `\x00` in Python is `chr(0)`; I like that way a little better than the literal versions.
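A quick sketch of that, including the hop over to a `bytes` value:
```
>>> chr(0)
'\x00'
>>> chr(0).encode()
b'\x00'
>>> bytes([0]) == chr(0).encode()
True
```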
|
33,697,263
|
I am trying to install snap7 (to read from an S7-1200) with its python-snap7 0.4 wrapper, but I always get a traceback with the following simple code.
```
from time import sleep
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
```
Traceback:
```
>>>
Traceback (most recent call last):
File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
plc = snap7.client.Client()
File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
self.library = load_library()
File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
return Snap7Library(lib_location).cdll
File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```
The steps I take to install snap7 and the Python wrapper are:
1. Download snap7 from SourceForge and copy snap7.dll and snap7.lib to the system32 folder of Windows 8
2. Install the wrapper by using pip install python-snap7
How do I install snap7 on Windows correctly?
[log of pip install][1]
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4801693/"
] |
After some trial-and-error experiments and with some info from developers involved with snap7, I fixed the problem. The folder where the snap7.dll and .lib files are located must be present in the environment variables of Windows. Alternatively, you can copy the files to the Python install dir if you checked the "add path" option in the Python installer.
See the picture for details: Edit Environment Vars
[edit environment vars](http://i.stack.imgur.com/mwaLI.png)
To give a good starting point for everyone who is a greenhorn like me, here is a minimal snap7 tutorial to read variables of a DB from an S7 1212C PLC with Python 3:
```
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
plc.connect("10.112.115.10",0,1)
#---Read DB---
db = plc.db_read(1234,0,14)
real = struct.iter_unpack("!f",db[:12] )
print( "3 x Real Vars:", [f for f, in real] )
print( "3 x Bool Vars:", db[12]&1==1, db[12]&2==2, db[12]&4==4 )
plc.disconnect()
```
IP and Subnetmask
-----------------
The IP of the PLC must be in the range given by the subnet mask of the PC LAN device. If the IP of the LAN device is 10.112.115.1 and the subnet mask is 255.255.255.0, this gives you a range of 10.112.115.2 to 10.112.115.254 for your PLC (10.112.115.255 is the broadcast address). Every PLC IP outside this range will give you an "Unreachable peer" error.
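To double-check the usable host range, here is a small sketch with the standard `ipaddress` module (the addresses are the ones from the example above):
```
import ipaddress

net = ipaddress.ip_network("10.112.115.0/24")  # 10.112.115.1 with mask 255.255.255.0
hosts = list(net.hosts())
print(hosts[0], "-", hosts[-1])  # 10.112.115.1 - 10.112.115.254; .1 is the PC here
```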
Firewall
--------
Make sure that your firewall allows communication between your PC and the PLC.
PLC Data Location
-----------------
If you are unfamiliar with STEP 7 / TIA Portal, look for the "Online Diagnostics" button and see the pictures to find the location of your data.
[DB Number and Variable Offsets](http://i.stack.imgur.com/X0zXH.png)
PLC Configuration
-----------------
Besides a PLC program that uses the variables you want to read, the PLC needs no additional parts to communicate with snap7. The services that are needed to communicate with snap7 are started by the firmware on power-on.
|
Try this:
Search the snap7 folder for the snap7.dll and snap7.lib files.
Copy snap7.dll and snap7.lib into the "C:/PythonXX/site-packages/snap7" directory and run your code again. You can figure this out from the common.py file in the same directory.
|
33,697,263
|
I am trying to install snap7 (to read from an S7-1200) with its python-snap7 0.4 wrapper, but I always get a traceback with the following simple code.
```
from time import sleep
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
```
Traceback:
```
>>>
Traceback (most recent call last):
File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
plc = snap7.client.Client()
File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
self.library = load_library()
File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
return Snap7Library(lib_location).cdll
File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```
The steps I take to install snap7 and the Python wrapper are:
1. Download snap7 from SourceForge and copy snap7.dll and snap7.lib to the system32 folder of Windows 8
2. Install the wrapper by using pip install python-snap7
How do I install snap7 on Windows correctly?
[log of pip install][1]
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4801693/"
] |
After some trial-and-error experiments and with some info from developers involved with snap7, I fixed the problem. The folder where the snap7.dll and .lib files are located must be present in the environment variables of Windows. Alternatively, you can copy the files to the Python install dir if you checked the "add path" option in the Python installer.
See the picture for details: Edit Environment Vars
[edit environment vars](http://i.stack.imgur.com/mwaLI.png)
To give a good starting point for everyone who is a greenhorn like me, here is a minimal snap7 tutorial to read variables of a DB from an S7 1212C PLC with Python 3:
```
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
plc.connect("10.112.115.10",0,1)
#---Read DB---
db = plc.db_read(1234,0,14)
real = struct.iter_unpack("!f",db[:12] )
print( "3 x Real Vars:", [f for f, in real] )
print( "3 x Bool Vars:", db[12]&1==1, db[12]&2==2, db[12]&4==4 )
plc.disconnect()
```
IP and Subnetmask
-----------------
The IP of the PLC must be in the range given by the subnet mask of the PC LAN device. If the IP of the LAN device is 10.112.115.1 and the subnet mask is 255.255.255.0, this gives you a range of 10.112.115.2 to 10.112.115.254 for your PLC (10.112.115.255 is the broadcast address). Every PLC IP outside this range will give you an "Unreachable peer" error.
Firewall
--------
Make sure that your firewall allows communication between your PC and the PLC.
PLC Data Location
-----------------
If you are unfamiliar with STEP 7 / TIA Portal, look for the "Online Diagnostics" button and see the pictures to find the location of your data.
[DB Number and Variable Offsets](http://i.stack.imgur.com/X0zXH.png)
PLC Configuration
-----------------
Besides a PLC program that uses the variables you want to read, the PLC needs no additional parts to communicate with snap7. The services that are needed to communicate with snap7 are started by the firmware on power-on.
|
The latest setup to use snap7 looks as follows for me:
* install snap7 for Python with pip on the command line via "pip install python-snap7"
* download the latest snap7 package from [sourceforge](https://sourceforge.net/projects/snap7/files/)
* copy the 32- or 64-bit version to any folder, for example your project folder
* do an import snap7 in your Python program
* temporarily edit your environment variables in your Python program, as shown below
```
#---Temporarily change the PATH environment variable for snap7.dll---
import os
snapPath = "C:/snap7"  # hypothetical path: the folder that contains snap7.dll
if not snapPath in os.environ["PATH"]:
    os.environ["PATH"] = os.environ["PATH"] + ";" + snapPath.replace("/","\\")
```
Spaces in the path are allowed. This also works well if you create an installer, for example with cx_Freeze.
|
33,697,263
|
I am trying to install snap7 (to read from an S7-1200) with its python-snap7 0.4 wrapper, but I always get a traceback with the following simple code.
```
from time import sleep
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
```
Traceback:
```
>>>
Traceback (most recent call last):
File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
plc = snap7.client.Client()
File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
self.library = load_library()
File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
return Snap7Library(lib_location).cdll
File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```
The steps I take to install snap7 and the Python wrapper are:
1. Download snap7 from SourceForge and copy snap7.dll and snap7.lib to the system32 folder of Windows 8
2. Install the wrapper by using pip install python-snap7
How do I install snap7 on Windows correctly?
[log of pip install][1]
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4801693/"
] |
After some trial-and-error experiments and with some info from developers involved with snap7, I fixed the problem. The folder where the snap7.dll and .lib files are located must be present in the environment variables of Windows. Alternatively, you can copy the files to the Python install dir if you checked the "add path" option in the Python installer.
See the picture for details: Edit Environment Vars
[edit environment vars](http://i.stack.imgur.com/mwaLI.png)
To give a good starting point for everyone who is a greenhorn like me, here is a minimal snap7 tutorial to read variables of a DB from an S7 1212C PLC with Python 3:
```
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
plc.connect("10.112.115.10",0,1)
#---Read DB---
db = plc.db_read(1234,0,14)
real = struct.iter_unpack("!f",db[:12] )
print( "3 x Real Vars:", [f for f, in real] )
print( "3 x Bool Vars:", db[12]&1==1, db[12]&2==2, db[12]&4==4 )
plc.disconnect()
```
IP and Subnetmask
-----------------
The IP of the PLC must be in the range given by the subnet mask of the PC LAN device. If the IP of the LAN device is 10.112.115.1 and the subnet mask is 255.255.255.0, this gives you a range of 10.112.115.2 to 10.112.115.254 for your PLC (10.112.115.255 is the broadcast address). Every PLC IP outside this range will give you an "Unreachable peer" error.
Firewall
--------
Make sure that your firewall allows communication between your PC and the PLC.
PLC Data Location
-----------------
If you are unfamiliar with STEP 7 / TIA Portal, look for the "Online Diagnostics" button and see the pictures to find the location of your data.
[DB Number and Variable Offsets](http://i.stack.imgur.com/X0zXH.png)
PLC Configuration
-----------------
Besides a PLC program that uses the variables you want to read, the PLC needs no additional parts to communicate with snap7. The services that are needed to communicate with snap7 are started by the firmware on power-on.
|
**Copy** `snap7.dll and snap7.lib` **from** `"\snap7-full-1.2.1\release\Windows\Win64"` and **paste** them into the `"C:\snap7-full-1.2.1\release\Windows\Win64"` folder.
Then `import snap7` works, but it gives an error in the next step:
snap7.client.Client() -> AttributeError: module 'snap7' has no attribute 'client'
I used the "https://github.com/gijzelaerr/python-snap7" project; it is working.
|
33,697,263
|
I am trying to install snap7 (to read from an S7-1200) with its python-snap7 0.4 wrapper, but I always get a traceback with the following simple code.
```
from time import sleep
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
```
Traceback:
```
>>>
Traceback (most recent call last):
File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
plc = snap7.client.Client()
File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
self.library = load_library()
File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
return Snap7Library(lib_location).cdll
File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```
The steps I take to install snap7 and the Python wrapper are:
1. Download snap7 from SourceForge and copy snap7.dll and snap7.lib to the system32 folder of Windows 8
2. Install the wrapper by using pip install python-snap7
How do I install snap7 on Windows correctly?
[log of pip install][1]
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4801693/"
] |
**Copy** `snap7.dll and snap7.lib` **from** `"\snap7-full-1.2.1\release\Windows\Win64"` and **paste** them into the `"C:\snap7-full-1.2.1\release\Windows\Win64"` folder.
Then `import snap7` works, but it gives an error in the next step:
snap7.client.Client() -> AttributeError: module 'snap7' has no attribute 'client'
I used the "https://github.com/gijzelaerr/python-snap7" project; it is working.
|
Try this:
Search the snap7 folder for the snap7.dll and snap7.lib files.
Copy snap7.dll and snap7.lib into the "C:/PythonXX/site-packages/snap7" directory and run your code again. You can figure this out from the common.py file in the same directory.
|
33,697,263
|
i try to install snap7 (to read from a S7-1200) with it's python-snap7 0.4 wrapper but i get always a traceback with the following simple code.
```
from time import sleep
import snap7
from snap7.util import *
import struct
plc = snap7.client.Client()
```
Traceback:
```
>>>
Traceback (most recent call last):
File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
plc = snap7.client.Client()
File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
self.library = load_library()
File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
return Snap7Library(lib_location).cdll
File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```
The steps i do to install snap7 and python wrapper are:
1. Download snap7 from sourceforge and copy snap7.dll and snap7.lib to system32 folder of windows 8
2. Install wrapper by using pip install python-snap7
How to install snap7 on windows correctly ?
[log of pip install][1]
|
2015/11/13
|
[
"https://Stackoverflow.com/questions/33697263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4801693/"
] |
**Copy** `snap7.dll and snap7.lib` **from** `"\snap7-full-1.2.1\release\Windows\Win64"` and **paste** them into the `"C:\snap7-full-1.2.1\release\Windows\Win64"` folder.
Then `import snap7` works, but it gives an error in the next step:
snap7.client.Client() -> AttributeError: module 'snap7' has no attribute 'client'
I used the "https://github.com/gijzelaerr/python-snap7" project; it is working.
|
The latest setup to use snap7 looks as follows for me:
* install snap7 for Python with pip on the command line via "pip install python-snap7"
* download the latest snap7 package from [sourceforge](https://sourceforge.net/projects/snap7/files/)
* copy the 32- or 64-bit version to any folder, for example your project folder
* do an import snap7 in your Python program
* temporarily edit your environment variables in your Python program, as shown below
```
#---Temporarily change the PATH environment variable for snap7.dll---
import os
snapPath = "C:/snap7"  # hypothetical path: the folder that contains snap7.dll
if not snapPath in os.environ["PATH"]:
    os.environ["PATH"] = os.environ["PATH"] + ";" + snapPath.replace("/","\\")
```
Spaces in the path are allowed. This also works well if you create an installer, for example with cx_Freeze.
|
39,457,209
|
I am trying to do some white blob detection using OpenCV. But my script failed to detect the big white block, which is my goal, while some small blobs are detected. I am new to OpenCV; am I doing something wrong when using SimpleBlobDetector in OpenCV? [Solved partially, please read below]
And here is the script:
```
#!/usr/bin/python
# Standard imports
import cv2
import numpy as np;
from matplotlib import pyplot as plt
# Read image
im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE)
imfiltered = cv2.inRange(im,255,255)
#OPENING
kernel = np.ones((5,5))
opening = cv2.morphologyEx(imfiltered,cv2.MORPH_OPEN,kernel)
#write out the filtered image
cv2.imwrite('colorfiltered.jpg',opening)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
params.blobColor= 255
params.filterByColor = True
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(opening)
# Draw detected blobs as green circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures
# the size of the circle corresponds to the size of blob
print str(keypoints)
im_with_keypoints = cv2.drawKeypoints(opening, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show blobs
##cv2.imshow("Keypoints", im_with_keypoints)
cv2.imwrite('Keypoints.jpg',im_with_keypoints)
cv2.waitKey(0)
```
**EDIT**:
By adding a bigger maximum area value, I am able to identify a big blob, but my end goal is to identify whether the big white rectangle exists or not. And the white blob detection I did returns not only the rectangle but the surrounding areas as well. [This part solved]
**EDIT 2:**
Based on the answer from @PSchn, I updated my code to apply the logic: first set the color filter to only get the white pixels and then remove the noise points using opening. It works for the sample data and I can successfully get the keypoint after blob detection.
[](https://i.stack.imgur.com/U2TRP.jpg)
|
2016/09/12
|
[
"https://Stackoverflow.com/questions/39457209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6779632/"
] |
If you just want to detect the white rectangle, you can try to set a higher threshold, e.g. 253, erase small objects with an opening, and take the biggest blob. I first smoothed your image, then thresholded it:
[](https://i.stack.imgur.com/UrrBT.png)
and the opening:
[](https://i.stack.imgur.com/LNEBf.png)
Now you just have to use `findContours` and take the `boundingRect`. If your rectangle is always that white, it should work. If you go lower than 251 with your threshold, the other small blobs will appear and your region merges with them, like here:
[](https://i.stack.imgur.com/GB6iM.png)
Then you could still do an opening several times and you get this:
[](https://i.stack.imgur.com/oKWJT.png)
But I don't think that it is the fastest idea ;)
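A hedged sketch of that pipeline (the threshold value and kernel size are illustrative, and the `[-2]` indexing papers over `findContours` returning 2 or 3 values depending on the OpenCV version):
```
import cv2
import numpy as np

im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE)
im = cv2.GaussianBlur(im, (5, 5), 0)                      # smooth first
_, mask = cv2.threshold(im, 253, 255, cv2.THRESH_BINARY)  # keep near-white pixels only
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # erase the small specks

# Take the contour list from the end of the returned tuple for version safety.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    biggest = max(contours, key=cv2.contourArea)          # the big white blob
    x, y, w, h = cv2.boundingRect(biggest)
    print('white rectangle at', x, y, w, h)
```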
|
You could try setting params.maxArea to something obnoxiously large (somewhere in the tens of thousands): the default may be lower than the area of the rectangle you're trying to detect. Also, I don't know how true this is, but I've heard that detection by color has a logic error, so it may be worth disabling it in case that is causing problems (this has probably been fixed in later versions, but it could still be worth a try).
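For instance, a minimal sketch of those two tweaks (the exact numbers are illustrative):
```
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = False   # sidestep the rumoured colour-filter issue
params.filterByArea = True
params.minArea = 100
params.maxArea = 100000        # large enough to admit the big rectangle
```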
|
68,010,585
|
I can edit Python code in a folder located in a Docker volume. I use Visual Studio Code, and in general it works fine.
The only problem that I have is that the libraries (such as pandas and numpy) are not installed in the container that Visual Studio creates to mount the volume, so I get warning errors.
How do I install these libraries in the Visual Studio Code container?
\*\* UPDATE \*\*
This is my application `Dockerfile`; note that the libraries are included in the image, not the volume:
```
FROM daskdev/dask
RUN /opt/conda/bin/conda create -p /pyenv -y
RUN /opt/conda/bin/conda install -p /pyenv scikit-learn flask waitress gunicorn \
pytest apscheduler matplotlib pyodbc -y
RUN /opt/conda/bin/conda install -p /pyenv -c conda-forge dask-ml pyarrow -y
RUN /opt/conda/bin/conda install -p /pyenv pip -y
RUN /pyenv/bin/pip install pydrill
```
And the application is started with `docker compose`:
```
version: '3'
services:
web:
image: img-python
container_name: cont_flask
volumes:
- vol_py_code:/code
ports:
- "5000:5000"
working_dir: /code
entrypoint:
- /pyenv/bin/gunicorn
command:
- -b 0.0.0.0:5000
- --reload
- app.frontend.app:app
```
|
2021/06/16
|
[
"https://Stackoverflow.com/questions/68010585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1362485/"
] |
You can do it this way: put an `img` inside a `div` tag and use `text-align: center`. There are many ways you can do this.
```css
.fotos-block{
text-align: center;
}
```
```html
<div class="fotos-block">
<img src = "https://www.imagemhost.com.br/images/2021/06/13/mail.png" class="fotos" id="foto1f">
</div>
```
|
And you can also use this approach to center the `img`:
```css
.fotos{
display: block;
margin: auto;
text-align: center;
}
```
|
69,726,911
|
Within this `for` loop, I need to return only the values equal to or less than 6 in each column.
```
colunas = list(df2.columns[8:19])
colunas
['Satisfação geral',
'Comunicação',
'Expertise da industria',
'Inovação',
'Parceira',
'Proatividade',
'Qualidade',
'responsividade',
'Pessoas',
'Expertise técnico',
'Pontualidade']
lista = []
for coluna in colunas:
nome_coluna = coluna
#total_parcial = df2[coluna].count()
df2.loc[df2[coluna]<=6].shape[0]
percentual = df2[coluna].count() / df2[coluna].count()
lista.append([nome_coluna,total_parcial,percentual])
df_new = pd.DataFrame(data=lista, columns=['nome_coluna','total_parcial','percentual'])
```
But returns the error
```
TypeError Traceback (most recent call last)
<ipython-input-120-364994f742fd> in <module>()
4 nome_coluna = coluna
5 #total_parcial = df2[coluna].count()
----> 6 df2.loc[df2[coluna]<=6].shape[0]
7 percentual = df2[coluna].count() / df2[coluna].count()
8 lista.append([nome_coluna,total_parcial,percentual])
3 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/ops/array_ops.py in comp_method_OBJECT_ARRAY(op, x, y)
54 result = libops.vec_compare(x.ravel(), y.ravel(), op)
55 else:
---> 56 result = libops.scalar_compare(x.ravel(), y, op)
57 return result.reshape(x.shape)
58
pandas/_libs/ops.pyx in pandas._libs.ops.scalar_compare()
TypeError: '<=' not supported between instances of 'str' and 'int'
```
If I put the code that is giving the error alone in a line it works
```
df2.loc[df2['Pontualidade'] <= 6].shape[0]
1537
```
What is the correct syntax?
Thanks
|
2021/10/26
|
[
"https://Stackoverflow.com/questions/69726911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17161157/"
] |
I found the solution: it seems AWS uses the term "Subnet Group" in multiple services. I created the group in the "ElastiCache" service, but it needs to be created in the "DocumentDB" service (see screenshot below).
[](https://i.stack.imgur.com/NGPT3.png)
|
I had a similar issue. Before you create the cluster, you need to have a Security Group set up, and there you should be able to change the VPC selected by default.
[](https://i.stack.imgur.com/lsPTl.png)
Additional info [here](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html)
|
68,736,258
|
We successfully trained a TensorFlow model based on five climate features and one binary (0 or 1) label. We want an output for an outside input of five new climate variable values that will be passed into model.predict(). However, we got an error when we tried to input an array of five values. Thanks in advance!
```
def split_dataset(dataset, test_ratio=0.10):
"""Splits a panda dataframe in two."""
test_indices = np.random.rand(len(dataset)) < test_ratio
return dataset[~test_indices], dataset[test_indices]
train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
len(train_ds_pd), len(test_ds_pd)))
label = "Presence"
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label)
model_1 = tfdf.keras.RandomForestModel()
model_1.compile(
metrics=["accuracy"])
with sys_pipes():
model_1.fit(x=train_ds)
evaluation = model_1.evaluate(test_ds, return_dict=True)
print()
for name, value in evaluation.items():
print(f"{name}: {value:.4f}")
model_1.save("tfmodelmosquito")
import numpy as np
model_1=tf.keras.models.load_model ("tfmodelmosquito")
import pandas as pd
prediction = model_1.predict([9.0, 10.0, 11.0, 12.0, 13.0])
print (prediction)
```
Error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-67-be5f2b7bc739> in <module>()
3 import pandas as pd
4
----> 5 prediction = model.predict([[9.0,10.0,11.0,12.0,13.0]])
6 print (prediction)
9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
984 except Exception as e: # pylint:disable=broad-except
985 if hasattr(e, "ag_error_metadata"):
--> 986 raise e.ag_error_metadata.to_exception(e)
987 else:
988 raise
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1569 predict_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1559 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1285 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1552 run_step **
outputs = model.predict_step(data)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1525 predict_step
return self(x, training=False)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1030 __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:69 return_outputs_and_add_losses
outputs, losses = fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:167 wrap_with_training_arg
lambda: replace_training_and_call(False))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/control_flow_util.py:110 smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/smart_cond.py:56 smart_cond
return false_fn()
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:167 <lambda>
lambda: replace_training_and_call(False))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:163 replace_training_and_call
return wrapped_call(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:889 __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:933 _call
self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:764 _initialize
*args, **kwds))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3050 _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3444 _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3289 _create_graph_function
capture_by_value=self._capture_by_value),
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:999 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:672 wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_deserialization.py:291 restored_function_body
"\n\n".join(signature_descriptions)))
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (2 total):
* Tensor("inputs:0", shape=(None, 5), dtype=float32)
* False
Keyword arguments: {}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (2 total):
* {'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Humidity'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Cloud_Cover'), 'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Temperature'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Pressure'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Precipitation')}
* False
Keyword arguments: {}
Option 2:
Positional arguments (2 total):
* {'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Temperature'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Precipitation'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Cloud_Cover'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Humidity'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Pressure')}
* True
Keyword arguments: {}
Option 3:
Positional arguments (2 total):
* {'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Cloud_Cover'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Humidity'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Precipitation'), 'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Temperature'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Pressure')}
* False
Keyword arguments: {}
Option 4:
Positional arguments (2 total):
* {'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Temperature'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Precipitation'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Humidity'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Cloud_Cover'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Pressure')}
* True
Keyword arguments: {}
```
|
2021/08/11
|
[
"https://Stackoverflow.com/questions/68736258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16637888/"
] |
This is because of the non-blocking, asynchronous nature of the `con.query()` function call. It starts the asynchronous operation and then executes the lines of code after it. Then, some time LATER, it calls its callback. So, in this code of yours with my logging added:
```
router.post('/login', (req, res) => {
    con.query("SELECT id, username, password FROM authorizedusers;", (err, result) => {
        if (err) throw err;
        for (var i = 0; i < result.length; i++) {
            if (req.body.username === result[i].username && req.body.password === result[i].password) {
                con.query(`SELECT id FROM students WHERE user_id = ${result[i].id};`, (err, result) => {
                    if (err) throw err;
                    console.log("got student_id");
                    req.session.student_id = result[0].id;
                });
                req.session.is_logged_in = true;
                req.session.user_id = result[i].id;
                console.log("redirecting and finishing request");
                return res.redirect('/');
            }
        }
        return res.render('login', {
            msg: "Error! Invalid Credentials!"
        });
    });
});
```
You would get this logging:
```
redirecting and finishing request
got student_id
```
So, you finished the request BEFORE you got the `student_id` and set it into the session. Thus, when you go to immediately try to use the data from the session, it isn't there yet.
---
This is not a particularly easy problem to solve without promises because you have an older-style asynchronous call inside a `for` loop. It becomes much easier with the promise interface for your SQL library: you can sequence things and use `await` to run only one query at a time. Your current loop starts all the queries in parallel, which makes it hard to pick the one winner and stop everything else.
If you switch to `mysql2` and then use the promise version:
```
const mysql = require('mysql2/promise');
```
Then, you can do something like this (I'm no expert on mysql2, but hopefully you can see the idea for how you use a promise interface to solve your problem):
```
router.post('/login', async (req, res) => {
    try {
        const [rows, fields] = await con.query("SELECT id, username, password FROM authorizedusers;");
        for (let i = 0; i < rows.length; i++) {
            if (req.body.username === rows[i].username && req.body.password === rows[i].password) {
                let [students, fields] = await con.query(`SELECT id FROM students WHERE user_id = ${rows[i].id};`);
                req.session.student_id = students[0].id;
                req.session.is_logged_in = true;
                req.session.user_id = rows[i].id;
                return res.redirect('/');
            }
        }
        return res.render('login', { msg: "Error! Invalid Credentials!" });
    } catch (e) {
        console.log(e);
        res.sendStatus(500);
    }
});
```
Your existing implementation has other deficiencies too that are corrected with this code:
1. The loop variable declared with `var` will never be correct inside the asynchronous callback function.
2. Your `if (err) throw err` error handling is insufficient. You need to capture the error, log it and send an error response when you get an error in a response handler.
3. You will always call `res.render()` before any of the database calls in the loop complete.
|
Sometimes the Express session is only saved once the outer handler function has finished.
In that situation, if you want to save your session from within a new async function, you should add the `next` parameter to your handler.
Then use it as the callback function that saves your session.
It should look like this:
```js
router.post('/login', (req, res, next) => {
    con.query(...,
        (err, result) => {
            for (...) {
                if (...) {
                    res.redirect('/');
                    next();
                    return;
                }
            }
            res.render('login', {
                msg: "Error! Invalid Credentials!"
            });
            next();
            return;
        });
});
```
Before doing this, make sure your code otherwise executes correctly.
|
55,392,952
|
I have a Python script that runs selenium webdriver and executes the following steps:
1) Execute a for loop that runs for x number of times
2) Within the main for loop, selenium web driver finds buttons on the page using xpath
3) For each button found by selenium, the nested for loop clicks each button
4) Once a button is clicked, a popup window opens that redirects to random websites within the popup
5) Further, the selenium webdriver finds other buttons within the popup and clicks the button, closes the popup and returns to the main window to click the second button on the main website
This code works fine while executing, but the issue occurs when selenium exceptions happen:
1) If the popup window has a blank page, then a selenium exception occurs, but the code written for that exception is not executed
2) If the popup is closed by the main website after a timeout (not closed by selenium webdriver), then NoSuchWindowException occurs, but the code under this exception never executes
I have tried changing the code several times by adding if/else conditions, but I am not able to resolve the NoSuchWindowException exception.
Below is the code:
```
for _ in range(100):
    print("main loop pass")
    fb_buttons = driver.find_elements_by_xpath('//a[contains(@class,"pages_button profile_view")]')
    for button in fb_buttons:
        try:
            time.sleep(10)
            button.click()
            driver.implicitly_wait(5)
            driver.switch_to.window(driver.window_handles[1])
            driver.execute_script("window.scrollTo(0, 2500)")
            print("window scrolled")
            like_right = driver.find_elements_by_xpath(
                "/html[1]/body[1]/div[1]/div[1]/div[4]/div[1]/div[1]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[3]/div[1]/div[1]")
            like_left = driver.find_elements_by_xpath(
                "/html/body/div[1]/div/div[2]/div/div[1]/div[1]/div[2]/div/div[2]/table/tbody/tr/td[1]/a[1]")
            while like_right:
                for right in like_right:
                    right.click()
                break
            while like_left:
                for left in like_left:
                    left.click()
                break
            while like_post:
                for like in like_post:
                    like.click()
                break
            time.sleep(5)
            driver.close()
            driver.implicitly_wait(5)
            driver.switch_to.window(driver.window_handles[0])
            print("clicks executed successfully")
            continue
        except StaleElementReferenceException as e:
            driver.close()
            driver.switch_to.window(driver.window_handles[0])
            popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a")
            if popunder is True:
                popunder.click()
                driver.implicitly_wait(5)
            else:
                continue
            print("exception occurred - element is not attached to the page document")
        except ElementNotVisibleException as e:
            driver.close()
            driver.switch_to.window(driver.window_handles[0])
            popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a")
            if popunder is True:
                popunder.click()
                driver.implicitly_wait(5)
            else:
                continue
            print("Exception occurred - ElementNotVisibleException")
        except WebDriverException as e:
            driver.close()
            driver.switch_to.window(driver.window_handles[0])
            popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a")
            if popunder is True:
                popunder.click()
                driver.implicitly_wait(5)
            else:
                continue
            print("Exception occurred - WebDriverException")
        except NoSuchWindowException as e:
            driver.switch_to.window(driver.window_handles[0])
            popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a")
            if popunder is True:
                popunder.click()
                driver.implicitly_wait(5)
            else:
                continue
            print("Exception - NoSuchWindowException - Switched to main window")
        else:
            time.sleep(5)
            refresh.click()
            print("refreshed")
```
I am trying to handle the NoSuchWindowException in the Python code itself, because every time the popup window is closed by the main website this exception occurs and the Python script stops executing the next iteration of the for loop:
```
File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchWindowException: Message: no such window: target window already closed
from unknown error: web view not found
(Session info: chrome=73.0.3683.86)
(Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Windows NT 6.1.7601 SP1 x86_64)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/javed/PycharmProjects/clicks/test/fb-click-perfect-working.py", line 98, in <module>
driver.close()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 688, in close
self.execute(Command.CLOSE)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchWindowException: Message: no such window: target window already closed
from unknown error: web view not found
(Session info: chrome=73.0.3683.86)
(Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Windows NT 6.1.7601 SP1 x86_64)
Process finished with exit code 1
```
|
2019/03/28
|
[
"https://Stackoverflow.com/questions/55392952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/601787/"
] |
Finally instead of:
```py
conn2 = conn.connect_as_project(project_id)
```
I used:
```py
conn2 = openstack.connection.Connection(
    region_name='RegionOne',
    auth=dict(
        auth_url='http://controller:5000/v3',
        username=u_name,
        password=password,
        project_id=project_id,
        user_domain_id='default'),
    compute_api_version='2',
    identity_interface='internal')
```
and it worked.
|
I did this just fine... The only difference is that the project is a new project, so I had to grant roles to the user I was using.
It was something like this:
```py
# note: sconn, conn and self.conn appear to refer to the same admin connection object
project = sconn.create_project(
    name=name, domain_id='default')

user_id = conn.current_user_id
user = conn.get_user(user_id)

roles = conn.list_roles()
for r in roles:
    conn.identity.assign_project_role_to_user(
        project.id, user.id, r.id
    )

# Make sure the roles are correctly assigned to the user before proceeding
conn2 = self.conn.connect_as_project(project.name)
```
After that, anything created (servers, keypairs, networks, etc) is under the new project.
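A quick sanity check is possible too. This is only a sketch, assuming `conn2` is the project-scoped connection from the snippet above and that the user can list servers; the `project_id` on each resource should equal `project.id`:
```py
# list servers through the project-scoped connection and confirm their project
for server in conn2.compute.servers():
    print(server.name, server.project_id)
```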
|
40,307,635
|
In the R xgboost package, I can specify `predictions=TRUE` to save the out-of-fold predictions during cross-validation, e.g.:
```
library(xgboost)
data(mtcars)
xgb_params = list(
  max_depth = 1,
  eta = 0.01
)
x = model.matrix(mpg~0+., mtcars)
train = xgb.DMatrix(x, label=mtcars$mpg)
res = xgb.cv(xgb_params, train, 100, prediction=TRUE, nfold=5)
print(head(res$pred))
```
How would I do the equivalent in the python package? I can't find a `prediction` argument for `xgboost.cv` in python.
|
2016/10/28
|
[
"https://Stackoverflow.com/questions/40307635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345660/"
] |
I'm not sure if this is what you want, but you can accomplish this by using the sklearn wrapper for xgboost. (I know I'm using the iris dataset as a regression problem -- which it isn't -- but this is for illustration.)
```
import xgboost as xgb
from sklearn.cross_validation import cross_val_predict as cvp
from sklearn import datasets
X = datasets.load_iris().data[:, :2]
y = datasets.load_iris().target
xgb_model = xgb.XGBRegressor()
y_pred = cvp(xgb_model, X, y, cv=3, n_jobs = 1)
y_pred
array([ 9.07209516e-01, 1.84738374e+00, 1.78878939e+00,
1.83672094e+00, 9.07209516e-01, 9.07209516e-01,
1.77482617e+00, 9.07209516e-01, 1.75681138e+00,
1.83672094e+00, 9.07209516e-01, 1.77482617e+00,
1.84738374e+00, 1.84738374e+00, 1.12216723e+00,
9.96944368e-01, 9.07209516e-01, 9.07209516e-01,
9.96944368e-01, 9.07209516e-01, 9.07209516e-01,
9.07209516e-01, 1.77482617e+00, 8.35850239e-01,
1.77482617e+00, 9.87186074e-01, 9.07209516e-01,
9.07209516e-01, 9.07209516e-01, 1.78878939e+00,
1.83672094e+00, 9.07209516e-01, 9.07209516e-01,
8.91427517e-01, 1.83672094e+00, 9.09049034e-01,
8.91427517e-01, 1.83672094e+00, 1.84738374e+00,
9.07209516e-01, 9.07209516e-01, 1.01038718e+00,
1.78878939e+00, 9.07209516e-01, 9.07209516e-01,
1.84738374e+00, 9.07209516e-01, 1.78878939e+00,
9.07209516e-01, 8.35850239e-01, 1.99947178e+00,
1.99947178e+00, 1.99947178e+00, 1.94922602e+00,
1.99975276e+00, 1.91500926e+00, 1.99947178e+00,
1.97454870e+00, 1.99947178e+00, 1.56287444e+00,
1.96453893e+00, 1.99947178e+00, 1.99715066e+00,
1.99947178e+00, 2.84575284e-01, 1.99947178e+00,
2.84575284e-01, 2.00303388e+00, 1.99715066e+00,
2.04597521e+00, 1.99947178e+00, 1.99975276e+00,
2.00527954e+00, 1.99975276e+00, 1.99947178e+00,
1.99947178e+00, 1.99975276e+00, 1.99947178e+00,
1.99947178e+00, 1.91500926e+00, 1.95735490e+00,
1.95735490e+00, 2.00303388e+00, 1.99975276e+00,
5.92201948e-04, 1.99947178e+00, 1.99947178e+00,
1.99715066e+00, 2.84575284e-01, 1.95735490e+00,
1.89267385e+00, 1.99947178e+00, 2.00303388e+00,
1.96453893e+00, 1.98232651e+00, 2.39597082e-01,
2.39597082e-01, 1.99947178e+00, 1.97454870e+00,
1.91500926e+00, 9.99531507e-01, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.22234297e-01, 1.00023842e+00,
1.00100708e+00, 1.16144836e-01, 1.00077248e+00,
1.00023842e+00, 1.00023842e+00, 1.00100708e+00,
1.00023842e+00, 1.00077248e+00, 1.00023842e+00,
1.13711983e-01, 1.00023842e+00, 1.00135887e+00,
1.00077248e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.99531507e-01, 1.00077248e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.13711983e-01,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.78098869e-01, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00077248e+00,
9.99531507e-01, 1.00023842e+00, 1.00100708e+00,
1.00023842e+00, 9.78098869e-01, 1.00023842e+00], dtype=float32)
```
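A side note: in modern scikit-learn (0.18 and later) the `cross_validation` module was removed in favour of `model_selection`, so the import above becomes:
```
# newer scikit-learn import path; the rest of the snippet is unchanged
from sklearn.model_selection import cross_val_predict as cvp
```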
|
This is possible with `xgboost.cv()` but it is a bit hacky. It uses the callbacks and ... a global variable which I'm told is not desirable.
```
def oof_prediction():
    """
    Dirty global variable callback hack.
    """
    global cv_prediction_dict

    def callback(env):
        """internal function"""
        cv_prediction_list = []
        for i in [0, 1, 2, 3, 4]:
            cv_prediction_list.append([env.cvfolds[i].bst.predict(env.cvfolds[i].dtest)])
        cv_prediction_dict['cv'] = cv_prediction_list
    return callback
```
Now we can call the callback from `xgboost.cv()` as follows.
```
cv_prediction_dict = {}
xgb.cv(xgb_params, train, 100, callbacks=[oof_prediction()], nfold=5)
pos_oof_predictions = cv_prediction_dict.copy()
```
It will return the out-of-fold predictions for the last iteration/num\_boost\_round, even if `early_stopping` is used. I believe this is something the R `predictions=TRUE` functionality [does/did](https://github.com/dmlc/xgboost/issues/1188) not do correctly.
---
*Hack disclaimer: I know this is rather hacky but it is a work around my poor understanding of how the callback is working. If anyone knows how to make this better then please comment.*
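If you pass your own `folds` to `xgb.cv`, those per-fold predictions can be stitched back into row order. A minimal sketch, assuming `folds` is a list of `(train_idx, test_idx)` pairs and `n_rows` is the number of training rows (both are placeholder names, not part of the snippet above):
```
import numpy as np

oof = np.zeros(n_rows)
for k, (_, test_idx) in enumerate(folds):
    # each entry of cv_prediction_dict['cv'] is a one-element list with that fold's predictions
    oof[test_idx] = cv_prediction_dict['cv'][k][0]
```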
|
40,307,635
|
In the R xgboost package, I can specify `predictions=TRUE` to save the out-of-fold predictions during cross-validation, e.g.:
```
library(xgboost)
data(mtcars)
xgb_params = list(
  max_depth = 1,
  eta = 0.01
)
x = model.matrix(mpg~0+., mtcars)
train = xgb.DMatrix(x, label=mtcars$mpg)
res = xgb.cv(xgb_params, train, 100, prediction=TRUE, nfold=5)
print(head(res$pred))
```
How would I do the equivalent in the python package? I can't find a `prediction` argument for `xgboost.cv` in python.
|
2016/10/28
|
[
"https://Stackoverflow.com/questions/40307635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345660/"
] |
I'm not sure if this is what you want, but you can accomplish this by using the sklearn wrapper for xgboost. (I know I'm using the iris dataset as a regression problem -- which it isn't -- but this is for illustration.)
```
import xgboost as xgb
from sklearn.cross_validation import cross_val_predict as cvp
from sklearn import datasets
X = datasets.load_iris().data[:, :2]
y = datasets.load_iris().target
xgb_model = xgb.XGBRegressor()
y_pred = cvp(xgb_model, X, y, cv=3, n_jobs = 1)
y_pred
array([ 9.07209516e-01, 1.84738374e+00, 1.78878939e+00,
1.83672094e+00, 9.07209516e-01, 9.07209516e-01,
1.77482617e+00, 9.07209516e-01, 1.75681138e+00,
1.83672094e+00, 9.07209516e-01, 1.77482617e+00,
1.84738374e+00, 1.84738374e+00, 1.12216723e+00,
9.96944368e-01, 9.07209516e-01, 9.07209516e-01,
9.96944368e-01, 9.07209516e-01, 9.07209516e-01,
9.07209516e-01, 1.77482617e+00, 8.35850239e-01,
1.77482617e+00, 9.87186074e-01, 9.07209516e-01,
9.07209516e-01, 9.07209516e-01, 1.78878939e+00,
1.83672094e+00, 9.07209516e-01, 9.07209516e-01,
8.91427517e-01, 1.83672094e+00, 9.09049034e-01,
8.91427517e-01, 1.83672094e+00, 1.84738374e+00,
9.07209516e-01, 9.07209516e-01, 1.01038718e+00,
1.78878939e+00, 9.07209516e-01, 9.07209516e-01,
1.84738374e+00, 9.07209516e-01, 1.78878939e+00,
9.07209516e-01, 8.35850239e-01, 1.99947178e+00,
1.99947178e+00, 1.99947178e+00, 1.94922602e+00,
1.99975276e+00, 1.91500926e+00, 1.99947178e+00,
1.97454870e+00, 1.99947178e+00, 1.56287444e+00,
1.96453893e+00, 1.99947178e+00, 1.99715066e+00,
1.99947178e+00, 2.84575284e-01, 1.99947178e+00,
2.84575284e-01, 2.00303388e+00, 1.99715066e+00,
2.04597521e+00, 1.99947178e+00, 1.99975276e+00,
2.00527954e+00, 1.99975276e+00, 1.99947178e+00,
1.99947178e+00, 1.99975276e+00, 1.99947178e+00,
1.99947178e+00, 1.91500926e+00, 1.95735490e+00,
1.95735490e+00, 2.00303388e+00, 1.99975276e+00,
5.92201948e-04, 1.99947178e+00, 1.99947178e+00,
1.99715066e+00, 2.84575284e-01, 1.95735490e+00,
1.89267385e+00, 1.99947178e+00, 2.00303388e+00,
1.96453893e+00, 1.98232651e+00, 2.39597082e-01,
2.39597082e-01, 1.99947178e+00, 1.97454870e+00,
1.91500926e+00, 9.99531507e-01, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.22234297e-01, 1.00023842e+00,
1.00100708e+00, 1.16144836e-01, 1.00077248e+00,
1.00023842e+00, 1.00023842e+00, 1.00100708e+00,
1.00023842e+00, 1.00077248e+00, 1.00023842e+00,
1.13711983e-01, 1.00023842e+00, 1.00135887e+00,
1.00077248e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.99531507e-01, 1.00077248e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.13711983e-01,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 9.78098869e-01, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00023842e+00,
1.00023842e+00, 1.00023842e+00, 1.00077248e+00,
9.99531507e-01, 1.00023842e+00, 1.00100708e+00,
1.00023842e+00, 9.78098869e-01, 1.00023842e+00], dtype=float32)
```
|
Here is an example of using a custom `callback` function. This function can also save the best models.
```
import os
import numpy as np

def cv_misc_callback(model_dir:str=None, oof_preds:list=None, maximize=True):
    """
    To reduce memory and disk storage, only the best models and the best oof preds are stored.
    For classification, the preds are scores before applying sigmoid.
    """
    state = {}

    def init(env):
        if maximize:
            state['best_score'] = -np.inf
        else:
            state['best_score'] = np.inf

        if (model_dir is not None) and (not os.path.isdir(model_dir)):
            os.mkdir(model_dir)
        if oof_preds is not None:
            for i, _ in enumerate(env.cvfolds):
                oof_preds.append(None)

    def callback(env):
        if not state:
            init(env)
        best_score = state['best_score']
        score = env.evaluation_result_list[-1][1]
        if (maximize and score > best_score) or (not maximize and score < best_score):
            for i, cvpack in enumerate(env.cvfolds):
                if model_dir is not None:
                    cvpack.bst.save_model(f'{model_dir}/{i}.model')
                if oof_preds is not None:
                    oof_preds[i] = cvpack.bst.predict(cvpack.dtest)
            state['best_score'] = score

    callback.before_iteration = False
    return callback
```
CV code:
```
eval_res = []
oof_preds = []
history = xgb.cv(params, dtrain, num_boost_round=1000,
folds=folds, early_stopping_rounds=40, seed=RANDOM_SEED,
callbacks=[cv_misc_callback('./models', oof_preds), xgb.callback.print_evaluation(period=10)])
```
Mapping the preds list to the oof\_preds of train\_data:
```
oof_preds_proba = np.zeros(av_data.shape[0])
for i, (trn_idx, val_idx) in enumerate(folds):
    oof_preds_proba[val_idx] = sigmoid(oof_preds[i])
```
```
from numba import jit  # assumption: @jit here comes from numba

@jit
def sigmoid(x):
    return 1/(1 + np.exp(-x))
```
|
40,307,635
|
In the R xgboost package, I can specify `predictions=TRUE` to save the out-of-fold predictions during cross-validation, e.g.:
```
library(xgboost)
data(mtcars)
xgb_params = list(
  max_depth = 1,
  eta = 0.01
)
x = model.matrix(mpg~0+., mtcars)
train = xgb.DMatrix(x, label=mtcars$mpg)
res = xgb.cv(xgb_params, train, 100, prediction=TRUE, nfold=5)
print(head(res$pred))
```
How would I do the equivalent in the python package? I can't find a `prediction` argument for `xgboost.cv` in python.
|
2016/10/28
|
[
"https://Stackoverflow.com/questions/40307635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345660/"
] |
This is possible with `xgboost.cv()` but it is a bit hacky. It uses the callbacks and ... a global variable which I'm told is not desirable.
```
def oof_prediction():
    """
    Dirty global variable callback hack.
    """
    global cv_prediction_dict

    def callback(env):
        """internal function"""
        cv_prediction_list = []
        for i in [0, 1, 2, 3, 4]:
            cv_prediction_list.append([env.cvfolds[i].bst.predict(env.cvfolds[i].dtest)])
        cv_prediction_dict['cv'] = cv_prediction_list
    return callback
```
Now we can call the callback from `xgboost.cv()` as follows.
```
cv_prediction_dict = {}
xgb.cv(xgb_params, train, 100, callbacks=[oof_prediction()], nfold=5)
pos_oof_predictions = cv_prediction_dict.copy()
```
It will return the out-of-fold predictions for the last iteration/num\_boost\_round, even if `early_stopping` is used. I believe this is something the R `predictions=TRUE` functionality [does/did](https://github.com/dmlc/xgboost/issues/1188) not do correctly.
---
*Hack disclaimer: I know this is rather hacky but it is a work around my poor understanding of how the callback is working. If anyone knows how to make this better then please comment.*
|
Here is an example of using a custom `callback` function. This function can also save the best models.
```
import os
import numpy as np

def cv_misc_callback(model_dir:str=None, oof_preds:list=None, maximize=True):
    """
    To reduce memory and disk storage, only the best models and the best oof preds are stored.
    For classification, the preds are scores before applying sigmoid.
    """
    state = {}

    def init(env):
        if maximize:
            state['best_score'] = -np.inf
        else:
            state['best_score'] = np.inf

        if (model_dir is not None) and (not os.path.isdir(model_dir)):
            os.mkdir(model_dir)
        if oof_preds is not None:
            for i, _ in enumerate(env.cvfolds):
                oof_preds.append(None)

    def callback(env):
        if not state:
            init(env)
        best_score = state['best_score']
        score = env.evaluation_result_list[-1][1]
        if (maximize and score > best_score) or (not maximize and score < best_score):
            for i, cvpack in enumerate(env.cvfolds):
                if model_dir is not None:
                    cvpack.bst.save_model(f'{model_dir}/{i}.model')
                if oof_preds is not None:
                    oof_preds[i] = cvpack.bst.predict(cvpack.dtest)
            state['best_score'] = score

    callback.before_iteration = False
    return callback
```
CV code:
```
eval_res = []
oof_preds = []
history = xgb.cv(params, dtrain, num_boost_round=1000,
folds=folds, early_stopping_rounds=40, seed=RANDOM_SEED,
callbacks=[cv_misc_callback('./models', oof_preds), xgb.callback.print_evaluation(period=10)])
```
Mapping the preds list to the oof\_preds of train\_data:
```
oof_preds_proba = np.zeros(av_data.shape[0])
for i, (trn_idx, val_idx) in enumerate(folds):
    oof_preds_proba[val_idx] = sigmoid(oof_preds[i])
```
```
from numba import jit  # assumption: @jit here comes from numba

@jit
def sigmoid(x):
    return 1/(1 + np.exp(-x))
```
|
33,879,523
|
is there a way in python to generate a continuous series of beeps in increasing amplitude and export it into a WAV file?
|
2015/11/23
|
[
"https://Stackoverflow.com/questions/33879523",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5192982/"
] |
I've based this on the answer to the previous question and added a lot of comments. Hopefully this makes it clear. You'll probably want to introduce a for loop to control the number of beeps and the increasing volume (see the sketch after the code).
```
#!/usr/bin/python
# based on : www.daniweb.com/code/snippet263775.html
import math
import wave
import struct

# Audio will contain a long list of samples (i.e. floating point numbers describing the
# waveform). If you were working with a very long sound you'd want to stream this to
# disk instead of buffering it all in memory like this. But most sounds will fit in
# memory.
audio = []
sample_rate = 44100.0


def append_silence(duration_milliseconds=500):
    """
    Adding silence is easy - we add zeros to the end of our array
    """
    num_samples = duration_milliseconds * (sample_rate / 1000.0)

    for x in range(int(num_samples)):
        audio.append(0.0)

    return


def append_sinewave(
        freq=440.0,
        duration_milliseconds=500,
        volume=1.0):
    """
    The sine wave generated here is the standard beep. If you want something
    more aggressive you could try a square or saw tooth waveform. Though there
    are some rather complicated issues with making high quality square and
    sawtooth waves... which we won't address here :)
    """
    global audio  # using global variables isn't cool.

    num_samples = duration_milliseconds * (sample_rate / 1000.0)

    for x in range(int(num_samples)):
        audio.append(volume * math.sin(2 * math.pi * freq * (x / sample_rate)))

    return


def save_wav(file_name):
    # Open up a wav file
    wav_file = wave.open(file_name, "w")

    # wav params
    nchannels = 1
    sampwidth = 2

    # 44100 is the industry standard sample rate - CD quality. If you need to
    # save on file size you can adjust it downwards. The standard for low quality
    # is 8000 or 8kHz.
    nframes = len(audio)
    comptype = "NONE"
    compname = "not compressed"
    wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname))

    # WAV files here are using short, 16 bit, signed integers for the
    # sample size. So we multiply the floating point data we have by 32767, the
    # maximum value for a short integer. NOTE: It is theoretically possible to
    # use the floating point -1.0 to 1.0 data directly in a WAV file, but it is not
    # obvious how to do that using the wave module in python.
    for sample in audio:
        wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

    wav_file.close()

    return


append_sinewave(volume=0.25)
append_silence()
append_sinewave(volume=0.5)
append_silence()
append_sinewave()
save_wav("output.wav")
```
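For example, here is a minimal sketch of that loop, reusing the helpers defined above (the beep count and the 200 ms durations are arbitrary choices for illustration):
```
# five beeps, each louder than the last
audio = []  # reset the global sample buffer so the example beeps above aren't included
num_beeps = 5
for i in range(num_beeps):
    append_sinewave(volume=(i + 1) / float(num_beeps), duration_milliseconds=200)
    append_silence(duration_milliseconds=200)
save_wav("rising_beeps.wav")
```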
|
I added minor improvements to the [JCx](https://stackoverflow.com/users/3818191/jcx) code above. As the author said, it's not cool to use global variables, so I wrapped his solution into a class, and it works just fine:
```
import math
import wave
import struct


class BeepGenerator:
    def __init__(self):
        # Audio will contain a long list of samples (i.e. floating point numbers describing the
        # waveform). If you were working with a very long sound you'd want to stream this to
        # disk instead of buffering it all in memory like this. But most sounds will fit in
        # memory.
        self.audio = []
        self.sample_rate = 44100.0

    def append_silence(self, duration_milliseconds=500):
        """
        Adding silence is easy - we add zeros to the end of our array
        """
        num_samples = duration_milliseconds * (self.sample_rate / 1000.0)

        for x in range(int(num_samples)):
            self.audio.append(0.0)

        return

    def append_sinewave(
            self,
            freq=440.0,
            duration_milliseconds=500,
            volume=1.0):
        """
        The sine wave generated here is the standard beep. If you want something
        more aggressive you could try a square or saw tooth waveform. Though there
        are some rather complicated issues with making high quality square and
        sawtooth waves... which we won't address here :)
        """
        num_samples = duration_milliseconds * (self.sample_rate / 1000.0)

        for x in range(int(num_samples)):
            self.audio.append(volume * math.sin(2 * math.pi * freq * (x / self.sample_rate)))

        return

    def save_wav(self, file_name):
        # Open up a wav file
        wav_file = wave.open(file_name, "w")

        # wav params
        nchannels = 1
        sampwidth = 2

        # 44100 is the industry standard sample rate - CD quality. If you need to
        # save on file size you can adjust it downwards. The standard for low quality
        # is 8000 or 8kHz.
        nframes = len(self.audio)
        comptype = "NONE"
        compname = "not compressed"
        wav_file.setparams((nchannels, sampwidth, self.sample_rate, nframes, comptype, compname))

        # WAV files here are using short, 16 bit, signed integers for the
        # sample size. So we multiply the floating point data we have by 32767, the
        # maximum value for a short integer. NOTE: It is theoretically possible to
        # use the floating point -1.0 to 1.0 data directly in a WAV file, but it is not
        # obvious how to do that using the wave module in python.
        for sample in self.audio:
            wav_file.writeframes(struct.pack('h', int(sample * 32767.0)))

        wav_file.close()

        return


if __name__ == "__main__":
    bg = BeepGenerator()
    bg.append_sinewave(volume=0.25, duration_milliseconds=100)
    bg.append_silence()
    bg.append_sinewave(volume=0.5, duration_milliseconds=700)
    bg.append_silence()
    bg.save_wav("output.wav")
```
|
3,263,665
|
I am trying to create a line-by-line filter in python. However, `stdin.readlines()` reads all lines in before starting to process, and python runs out of memory (`MemoryError`).
How can I have *just one line* in memory at a time?
The kind of code I have:
```
for line in sys.stdin.readlines():
    if( filter.apply( line ) ):
        print( line )
```
(note: I'm on 2.6)
|
2010/07/16
|
[
"https://Stackoverflow.com/questions/3263665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6610/"
] |
```
for line in sys.stdin:
    ...
```
Or call `.readline()` in a loop.
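For instance, a sketch of the question's filter rewritten against this iterator (assuming the same `filter` object as in the question):
```
import sys

for line in sys.stdin:
    if filter.apply(line):
        print(line)
```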
|
```
import sys

while 1:
    line = sys.stdin.readline()
    if not line:
        break
    if filter.apply(line):
        print(line)
```
|
62,232,923
|
I'm pretty new to python but I need some help parsing a string with a unique structure. I have a CSV file with a column with the following structure:
```
[Chakroff, Alek; Young, Liane] Boston Coll, Chestnut Hill, MA 02167 USA; [Russell, Pascale Sophie] Univ Surrey, Guildford, Surrey, England; [Piazza, Jared] Univ Lancaster, Lancaster, England
```
I want to just pull the country name present right before the semicolons. So for the above, I want "USA, England, England". The overall structure of the string is:
```
[last name, first name], university, address, zip code, country;
```
How do I get just the countries with this string layout? Is there a way to specify that I want the country name which is right before the semicolon? Or maybe an even easier way to pull the information I need?
Please go easy on me, I'm not the best programmer by any means :)
|
2020/06/06
|
[
"https://Stackoverflow.com/questions/62232923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13694393/"
] |
You can take advantage of the unique substring before the elements you want:
```
# split string on substring '; ['
for i in s.split('; ['):
    # split each resulting string on whitespace and print the last element
    print(i.split()[-1])
USA
England
England
```
|
You can use the string `split()` method in a list comprehension:
```
# split on '; [' to separate records, then take the last whitespace-separated token;
# splitting on ',' instead would keep the zip code for the US record
countries = [person_record.split()[-1] for person_record in records.split("; [")]
```
Where `records` is the string you get from your input.
|
62,232,923
|
I'm pretty new to python but I need some help parsing a string with a unique structure. I have a CSV file with a column with the following structure:
```
[Chakroff, Alek; Young, Liane] Boston Coll, Chestnut Hill, MA 02167 USA; [Russell, Pascale Sophie] Univ Surrey, Guildford, Surrey, England; [Piazza, Jared] Univ Lancaster, Lancaster, England
```
I want to just pull the country name present right before the semicolons. So for the above, I want "USA, England, England". The overall structure of the string is:
```
[last name, first name], university, address, zip code, country;
```
How do I get just the countries with this string layout? Is there a way to specify that I want the country name which is right before the semicolon? Or maybe an even easier way to pull the information I need?
Please go easy on me, I'm not the best programmer by any means :)
|
2020/06/06
|
[
"https://Stackoverflow.com/questions/62232923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13694393/"
] |
You can take advantage of the unique substring before the elements you want:
```
# split string on substring '; ['
for i in s.split('; ['):
    # split each resulting string on whitespace and print the last element
    print(i.split()[-1])
USA
England
England
```
|
Using a regular expression (this uses the third-party [`regex`](https://pypi.org/project/regex/) module, since the stdlib `re` does not support `(*SKIP)(*FAIL)`):
```
import regex as re
data = "[Chakroff, Alek; Young, Liane] Boston Coll, Chestnut Hill, MA 02167 USA; [Russell, Pascale Sophie] Univ Surrey, Guildford, Surrey, England; [Piazza, Jared] Univ Lancaster, Lancaster, England"
outer_pattern = re.compile(r'\[[^][]+\](*SKIP)(*FAIL)|;')
inner_pattern = re.compile(r'(\w+)\s*$')
countries = [country.group(1)
             for chunk in outer_pattern.split(data)
             for country in [inner_pattern.search(chunk)]
             if country]
print(countries)
# ['USA', 'England', 'England']
```
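If you'd rather stay with the standard-library `re` module, a rough sketch that exploits the `'; ['` record separator (this assumes the country is always the last word before `'; ['` or the end of the string):
```
import re

data = "[Chakroff, Alek; Young, Liane] Boston Coll, Chestnut Hill, MA 02167 USA; [Russell, Pascale Sophie] Univ Surrey, Guildford, Surrey, England; [Piazza, Jared] Univ Lancaster, Lancaster, England"

# a word counts as a country only if it is followed by '; [' (next record) or the end of the string
print(re.findall(r'(\w+)\s*(?:;\s*\[|$)', data))
# ['USA', 'England', 'England']
```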
|
51,688,822
|
Can anybody help me please? I am new to Machine Learning Studio.
I am using the free Azure Machine Learning Studio workspace.
Trying to run all cells ("Run All"), I got the following error.
```
ValueError Traceback (most recent call last)
<ipython-input-1-17afe06b8f16> in <module>()
1 from azureml import Workspace
2
----> 3 ws = Workspace()
4 ds = ws.datasets['Lemonadecsv.csv']
home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in __init__(self, workspace_id, authorization_token, endpoint)
883 endpoint = https://studio.azureml.net
884 """
--> 885 workspace_id, authorization_token, endpoint, management_endpoint = _get_workspace_info(workspace_id, authorization_token, endpoint, None)
886
887 _not_none_or_empty('workspace_id', workspace_id)
/home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in _get_workspace_info(workspace_id, authorization_token, endpoint, management_endpoint)
849
850 if workspace_id is None:
--> 851 raise ValueError('workspace_id not provided and not available via config')
852 if authorization_token is None:
853 raise ValueError('authorization_token not provided and not available via config')
ValueError: workspace_id not provided and not available via config
5 frame = ds.to_dataframe()
```
|
2018/08/04
|
[
"https://Stackoverflow.com/questions/51688822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5892761/"
] |
I have same problem as you. I have contacted tech support so once I get an answer, I will update this post. Meanwhile, you can use this **WORKAROUND**:
Get missing parameters and input them as Strings.
```
ws = Workspace("[WORKSPACE_ID]", "[AUTH_TOKEN]")
```
Where to get them:
[WORKSPACE\_ID]: Azure ML Studio -> Settings -> Name Tab -> WorkspaceId
[AUTH\_TOKEN]: Azure ML Studio -> Settings -> Authorization Token Tab -> Primary AUTH Token.
|
The easiest way is to right-click on the data set you have and choose Generate Data Access Code. The system will generate the code for you; all you have to do is copy it into your notebook and the data frame will all be there.
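For reference, the generated snippet typically looks something like this (the IDs below are placeholders, not real values):
```
from azureml import Workspace

ws = Workspace(
    workspace_id='YOUR_WORKSPACE_ID',
    authorization_token='YOUR_AUTH_TOKEN',
    endpoint='https://studioapi.azureml.net'
)
ds = ws.datasets['Lemonadecsv.csv']
frame = ds.to_dataframe()
```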
I hope this helps!
|
51,688,822
|
Can anybody help me please? I am new to Machine Learning Studio.
I am using the free Azure Machine Learning Studio workspace.
Trying to run all cells ("Run All"), I got the following error.
```
ValueError Traceback (most recent call last)
<ipython-input-1-17afe06b8f16> in <module>()
1 from azureml import Workspace
2
----> 3 ws = Workspace()
4 ds = ws.datasets['Lemonadecsv.csv']
home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in __init__(self, workspace_id, authorization_token, endpoint)
883 endpoint = https://studio.azureml.net
884 """
--> 885 workspace_id, authorization_token, endpoint, management_endpoint = _get_workspace_info(workspace_id, authorization_token, endpoint, None)
886
887 _not_none_or_empty('workspace_id', workspace_id)
/home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in _get_workspace_info(workspace_id, authorization_token, endpoint, management_endpoint)
849
850 if workspace_id is None:
--> 851 raise ValueError('workspace_id not provided and not available via config')
852 if authorization_token is None:
853 raise ValueError('authorization_token not provided and not available via config')
ValueError: workspace_id not provided and not available via config
5 frame = ds.to_dataframe()
```
|
2018/08/04
|
[
"https://Stackoverflow.com/questions/51688822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5892761/"
] |
You can also go to Dataset-> Check/highlight the dataset you are working on -> Generate Data Access Code (down below).
Copy and paste the generated code into the first cell in your python notebook. It should look similar to this.
```
from azureml import Workspace
ws = Workspace(
    workspace_id='WORKSPACEID',
    authorization_token='AUTH_TOKEN',
    endpoint='https://studioapi.azureml.net'
)
ds = ws.datasets['Lemonade.csv']
frame = ds.to_dataframe()
```
|
The easiest way is to right-click on the data set you have and choose Generate Data Access Code. The system will generate the code for you; all you have to do is copy it into your notebook and the data frame will all be there.
I hope this helps!
|
46,230,413
|
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java9 (JDK9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated with this. I have been using this program very often, for a very long time. Maybe I should never have moved to JDK9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
To avoid this warning, you need to update `maven-war-plugin` to a newer version. For example:
```xml
<plugins>
. . .
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.2.2</version>
</plugin>
</plugins>
```
---
Works for `jdk-12`.
|
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are 2 examples:
Change the version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
46,230,413
|
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java9 (JDK9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated with this. I have been using this program very often, for a very long time. Maybe I should never have moved to JDK9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
The *ideal* way to resolve this would be to
>
> **reporting this to the maintainers of org.python.core.PySystemState**
>
>
>
and to ask them to fix such reflective access going forward.
---
>
> If the default mode permits illegal reflective access, however, then
> it's essential to make that known so that people aren't surprised when
> this is no longer the default mode in a future release.
>
>
>
From one of the [threads on the mailing list](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012673.html) :
```
--illegal-access=permit
```
>
> This will be the **default** mode for JDK 9. It opens every package in
> every explicit module to code in all unnamed modules, i.e., code on
> the class path, just as `--permit-illegal-access` does today.
>
>
>
>
> The first illegal reflective-access operation causes a warning to be
> issued, as with `--permit-illegal-access`, but no warnings are issued
> after that point. This single warning will describe how to enable
> further warnings.
>
>
>
```
--illegal-access=deny
```
>
> This disables all illegal reflective-access operations except for
> those enabled by other command-line options, such as `--add-opens`.
> This **will become the default** mode in a future release.
>
>
>
Warning messages in any mode can be avoided, as before, by the judicious use of the `--add-exports` and `--add-opens` options.
---
Hence a current temporary solution available is to use `--add-exports` as the VM arguments as mentioned in the [docs](https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624) :
```
--add-exports module/package=target-module(,target-module)*
```
>
> Updates module to **`export`** package to `target-module`, regardless of
> module declaration. The `target-module` can be all *unnamed* to export to
> all unnamed modules.
>
>
>
This would allow the `target-module` to access all public types in `package`. In case you want to access the JDK internal classes which would still be encapsulated, you would have to allow a *deep reflection* using the `--add-opens` argument as:
```
--add-opens module/package=target-module(,target-module)*
```
>
> Updates module to **`open`** package to `target-module`, regardless of module
> declaration.
>
>
>
In your case, to currently access `java.io.Console`, you can simply add this as a VM option -
```
--add-opens java.base/java.io=ALL-UNNAMED
```
---
Also, note from the same thread as linked above:
*When `deny` becomes the default mode then I expect `permit` to remain supported for at least one release, so that developers can continue to migrate their code. The `permit`, `warn`, and `debug` modes will, over time, be removed, as will the `--illegal-access` option itself.*
So it's better to change the implementation and follow the ideal solution described above.
|
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are 2 examples:
Change the version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
46,230,413
|
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java9 (JDK9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated with this. I have been using this program very often, for a very long time. Maybe I should never have moved to JDK9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
The *ideal* way to resolve this would be to
>
> **reporting this to the maintainers of org.python.core.PySystemState**
>
>
>
and to ask them to fix such reflective access going forward.
---
>
> If the default mode permits illegal reflective access, however, then
> it's essential to make that known so that people aren't surprised when
> this is no longer the default mode in a future release.
>
>
>
From one of the [threads on the mailing list](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012673.html) :
```
--illegal-access=permit
```
>
> This will be the **default** mode for JDK 9. It opens every package in
> every explicit module to code in all unnamed modules, i.e., code on
> the class path, just as `--permit-illegal-access` does today.
>
>
>
>
> The first illegal reflective-access operation causes a warning to be
> issued, as with `--permit-illegal-access`, but no warnings are issued
> after that point. This single warning will describe how to enable
> further warnings.
>
>
>
```
--illegal-access=deny
```
>
> This disables all illegal reflective-access operations except for
> those enabled by other command-line options, such as `--add-opens`.
> This **will become the default** mode in a future release.
>
>
>
Warning messages in any mode can be avoided, as before, by the judicious use of the `--add-exports` and `--add-opens` options.
---
Hence a current temporary solution available is to use `--add-exports` as the VM arguments as mentioned in the [docs](https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624) :
```
--add-exports module/package=target-module(,target-module)*
```
>
> Updates module to **`export`** package to `target-module`, regardless of
> module declaration. The `target-module` can be all *unnamed* to export to
> all unnamed modules.
>
>
>
This would allow the `target-module` to access all public types in `package`. In case you want to access the JDK internal classes which would still be encapsulated, you would have to allow a *deep reflection* using the `--add-opens` argument as:
```
--add-opens module/package=target-module(,target-module)*
```
>
> Updates module to **`open`** package to `target-module`, regardless of module
> declaration.
>
>
>
In your case, to currently access `java.io.Console`, you can simply add this as a VM option -
```
--add-opens java.base/java.io=ALL-UNNAMED
```
---
Also, note from the same thread as linked above:
*When `deny` becomes the default mode then I expect `permit` to remain supported for at least one release, so that developers can continue to migrate their code. The `permit`, `warn`, and `debug` modes will, over time, be removed, as will the `--illegal-access` option itself.*
So it's better to change the implementation and follow the ideal solution described above.
|
To avoid this warning, you need to update `maven-war-plugin` to a newer version. For example:
```xml
<plugins>
. . .
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.2.2</version>
</plugin>
</plugins>
```
---
Works for `jdk-12`.
|
46,230,413
|
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java9 (JDK9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated with this. I have been using this program very often, for a very long time. Maybe I should never have moved to JDK9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
Jython developers do not have any practical solution for JDK9, according to this post: <http://bugs.jython.org/issue2582>.
The previous explanation is very long for figuring out what should be done. I just want JDK9 to behave exactly as JDK 1.4 - 1.8 did, i.e. be totally silent. The JVM's strength is backward compatibility. I'm totally OK with having additional options in JDK9, but new features cannot break applications.
|
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are 2 examples:
Change the version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
46,230,413
|
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java9 (JDK9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated with this. I have been using this program very often, for a very long time. Maybe I should never have moved to JDK9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are 2 examples:
Change the version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
Came here while working on a Kotlin Spring project. Resolved the issue by:
```
cd /project/root/
touch .mvn/jvm.config
echo "--illegal-access=permit" >> .mvn/jvm.config
```
|
46,230,413
|
I'm trying to run DMelt (<http://jwork.org/dmelt/>) programs using Java 9 (JDK 9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated by this. I have used this program very often, for a very long time. Maybe I should never have moved to JDK 9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
The *ideal* way to resolve this would be to follow the warning's advice:
>
> **reporting this to the maintainers of org.python.core.PySystemState**
>
>
>
and ask the maintainers to fix such reflective access going forward.
---
>
> If the default mode permits illegal reflective access, however, then
> it's essential to make that known so that people aren't surprised when
> this is no longer the default mode in a future release.
>
>
>
From one of the [threads on the mailing list](http://mail.openjdk.java.net/pipermail/jigsaw-dev/2017-May/012673.html) :
```
--illegal-access=permit
```
>
> This will be the **default** mode for JDK 9. It opens every package in
> every explicit module to code in all unnamed modules, i.e., code on
> the class path, just as `--permit-illegal-access` does today.
>
>
>
>
> The first illegal reflective-access operation causes a warning to be
> issued, as with `--permit-illegal-access`, but no warnings are issued
> after that point. This single warning will describe how to enable
> further warnings.
>
>
>
```
--illegal-access=deny
```
>
> This disables all illegal reflective-access operations except for
> those enabled by other command-line options, such as `--add-opens`.
> This **will become the default** mode in a future release.
>
>
>
Warning messages in any mode can be avoided, as before, by the judicious use of the `--add-exports` and `--add-opens` options.
---
Hence, a temporary solution currently available is to use `--add-exports` as a VM argument, as mentioned in the [docs](https://docs.oracle.com/javase/9/tools/java.htm#JSWOR624):
```
--add-exports module/package=target-module(,target-module)*
```
>
> Updates module to **`export`** package to `target-module`, regardless of
> module declaration. The `target-module` can be `ALL-UNNAMED` to export to
> all unnamed modules.
>
>
>
This would allow the `target-module` to access all public types in `package`. In case you want to access JDK-internal classes that would still be encapsulated, you would have to allow *deep reflection* using the `--add-opens` argument:
```
--add-opens module/package=target-module(,target-module)*
```
>
> Updates module to **`open`** package to `target-module`, regardless of module
> declaration.
>
>
>
In your case, to access `java.io.Console` for now, you can simply add this as a VM option:
```
--add-opens java.base/java.io=ALL-UNNAMED
```
---
Also, note from the same thread linked above:
*When `deny` becomes the default mode then I expect `permit` to remain supported for at least one release so that developers can continue to migrate their code. The `permit`, `warn`, and `debug` modes will, over time, be removed, as will the `--illegal-access` option itself.*
So it's better to change the implementation and follow the ideal solution.
|
DMelt seems to use Jython and this warning is something that the Jython maintainers will need to address. There is an issue tracking it here:
<http://bugs.jython.org/issue2582>
|
46,230,413
|
I'm trying to run DMelt (<http://jwork.org/dmelt/>) programs using Java 9 (JDK 9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated by this. I have used this program very often, for a very long time. Maybe I should never have moved to JDK 9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
To avoid this error, you need to pin `maven-war-plugin` to a newer version. For example:
```xml
<plugins>
. . .
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.2.2</version>
</plugin>
</plugins>
```
---
Works for `jdk-12`.
|
Some recent feedback.
As stated in the Java warning message:
```
WARNING: All illegal access operations will be denied in a future release
```
This future release is JDK 17, where the launcher parameter `--illegal-access` will stop working.
More information direct from Oracle can be found here: [JEP 403 link 1](https://openjdk.java.net/jeps/403) and [JEP 403 link 2](https://bugs.openjdk.java.net/browse/JDK-8263547)
>
> With this change, **it will no longer be possible for end users to use
> the --illegal-access option** to enable access to internal elements of
> the JDK. (A list of the packages affected is available here.) The
> sun.misc and sun.reflect packages will still be exported by the
> jdk.unsupported module, and will still be open so that code can access
> their non-public elements via reflection. No other JDK packages will
> be open in this way.
>
>
> **It will still be possible to use the --add-opens**
> command-line option, or the Add-Opens
> JAR-file manifest attribute, to open specific packages.
>
>
>
So the solution `--add-opens module/package=target-module(,target-module)*` and the example `--add-opens java.base/java.io=ALL-UNNAMED` will keep working for JDK 17 and future releases, but `--illegal-access` will not.
|
46,230,413
|
I'm trying to run DMelt (<http://jwork.org/dmelt/>) programs using Java 9 (JDK 9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated by this. I have used this program very often, for a very long time. Maybe I should never have moved to JDK 9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are two examples:
Change version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
Perhaps the fix below works for Java 9 as well:
In my case the OpenJDK version was 10.0.2 and I got the same error ("An illegal reflective access operation has occurred"). I upgraded Maven to version 3.6.0 on Linux, and the problem was gone.
|
46,230,413
|
I'm trying to run DMelt (<http://jwork.org/dmelt/>) programs using Java 9 (JDK 9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated by this. I have used this program very often, for a very long time. Maybe I should never have moved to JDK 9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
DMelt seems to use Jython and this warning is something that the Jython maintainers will need to address. There is an issue tracking it here:
<http://bugs.jython.org/issue2582>
|
Some recent feedback.
As stated in the Java warning message:
```
WARNING: All illegal access operations will be denied in a future release
```
This future release is JDK 17, where the launcher parameter `--illegal-access` will stop working.
More information direct from Oracle can be found here: [JEP 403 link 1](https://openjdk.java.net/jeps/403) and [JEP 403 link 2](https://bugs.openjdk.java.net/browse/JDK-8263547)
>
> With this change, **it will no longer be possible for end users to use
> the --illegal-access option** to enable access to internal elements of
> the JDK. (A list of the packages affected is available here.) The
> sun.misc and sun.reflect packages will still be exported by the
> jdk.unsupported module, and will still be open so that code can access
> their non-public elements via reflection. No other JDK packages will
> be open in this way.
>
>
> **It will still be possible to use the --add-opens**
> command-line option, or the Add-Opens
> JAR-file manifest attribute, to open specific packages.
>
>
>
So the solution `--add-opens module/package=target-module(,target-module)*` and the example `--add-opens java.base/java.io=ALL-UNNAMED` will keep working for JDK 17 and future releases, but `--illegal-access` will not.
|
46,230,413
|
I'm trying to run DMelt (<http://jwork.org/dmelt/>) programs using Java 9 (JDK 9), and it gives me errors such as:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```
How can I fix it? I tried adding `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash on Linux), but this did not solve the problem. I'm very frustrated by this. I have used this program very often, for a very long time. Maybe I should never have moved to JDK 9.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46230413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8612074/"
] |
Since Java 9, the "illegal reflective access operation has occurred" warning occurs.
To remove the warning message, you can replace maven-compiler-plugin with maven-war-plugin and/or update maven-war-plugin to the latest version in your pom.xml. Following are two examples:
Change version from:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.4</version>
...
...
</plugin>
```
To:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
...
...
</plugin>
```
Change the artifactId and version From:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
```
TO:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>3.3.1</version>
<executions>
<execution>
<id>prepare-war</id>
<phase>prepare-package</phase>
<goals>
<goal>exploded</goal>
</goals>
</execution>
</executions>
</plugin>
```
When I re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" warning is gone.
|
Some recent feedback.
As stated in the Java warning message:
```
WARNING: All illegal access operations will be denied in a future release
```
This future release is JDK 17, where the launcher parameter `--illegal-access` will stop working.
More information direct from Oracle can be found here: [JEP 403 link 1](https://openjdk.java.net/jeps/403) and [JEP 403 link 2](https://bugs.openjdk.java.net/browse/JDK-8263547)
>
> With this change, **it will no longer be possible for end users to use
> the --illegal-access option** to enable access to internal elements of
> the JDK. (A list of the packages affected is available here.) The
> sun.misc and sun.reflect packages will still be exported by the
> jdk.unsupported module, and will still be open so that code can access
> their non-public elements via reflection. No other JDK packages will
> be open in this way.
>
>
> **It will still be possible to use the --add-opens**
> command-line option, or the Add-Opens
> JAR-file manifest attribute, to open specific packages.
>
>
>
So the solution `--add-opens module/package=target-module(,target-module)*` and the example `--add-opens java.base/java.io=ALL-UNNAMED` will keep working for JDK 17 and future releases, but `--illegal-access` will not.
|
17,586,599
|
Using win32com.client, I'm attempting to create a simple shortcut in a folder. I would like the shortcut to have arguments, but I keep getting the following error.
```
Traceback (most recent call last):
File "D:/Projects/Ms/ms.py", line 153, in <module>
scut.TargetPath = '"C:/python27/python.exe" "D:/Projects/Ms/msd.py" -b ' + str(loop7)
File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 570, in __setattr__
raise AttributeError("Property '%s.%s' can not be set." % (self._username_, attr))
AttributeError: Property '<unknown>.TargetPath' can not be set.
```
My code looks like this. I've tried multiple different variants but can't seem to get it right. What am I doing wrong?
```
ws = win32com.client.Dispatch("wscript.shell")
scut = ws.CreateShortcut("D:/Projects/Ms/TestDir/testlink.lnk")
scut.TargetPath = '"C:/python27/python.exe" "D:/Projects/Ms/msd.py" -b 0'
scut.Save()
```
|
2013/07/11
|
[
"https://Stackoverflow.com/questions/17586599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/721386/"
] |
Your code works for me without error. (Windows XP 32bit, Python 2.7.5, pywin32-216).
(I slightly modified your code because `TargetPath` should contain only the executable path.)
```
import win32com.client
ws = win32com.client.Dispatch("wscript.shell")
scut = ws.CreateShortcut('run_idle.lnk')
scut.TargetPath = '"c:/python27/python.exe"'
scut.Arguments = '-m idlelib.idle'
scut.Save()
```
I got AttributeError similar to yours when I tried following (assign list to `Arguments` property.)
```
>>> scut.Arguments = []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\python27\lib\site-packages\win32com\client\dynamic.py", line 570, in __setattr__
raise AttributeError("Property '%s.%s' can not be set." % (self._username_, attr))
AttributeError: Property '<unknown>.Arguments' can not be set.
```
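Applying that split to the original command line from the question might look like this (a sketch only; the paths are copied from the question and untested on other setups):
```
import win32com.client

ws = win32com.client.Dispatch("wscript.shell")
scut = ws.CreateShortcut("D:/Projects/Ms/TestDir/testlink.lnk")
scut.TargetPath = "C:/python27/python.exe"        # executable only
scut.Arguments = '"D:/Projects/Ms/msd.py" -b 0'   # script path and flags
scut.Save()
```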
|
"..TargetPath should contain only [an] executable path." is incorrect in two ways :
1. The target may also contain the executable's arguments.
For instance, I have a file [ D:\DATA\CCMD\Expl.CMD ] whose essential line of code is
START Explorer.exe "%Target%"
An example of its use is
D:\DATA\CCMD\Expl.CMD "D:\DATA\SYSTEM - NEW INSTALL PROGS"
This entire line is the "executable" you are referring to.
1. The target doesn't have to be an "executable" at all. It may be *any* file in which the OS can act upon, such as those file types whose default actions run executable with the files as its arguments, such as :
"My File.txt"
The "default action" on this file type is to open it with a text editor. The actual executable file run isn't explicit.
|
59,209,756
|
I'm new to Django 1.11 LTS and I've been trying to solve this error for a very long time. Here is my code where the error is occurring:
models.py:
```
class Build(models.Model):
    name = models.CharField(db_column="name", db_index=True, max_length=128)
    description = models.TextField(db_column="description", null=True, blank=True)
    created = models.DateTimeField(db_column="created", auto_now_add=True, blank=True)
    updated = models.DateTimeField(db_column="updated", auto_now=True, null=True)
    active = models.BooleanField(db_column="active", default=True)
    customer = models.ForeignKey(Customer, db_column="customer_id")

    class Meta(object):
        db_table = "customer_build"
        unique_together = ("name", "customer")

    def __unicode__(self):
        return u"%s [%s]" % (self.name, self.customer)

    def get(self, row, customer):
        build_name = row['build']
        return self._default_manager.filter(name=build_name, customer_id=customer.id).first()

    def add(self, row):
        pass
```
views.py block:
```
for row in others:
    rack_name = row['rack']
    build = Build().get(row, customer)
    try:
        rack = Rack().get(row, customer)
    except Exception as E:
        msg = {'exception': str(E), 'where': 'Non-server device portmap creation',
               'doing_what': 'Rack with name {} does not exist in build {}'.format(rack_name, build.name),
               'current_row': row, 'status': 417}
        log_it('An error occurred: {}'.format(msg))
        return JsonResponse(msg, status=417)
```
Error traceback:
```
File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "./customers/views.py", line 2513, in create_rack
add_rack_status = add_rack(customer, csv_file)
File "./customers/views.py", line 1803, in add_rack
build = Build().get(row,customer)
File "./customers/models.py", line 69, in get
return self._default_manager.filter(name = build_name, customer_id = customer.id).first()
AttributeError: 'Build' object has no attribute '_default_manager'
```
I'm trying to understand the issue so that I can fix it.
Thanks in advance.
Regards,
Bharath
|
2019/12/06
|
[
"https://Stackoverflow.com/questions/59209756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5368168/"
] |
<https://www.anylogic.com/files/anylogic-professional-8.3.3.exe>
For any version, just put in the version you want and you will likely be able to download it.
If you're using a Mac:
<https://www.anylogic.com/files/anylogic-professional-8.3.3.dmg>
|
In addition to Felipe's answer, you can always ask
>
> support@anylogic.com
>
>
>
if you need *very* old versions. I believe that AL7.x is not available online anymore but they happily send the installers if you need them.
|
7,454,590
|
I'm trying to unit test a handler with webapp2 and am running into what has to be just a stupid little error.
I'd like to be able to use `webapp2.uri_for` in the test, but I can't seem to do that:
```
def test_returns_200_on_home_page(self):
response = main.app.get_response(webapp2.uri_for('index'))
self.assertEqual(200, response.status_int)
```
If I just do `main.app.get_response('/')` it works just fine.
The exception received is:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 318, in run
testMethod()
File "tests.py", line 27, in test_returns_200_on_home_page
webapp2.uri_for('index')
File "/Users/.../webapp2_example/lib/webapp2.py", line 1671, in uri_for
return request.app.router.build(request, _name, args, kwargs)
File "/Users/.../webapp2_example/lib/webapp2_extras/local.py", line 173, in __getattr__
return getattr(self._get_current_object(), name)
File "/Users/.../webapp2_example/lib/webapp2_extras/local.py", line 136, in _get_current_object
raise RuntimeError('no object bound to %s' % self.__name__)
RuntimeError: no object bound to request
```
Is there some silly setup I'm missing?
|
2011/09/17
|
[
"https://Stackoverflow.com/questions/7454590",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/233242/"
] |
I think the only option is to set a dummy request just to be able to create URIs for the test:
```
def test_returns_200_on_home_page(self):
# Set a dummy request just to be able to use uri_for().
req = webapp2.Request.blank('/')
req.app = main.app
main.app.set_globals(app=main.app, request=req)
response = main.app.get_response(webapp2.uri_for('index'))
self.assertEqual(200, response.status_int)
```
Never use `set_globals()` outside of tests. It is called by the WSGI application to set the active app and request in a thread-safe manner.
|
`webapp2.uri_for()` assumes that you are in a web request context, and it fails because it cannot find the `request` object.
Instead of working around this, you could think of your application as a black box and call it using literal URIs, like the `'/'` you mention. After all, you want to simulate a normal web request, and a web browser will also use URIs and not internal routing shortcuts.
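A minimal sketch of that black-box style, reusing the test from the question (this assumes `main.app` exists as in the question):
```
def test_returns_200_on_home_page(self):
    # Hit the route by its literal URI, exactly as a browser would
    response = main.app.get_response('/')
    self.assertEqual(200, response.status_int)
```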
|
45,949,105
|
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use
```
pyinstaller --onedir <mainscript.py>
```
to create compiled application.
After the PyInstaller process completed, I ran the generated application in the dist folder, but it gave this error:
```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389:
Traceback (most recent call last):
File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\api.py", line 22, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```
I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed via pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under...)
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45949105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644284/"
] |
If you have a dark background in your application and want to use light colors for your ngx-charts, you can use this method. It uses the official code for the ngx dark theme and shows light colors for the chart labels. You can also change the color codes in the SCSS variables to suit your needs.
I solved it the way it is done on the official website. In your application's SCSS file for styles, add the following styles:
```
.dark {
/**
* Backgrounds
*/
$color-bg-darkest: #13141b;
$color-bg-darker: #1b1e27;
$color-bg-dark: #232837;
$color-bg-med: #2f3646;
$color-bg-light: #455066;
$color-bg-lighter: #5b6882;
/**
* Text
*/
$color-text-dark: #72809b;
$color-text-med-dark: #919db5;
$color-text-med: #A0AABE;
$color-text-med-light: #d9dce1;
$color-text-light: #f0f1f6;
$color-text-lighter: #fff;
background: $color-bg-darker;
.ngx-charts {
text {
fill: $color-text-med;
}
.tooltip-anchor {
fill: rgb(255, 255, 255);
}
.gridline-path {
stroke: $color-bg-med;
}
.refline-path {
stroke: $color-bg-light;
}
.reference-area {
fill: #fff;
}
.grid-panel {
&.odd {
rect {
fill: rgba(255, 255, 255, 0.05);
}
}
}
.force-directed-graph {
.edge {
stroke: $color-bg-light;
}
}
.number-card {
p {
color: $color-text-light;
}
}
.gauge {
.background-arc {
path {
fill: $color-bg-med;
}
}
.gauge-tick {
path {
stroke: $color-text-med;
}
text {
fill: $color-text-med;
}
}
}
.linear-gauge {
.background-bar {
path {
fill: $color-bg-med;
}
}
.units {
fill: $color-text-dark;
}
}
.timeline {
.brush-background {
fill: rgba(255, 255, 255, 0.05);
}
.brush {
.selection {
fill: rgba(255, 255, 255, 0.1);
stroke: #aaa;
}
}
}
.polar-chart .polar-chart-background {
fill: rgb(30, 34, 46);
}
}
.chart-legend {
.legend-labels {
background: rgba(255, 255, 255, 0.05) !important;
}
.legend-item {
&:hover {
color: #fff;
}
}
.legend-label {
&:hover {
color: #fff !important;
}
.active {
.legend-label-text {
color: #fff !important;
}
}
}
.scale-legend-label {
color: $color-text-med;
}
}
.advanced-pie-legend {
color: $color-text-med;
.legend-item {
&:hover {
color: #fff !important;
}
}
}
.number-card .number-card-label {
font-size: 0.8em;
color: $color-text-med;
}
}
```
Once this has been added, make sure you have this SCSS file linked in your angular.json file. After that you just have to add the class `dark` to the first wrapping component of your ngx chart, for example:
```
<div class="areachart-wrapper dark">
<ngx-charts-area-chart
[view]="view"
[scheme]="colorScheme"
[results]="data"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
[autoScale]="autoScale"
[curve]="curve"
(select)="onSelect($event)">
</ngx-charts-area-chart>
</div>
```
This will make your charts look exactly as shown on the official website with the dark theme for the charts: <https://swimlane.github.io/ngx-charts/#/ngx-charts/bar-vertical>.
[](https://i.stack.imgur.com/Gphgs.png)
|
Axis tick formatting can be done as shown here:
<https://github.com/swimlane/ngx-charts/blob/master/demo/app.component.html>
This demo has individual element classes.
|
45,949,105
|
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use
```
pyinstaller --onedir <mainscript.py>
```
to create compiled application.
After the PyInstaller process completed, I ran the generated application in the dist folder, but it gave this error:
```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389:
Traceback (most recent call last):
File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\api.py", line 22, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```
I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed via pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under...)
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45949105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644284/"
] |
**Finally**
I was struggling with this and found something neat: just add a line within the ngx tag. Hope this helps someone in the future.
[Problem Reference - github #540](https://github.com/swimlane/ngx-charts/issues/540)
`style="fill: #2B2B2B"`
```
<ngx-charts-bar-horizontal
[results]="results"
[scheme]="colorScheme"
[results]="single"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
(select)="onSelect($event)"
style="fill: #2B2B2B;"> <----------------------- HERE
</ngx-charts-bar-horizontal>
```
|
Axis tick formatting can be done as shown here:
<https://github.com/swimlane/ngx-charts/blob/master/demo/app.component.html>
This demo has individual element classes.
|
45,949,105
|
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use
```
pyinstaller --onedir <mainscript.py>
```
to create compiled application.
After the PyInstaller process completed, I ran the generated application in the dist folder, but it gave this error:
```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389:
Traceback (most recent call last):
File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\api.py", line 22, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```
I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed via pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under...)
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45949105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644284/"
] |
If you have a dark background in your application and want to use light colors for your ngx-charts, you can use this method. It uses the official code for the ngx dark theme and shows light colors for the chart labels. You can also change the color codes in the SCSS variables to suit your needs.
I solved it the way it is done on the official website. In your application's SCSS file for styles, add the following styles:
```
.dark {
/**
* Backgrounds
*/
$color-bg-darkest: #13141b;
$color-bg-darker: #1b1e27;
$color-bg-dark: #232837;
$color-bg-med: #2f3646;
$color-bg-light: #455066;
$color-bg-lighter: #5b6882;
/**
* Text
*/
$color-text-dark: #72809b;
$color-text-med-dark: #919db5;
$color-text-med: #A0AABE;
$color-text-med-light: #d9dce1;
$color-text-light: #f0f1f6;
$color-text-lighter: #fff;
background: $color-bg-darker;
.ngx-charts {
text {
fill: $color-text-med;
}
.tooltip-anchor {
fill: rgb(255, 255, 255);
}
.gridline-path {
stroke: $color-bg-med;
}
.refline-path {
stroke: $color-bg-light;
}
.reference-area {
fill: #fff;
}
.grid-panel {
&.odd {
rect {
fill: rgba(255, 255, 255, 0.05);
}
}
}
.force-directed-graph {
.edge {
stroke: $color-bg-light;
}
}
.number-card {
p {
color: $color-text-light;
}
}
.gauge {
.background-arc {
path {
fill: $color-bg-med;
}
}
.gauge-tick {
path {
stroke: $color-text-med;
}
text {
fill: $color-text-med;
}
}
}
.linear-gauge {
.background-bar {
path {
fill: $color-bg-med;
}
}
.units {
fill: $color-text-dark;
}
}
.timeline {
.brush-background {
fill: rgba(255, 255, 255, 0.05);
}
.brush {
.selection {
fill: rgba(255, 255, 255, 0.1);
stroke: #aaa;
}
}
}
.polar-chart .polar-chart-background {
fill: rgb(30, 34, 46);
}
}
.chart-legend {
.legend-labels {
background: rgba(255, 255, 255, 0.05) !important;
}
.legend-item {
&:hover {
color: #fff;
}
}
.legend-label {
&:hover {
color: #fff !important;
}
.active {
.legend-label-text {
color: #fff !important;
}
}
}
.scale-legend-label {
color: $color-text-med;
}
}
.advanced-pie-legend {
color: $color-text-med;
.legend-item {
&:hover {
color: #fff !important;
}
}
}
.number-card .number-card-label {
font-size: 0.8em;
color: $color-text-med;
}
}
```
Once this has been added, make sure you have this SCSS file linked in your angular.json file. After that you just have to add the class `dark` to the first wrapping component of your ngx chart, for example:
```
<div class="areachart-wrapper dark">
<ngx-charts-area-chart
[view]="view"
[scheme]="colorScheme"
[results]="data"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
[autoScale]="autoScale"
[curve]="curve"
(select)="onSelect($event)">
</ngx-charts-area-chart>
</div>
```
This will make your charts look exactly as shown on the official website with the dark theme for the charts: <https://swimlane.github.io/ngx-charts/#/ngx-charts/bar-vertical>.
[](https://i.stack.imgur.com/Gphgs.png)
|
According to this [GitHub issue](https://github.com/swimlane/ngx-charts/issues/540) you might use the following CSS to style the labels (worked for me):
```
.ngx-charts text { fill: #fff; }
```
The xAxisTickFormatting/yAxisTickFormatting you've mentioned can be used to supply a formatting function that generates the content of the labels (e.g. if you have dates in your data, you may set a formatting function that shows the date in a custom format, like 'HH:mm').
|
45,949,105
|
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use
```
pyinstaller --onedir <mainscript.py>
```
to create compiled application.
After the PyInstaller process completed, I ran the generated application in the dist folder, but it gave this error:
```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389:
Traceback (most recent call last):
File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\api.py", line 22, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```
I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed via pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under...)
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45949105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644284/"
] |
**Finally**
I was struggling with this and found something neat: just add a line within the ngx tag. Hope this helps someone in the future.
[Problem Reference - github #540](https://github.com/swimlane/ngx-charts/issues/540)
`style="fill: #2B2B2B"`
```
<ngx-charts-bar-horizontal
[results]="results"
[scheme]="colorScheme"
[results]="single"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
(select)="onSelect($event)"
style="fill: #2B2B2B;"> <----------------------- HERE
</ngx-charts-bar-horizontal>
```
|
According to this [GitHub issue](https://github.com/swimlane/ngx-charts/issues/540) you might use the following CSS to style the labels (worked for me):
```
.ngx-charts text { fill: #fff; }
```
The xAxisTickFormatting/yAxisTickFormatting you've mentioned can be used to supply a formatting function that generates the content of the labels (e.g. if you have dates in your data, you may set a formatting function that shows the date in a custom format, like 'HH:mm').
|
45,949,105
|
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use
```
pyinstaller --onedir <mainscript.py>
```
to create compiled application.
After the PyInstaller process completed, I ran the generated application in the dist folder, but it gave this error:
```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389:
Traceback (most recent call last):
File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\api.py", line 22, in <module>
File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py",
line 389, in load_module
File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```
I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed via pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under...)
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45949105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7644284/"
] |
If you have a dark background in your application and want to use light colors for your ngx-charts, you can use this method. It uses the official code for the ngx dark theme and shows light colors for the chart labels. You can also change the color codes in the SCSS variables to suit your needs.
I solved it the way it is done on the official website. In your application's SCSS file for styles, add the following styles:
```
.dark {
/**
* Backgrounds
*/
$color-bg-darkest: #13141b;
$color-bg-darker: #1b1e27;
$color-bg-dark: #232837;
$color-bg-med: #2f3646;
$color-bg-light: #455066;
$color-bg-lighter: #5b6882;
/**
* Text
*/
$color-text-dark: #72809b;
$color-text-med-dark: #919db5;
$color-text-med: #A0AABE;
$color-text-med-light: #d9dce1;
$color-text-light: #f0f1f6;
$color-text-lighter: #fff;
background: $color-bg-darker;
.ngx-charts {
text {
fill: $color-text-med;
}
.tooltip-anchor {
fill: rgb(255, 255, 255);
}
.gridline-path {
stroke: $color-bg-med;
}
.refline-path {
stroke: $color-bg-light;
}
.reference-area {
fill: #fff;
}
.grid-panel {
&.odd {
rect {
fill: rgba(255, 255, 255, 0.05);
}
}
}
.force-directed-graph {
.edge {
stroke: $color-bg-light;
}
}
.number-card {
p {
color: $color-text-light;
}
}
.gauge {
.background-arc {
path {
fill: $color-bg-med;
}
}
.gauge-tick {
path {
stroke: $color-text-med;
}
text {
fill: $color-text-med;
}
}
}
.linear-gauge {
.background-bar {
path {
fill: $color-bg-med;
}
}
.units {
fill: $color-text-dark;
}
}
.timeline {
.brush-background {
fill: rgba(255, 255, 255, 0.05);
}
.brush {
.selection {
fill: rgba(255, 255, 255, 0.1);
stroke: #aaa;
}
}
}
.polar-chart .polar-chart-background {
fill: rgb(30, 34, 46);
}
}
.chart-legend {
.legend-labels {
background: rgba(255, 255, 255, 0.05) !important;
}
.legend-item {
&:hover {
color: #fff;
}
}
.legend-label {
&:hover {
color: #fff !important;
}
.active {
.legend-label-text {
color: #fff !important;
}
}
}
.scale-legend-label {
color: $color-text-med;
}
}
.advanced-pie-legend {
color: $color-text-med;
.legend-item {
&:hover {
color: #fff !important;
}
}
}
.number-card .number-card-label {
font-size: 0.8em;
color: $color-text-med;
}
}
```
Once this has been added, make sure you have this SCSS file linked in your angular.json file. After that you just have to add the class `dark` to the first wrapping component of your ngx chart, for example:
```
<div class="areachart-wrapper dark">
<ngx-charts-area-chart
[view]="view"
[scheme]="colorScheme"
[results]="data"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
[autoScale]="autoScale"
[curve]="curve"
(select)="onSelect($event)">
</ngx-charts-area-chart>
</div>
```
This will make your charts look exactly as shown on the official website with the dark theme for the charts: <https://swimlane.github.io/ngx-charts/#/ngx-charts/bar-vertical>.
[](https://i.stack.imgur.com/Gphgs.png)
|
**Finally**
I was struggling with this and found something neat: just add a line within the ngx tag. Hope this helps someone in the future.
[Problem Reference - github #540](https://github.com/swimlane/ngx-charts/issues/540)
`style="fill: #2B2B2B"`
```
<ngx-charts-bar-horizontal
[results]="results"
[scheme]="colorScheme"
[results]="single"
[gradient]="gradient"
[xAxis]="showXAxis"
[yAxis]="showYAxis"
[legend]="showLegend"
[showXAxisLabel]="showXAxisLabel"
[showYAxisLabel]="showYAxisLabel"
[xAxisLabel]="xAxisLabel"
[yAxisLabel]="yAxisLabel"
(select)="onSelect($event)"
style="fill: #2B2B2B;"> <----------------------- HERE
</ngx-charts-bar-horizontal>
```
|
30,489,449
|
How can I see a warning again without restarting Python? Currently I see warnings only once.
Consider this code for example:
```
import pandas as pd
pd.Series([1]) / 0
```
I get
```
RuntimeWarning: divide by zero encountered in true_divide
```
But when I run it again it executes silently.
**How can I see the warning again without restarting Python?**
---
I have tried to do
```
del __warningregistry__
```
but that doesn't help.
It seems like only some types of warnings are stored there.
For example if I do:
```
def f():
X = pd.DataFrame(dict(a=[1,2,3],b=[4,5,6]))
Y = X.iloc[:2]
Y['c'] = 8
```
then this will raise a warning only the first time `f()` is called.
However, now if I do `del __warningregistry__` I can see the warning again.
---
What is the difference between the first and the second warning? Why is only the second one stored in this `__warningregistry__`? Where is the first one stored?
|
2015/05/27
|
[
"https://Stackoverflow.com/questions/30489449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3549680/"
] |
>
> How can I see the warning again without restarting python?
>
>
>
As long as you do the following at the beginning of your script, you will not need to restart.
```
import pandas as pd
import numpy as np
import warnings
np.seterr(all='warn')
warnings.simplefilter("always")
```
At this point every time you attempt to divide by zero, it will display
```
RuntimeWarning: divide by zero encountered in true_divide
```
---
Explanation:
We are setting up a couple warning filters. The first ([`np.seterr`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html)) is telling NumPy how it should handle warnings. I have set it to show warnings on *all*, but if you are only interested in seeing the Divide by zero warnings, change the parameter from `all` to `divide`.
Next we change how we want the `warnings` module to always display warnings. We do this by setting up a [warning filter](https://docs.python.org/2/library/warnings.html#the-warnings-filter).
>
> What is the difference between the first and the second warning? Why is only the second one stored in this `__warningregistry__`? Where is the first one stored?
>
>
>
This is described in the [bug report](https://bugs.python.org/msg75117) reporting this issue:
>
> If you didn't raise the warning before using the simple filter, this
> would have worked. The undesired behavior is because of
> `__warningsregistry__`. It is set the first time the warning is emitted.
> When the second warning comes through, the filter isn't even looked at.
> I think the best way to fix this is to invalidate `__warningsregistry__`
> when a filter is used. It would probably be best to store warnings data
> in a global then instead of on the module, so it is easy to invalidate.
>
>
>
Incidentally, the [bug](https://bugs.python.org/issue4180) has been closed as fixed for versions 3.4 and 3.5.
|
`warnings` is a pretty awesome standard library module. You're going to enjoy getting to know it :)
A little background
-------------------
The default behavior of `warnings` is to only show a particular warning, coming from a particular line, on its first occurrence. For instance, the following code will result in two warnings shown to the user:
```py
import numpy as np
# 10 warnings, but only the first copy will be shown
for i in range(10):
np.true_divide(1, 0)
# This is on a separate line from the other "copies", so its warning will show
np.true_divide(1, 0)
```
You have a few options to change this behavior.
Option 1: Reset the warnings registry
-------------------------------------
When you want Python to "forget" what warnings you've seen before, you can use [`resetwarnings`](https://docs.python.org/3/library/warnings.html#warnings.resetwarnings):
```py
# warns every time, because the warnings registry has been reset
for i in range(10):
warnings.resetwarnings()
np.true_divide(1, 0)
```
Note that this also resets any warning configuration changes you've made. Which brings me to...
Option 2: Change the warnings configuration
-------------------------------------------
The [warnings module documentation](https://docs.python.org/3/library/warnings.html) covers this in greater detail, but one straightforward option is just to use a `simplefilter` to change that default behavior.
```py
import warnings
import numpy as np
# Show all warnings
warnings.simplefilter('always')
for i in range(10):
# Now this will warn every loop
np.true_divide(1, 0)
```
Since this is a global configuration change, it has global effects which you'll likely want to avoid (all warnings anywhere in your application will show every time). A less drastic option is to use the context manager:
```py
with warnings.catch_warnings():
warnings.simplefilter('always')
for i in range(10):
# This will warn every loop
np.true_divide(1, 0)
# Back to normal behavior: only warn once
for i in range(10):
np.true_divide(1, 0)
```
There are also more granular options for changing the configuration on specific types of warnings. For that, check out the [docs](https://docs.python.org/3/library/warnings.html#overriding-the-default-filter).
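If the goal is to inspect warnings programmatically (for example in a test) rather than see them printed, `catch_warnings(record=True)` combines both ideas; a minimal sketch:
```py
import warnings
import numpy as np

# Record every warning raised inside the block instead of printing it
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    np.true_divide(1, 0)

assert any('divide by zero' in str(w.message) for w in caught)
```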
|
15,784,537
|
Purpose: Given a PDB file, prints out all pairs of cysteine residues forming disulfide bonds in the tertiary protein structure. Licence: GNU GPL. Written by: Eric Miller.
```
#!/usr/bin/env python
import math
def getDistance((x1,y1,z1),(x2,y2,z2)):
d = math.sqrt(pow((x1-x2),2)+pow((y1-y2),2)+pow((z1-z2),2));
return round(d,3);
def prettyPrint(dsBonds):
print "Residue 1\tResidue 2\tDistance";
for (r1,r2,d) in dsBonds:
print " {0}\t\t {1}\t\t {2}".format(r1,r2,d);
def main():
pdbFile = open('2v5t.pdb','r');
maxBondDist = 2.5;
isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG");
cysLines = [line for line in pdbFile if isCysLine(line)];
pdbFile.close();
getCoords = lambda line:(float(line[31:38]),
float(line[39:46]),float(line[47:54]));
cysCoords = map(getCoords, cysLines);
dsBonds = [];
for i in range(len(cysCoords)-1):
for j in range(i+1,len(cysCoords)):
dist = getDistance(cysCoords[i],cysCoords[j]);
residue1 = int(cysLines[i][23:27]);
residue2 = int(cysLines[j][23:27]);
if (dist < maxBondDist):
dsBonds.append((residue1,residue2,dist));
prettyPrint(dsBonds);
if __name__ == "__main__":
main()
```
When I try to run this script I get an indentation error. I have 2v5t.pdb (required to run the above script) in my working directory. Any solution?
|
2013/04/03
|
[
"https://Stackoverflow.com/questions/15784537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2176228/"
] |
For me the indentation is broken within `prettyPrint` and in `__main__`. Also, there's no need to use `;`. Try this:
```
#!/usr/bin/env python
import math
# Input: Two 3D points of the form (x,y,z).
# Output: Euclidean distance between the points.
def getDistance((x1, y1, z1), (x2, y2, z2)):
d = math.sqrt(pow((x1 - x2), 2) + pow((y1 - y2), 2) + pow((z1 - z2), 2))
return round(d, 3)
# Purpose: Prints a list of 3-tuples (r1,r2,d). R1 and r2 are
# residue numbers, and d is the distance between their respective
# gamma sulfur atoms.
def prettyPrint(dsBonds):
print "Residue 1\tResidue 2\tDistance"
for r1, r2, d in dsBonds:
print " {0}\t\t {1}\t\t {2}".format(r1, r2, d)
# Purpose: Find all pairs of cysteine residues whose gamma sulfur atoms
# are within maxBondDist of each other.
def main():
pdbFile = open('2v5t.pdb','r')
#Max distance to consider a disulfide bond.
maxBondDist = 2.5
# Anonymous function to check if a line from the PDB file is a gamma
# sulfur atom from a cysteine residue.
isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG")
cysLines = [line for line in pdbFile if isCysLine(line)]
pdbFile.close()
# Anonymous function to get (x,y,z) coordinates in angstroms for
# the location of a cysteine residue's gamma sulfur atom.
getCoords = lambda line:(float(line[31:38]),
float(line[39:46]), float(line[47:54]))
cysCoords = map(getCoords, cysLines)
# Make a list of all residue pairs classified as disulfide bonds.
dsBonds = []
for i in range(len(cysCoords)-1):
for j in range(i+1, len(cysCoords)):
dist = getDistance(cysCoords[i], cysCoords[j])
residue1 = int(cysLines[i][23:27])
residue2 = int(cysLines[j][23:27])
if dist < maxBondDist:
dsBonds.append((residue1,residue2,dist))
prettyPrint(dsBonds)
if __name__ == "__main__":
main()
```
|
This:
```
if __name__ == "__main__":
main()
```
Should be:
```
if __name__ == "__main__":
main()
```
Also, the Python interpreter will give you information on the IndentationError *down to the line*. I strongly suggest reading the error messages provided, as developers write them for a reason.
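If the message alone doesn't reveal the cause, mixed tabs and spaces are a frequent culprit; the standard-library `tabnanny` module can locate ambiguous indentation (the file name here is just an example):
```
# From a shell: python -m tabnanny script.py
import tabnanny

tabnanny.check('script.py')  # prints any ambiguously indented lines
```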
|
15,784,537
|
Purpose: Given a PDB file, prints out all pairs of cysteine residues forming disulfide bonds in the tertiary protein structure. Licence: GNU GPL. Written by: Eric Miller.
```
#!/usr/bin/env python
import math
def getDistance((x1,y1,z1),(x2,y2,z2)):
d = math.sqrt(pow((x1-x2),2)+pow((y1-y2),2)+pow((z1-z2),2));
return round(d,3);
def prettyPrint(dsBonds):
print "Residue 1\tResidue 2\tDistance";
for (r1,r2,d) in dsBonds:
print " {0}\t\t {1}\t\t {2}".format(r1,r2,d);
def main():
pdbFile = open('2v5t.pdb','r');
maxBondDist = 2.5;
isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG");
cysLines = [line for line in pdbFile if isCysLine(line)];
pdbFile.close();
getCoords = lambda line:(float(line[31:38]),
float(line[39:46]),float(line[47:54]));
cysCoords = map(getCoords, cysLines);
dsBonds = [];
for i in range(len(cysCoords)-1):
for j in range(i+1,len(cysCoords)):
dist = getDistance(cysCoords[i],cysCoords[j]);
residue1 = int(cysLines[i][23:27]);
residue2 = int(cysLines[j][23:27]);
if (dist < maxBondDist):
dsBonds.append((residue1,residue2,dist));
prettyPrint(dsBonds);
if __name__ == "__main__":
main()
```
When I try to run this script I get an indentation error. I have 2v5t.pdb (required to run the above script) in my working directory. Any solution?
|
2013/04/03
|
[
"https://Stackoverflow.com/questions/15784537",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2176228/"
] |
For me the indentation is broken within `prettyPrint` and in `__main__`. Also, there's no need to use `;`. Try this:
```
#!/usr/bin/env python
import math
# Input: Two 3D points of the form (x,y,z).
# Output: Euclidean distance between the points.
def getDistance((x1, y1, z1), (x2, y2, z2)):
d = math.sqrt(pow((x1 - x2), 2) + pow((y1 - y2), 2) + pow((z1 - z2), 2))
return round(d, 3)
# Purpose: Prints a list of 3-tuples (r1,r2,d). R1 and r2 are
# residue numbers, and d is the distance between their respective
# gamma sulfur atoms.
def prettyPrint(dsBonds):
print "Residue 1\tResidue 2\tDistance"
for r1, r2, d in dsBonds:
print " {0}\t\t {1}\t\t {2}".format(r1, r2, d)
# Purpose: Find all pairs of cysteine residues whose gamma sulfur atoms
# are within maxBondDist of each other.
def main():
pdbFile = open('2v5t.pdb','r')
#Max distance to consider a disulfide bond.
maxBondDist = 2.5
# Anonymous function to check if a line from the PDB file is a gamma
# sulfur atom from a cysteine residue.
isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG")
cysLines = [line for line in pdbFile if isCysLine(line)]
pdbFile.close()
# Anonymous function to get (x,y,z) coordinates in angstroms for
# the location of a cysteine residue's gamma sulfur atom.
getCoords = lambda line:(float(line[31:38]),
float(line[39:46]), float(line[47:54]))
cysCoords = map(getCoords, cysLines)
# Make a list of all residue pairs classified as disulfide bonds.
dsBonds = []
for i in range(len(cysCoords)-1):
for j in range(i+1, len(cysCoords)):
dist = getDistance(cysCoords[i], cysCoords[j])
residue1 = int(cysLines[i][23:27])
residue2 = int(cysLines[j][23:27])
if dist < maxBondDist:
dsBonds.append((residue1,residue2,dist))
prettyPrint(dsBonds)
if __name__ == "__main__":
main()
```
|
You didn't say where the error was flagged, but:
```
if __name__ == "__main__":
main()
```
Should be:
```
if __name__ == "__main__":
main()
```
|
72,060,798
|
In Python I am trying to look up the relevant price, depending on quantity, from a list of scale prices. For example, when getting a quotation request:
```
Product Qty Price
0 A 6
1 B 301
2 C 1
3 D 200
4 E 48
```
Price list with scale prices:
```
Product Scale Qty Scale Price
0 A 1 48
1 A 5 43
2 A 50 38
3 B 1 10
4 B 10 9
5 B 50 7
6 B 100 5
7 B 150 2
8 C 1 300
9 C 2 250
10 C 3 200
11 D 1 5
12 D 100 3
13 D 200 1
14 E 1 100
15 E 10 10
16 E 100 1
```
Output that I would like:
```
Product Qty Price
0 A 6 43
1 B 301 2
2 C 1 300
3 D 200 1
4 E 48 10
```
|
2022/04/29
|
[
"https://Stackoverflow.com/questions/72060798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18991198/"
] |
Try with `merge_asof`:
```
output = (pd.merge_asof(df2.sort_values("Qty"),df1.sort_values("Scale Qty"),left_on="Qty",right_on="Scale Qty",by="Product")
.sort_values("Product", ignore_index=True)
.drop("Scale Qty", axis=1)
.rename(columns={"Scale Price":"Price"}))
>>> output
Product Qty Price
0 A 6 43
1 B 301 2
2 C 1 300
3 D 200 1
4 E 48 10
```
###### Inputs:
```
df1 = pd.DataFrame({'Product': ['A','A','A','B','B','B','B','B','C','C','C','D','D','D','E','E','E'],
'Scale Qty': [1, 5, 50, 1, 10, 50, 100, 150, 1, 2, 3, 1, 100, 200, 1, 10, 100],
'Scale Price': [48, 43, 38, 10, 9, 7, 5, 2, 300, 250, 200, 5, 3, 1, 100, 10, 1]})
df2 = pd.DataFrame({"Product": list("ABCDE"),
"Qty": [6,301,1,200,48]})
```
|
Assuming `df1` holds the quotation request and `df2` the scale prices, use `merge_asof`:
```
pd.merge_asof(df1.sort_values(by='Qty'),
df2.sort_values(by='Scale Qty').rename(columns={'Scale Price': 'Price'}),
by='Product', left_on='Qty', right_on='Scale Qty')
```
output:
```
Product Qty Scale Qty Price
0 C 1 1 300
1 A 6 5 43
2 E 48 10 10
3 D 200 200 1
4 B 301 150 2
```
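If you want the same tidy shape as the expected output, a small follow-up sketch (assuming the merge result above was assigned to a variable `out`, a name introduced here for illustration) drops the helper column and restores product order:
```
# Drop the merge key and sort back by product.
out = (out.drop(columns="Scale Qty")
          .sort_values("Product", ignore_index=True))
```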
|
58,143,742
|
I'm working on a project using Keras (Python 3), and I've encountered a problem: I've installed TensorFlow using pip and imported it into my project, but whenever I try to run it, I get an error saying:
```
ModuleNotFoundError: No module named 'tensorflow'
```
It seems my installation completed successfully, and I think I have the right PATH, since I installed a few other things such as numpy and their installations worked well. Does anyone have a clue what I did wrong?
Thank you!
|
2019/09/28
|
[
"https://Stackoverflow.com/questions/58143742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9126289/"
] |
Use [`Series.str.replace`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html) to replace each `uppercase` letter with the same value preceded by a space, and then strip the leading space:
```
df = pd.DataFrame({'U.N.Region':['WestAfghanistan','NorthEastAfghanistan']})
df['U.N.Region'] = df['U.N.Region'].str.replace( r"([A-Z])", r" \1").str.strip()
print (df)
U.N.Region
0 West Afghanistan
1 North East Afghanistan
```
|
Another option would be,
```
import pandas as pd
import re
df = pd.DataFrame({'U.N.Region': ['WestAfghanistan', 'NorthEastAfghanistan']})
df['U.N.Region'] = df['U.N.Region'].str.replace(
r"(?<=[a-z])(?=[A-Z])", " ")
print(df)
```
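The lookbehind/lookahead pair matches the zero-width position between a lowercase and an uppercase letter, so a space is inserted without consuming any characters. A quick standalone illustration with plain `re`:
```
import re

# Only the empty boundary between [a-z] and [A-Z] matches, so the
# letters themselves are left untouched.
print(re.sub(r"(?<=[a-z])(?=[A-Z])", " ", "NorthEastAfghanistan"))
# -> North East Afghanistan
```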
|
58,143,742
|
I'm working on a project using Keras (Python 3), and I've encountered a problem: I've installed TensorFlow using pip and imported it into my project, but whenever I try to run it, I get an error saying:
```
ModuleNotFoundError: No module named 'tensorflow'
```
It seems my installation completed successfully, and I think I have the right PATH, since I installed a few other things such as numpy and their installations worked well. Does anyone have a clue what I did wrong?
Thank you!
|
2019/09/28
|
[
"https://Stackoverflow.com/questions/58143742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9126289/"
] |
Use [`Series.str.replace`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html) to replace each `uppercase` letter with the same value preceded by a space, and then strip the leading space:
```
df = pd.DataFrame({'U.N.Region':['WestAfghanistan','NorthEastAfghanistan']})
df['U.N.Region'] = df['U.N.Region'].str.replace( r"([A-Z])", r" \1").str.strip()
print (df)
U.N.Region
0 West Afghanistan
1 North East Afghanistan
```
|
Yet another solution:
```
df.apply(lambda col: col.str.replace(r"([a-z])([A-Z])",r"\1 \2"))
Out:
U.N. Region Centers
0 North East Afghanistan Fayzabad
1 West Afghanistan Qala Naw
```
|
58,143,742
|
I'm working on a project using Keras (Python 3), and I've encountered a problem: I've installed TensorFlow using pip and imported it into my project, but whenever I try to run it, I get an error saying:
```
ModuleNotFoundError: No module named 'tensorflow'
```
It seems my installation completed successfully, and I think I have the right PATH, since I installed a few other things such as numpy and their installations worked well. Does anyone have a clue what I did wrong?
Thank you!
|
2019/09/28
|
[
"https://Stackoverflow.com/questions/58143742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9126289/"
] |
Yet another solution:
```
df.apply(lambda col: col.str.replace(r"([a-z])([A-Z])",r"\1 \2"))
Out:
U.N. Region Centers
0 North East Afghanistan Fayzabad
1 West Afghanistan Qala Naw
```
|
Another option would be,
```
import pandas as pd
import re
df = pd.DataFrame({'U.N.Region': ['WestAfghanistan', 'NorthEastAfghanistan']})
df['U.N.Region'] = df['U.N.Region'].str.replace(
r"(?<=[a-z])(?=[A-Z])", " ")
print(df)
```
|
40,138,090
|
My data is organized in a dataframe:
```
import pandas as pd
import numpy as np
data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4'])
```
Which looks like this (only much bigger):
```
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
```
My algorithm loops through this table rows and performs a set of operations.
For cleanliness'/laziness' sake, I would like to work on a single row at each iteration, without typing `df.loc['row index', 'column name']` to get each cell value.
I have tried to follow the [right style](http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy) using for example:
```
row_of_interest = df.loc['R2', :]
```
However, I still get the warning when I do:
```
row_of_interest['Col2'] = row_of_interest['Col2'] + 1000
SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
```
And it is not working (as I intended); it is making a copy:
```
print df
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
```
Any advice on the proper way to do it? Or should I just stick to working with the data frame directly?
Edit 1:
Using the replies provided, the warning is removed from the code, but the original dataframe is not modified: the "row of interest" `Series` is a copy, not part of the original dataframe. For example:
```
import pandas as pd
import numpy as np
data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4'])
row_of_interest = df.loc['R2']
row_of_interest.is_copy = False
new_cell_value = row_of_interest['Col2'] + 1000
row_of_interest['Col2'] = new_cell_value
print row_of_interest
Col1 5
Col2 1020
Col3 50
Col4 BBB
Name: R2, dtype: object
print df
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
```
Edit 2:
This is an example of the functionality I would like to replicate. In python a list of lists looks like:
```
a = [[1,2,3],[4,5,6]]
```
Now I can create a "label"
```
b = a[0]
```
And if I change an entry in b:
```
b[0] = 7
```
Both a and b change.
```
print a, b
[[7,2,3],[4,5,6]], [7,2,3]
```
Can this behavior be replicated with a pandas DataFrame, by labeling one of its rows as a pandas Series?
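(For comparison, a minimal pandas sketch of the contrast — whether a selected row `Series` is a view or a copy is not guaranteed, but writing through `df.loc` itself does edit the frame:)
```
import pandas as pd

df = pd.DataFrame({'Col2': [10, 20], 'Col4': ['AAA', 'BBB']},
                  index=['R1', 'R2'])

row = df.loc['R2']            # with mixed dtypes this is a copy
row['Col2'] = 1020            # changes only the copy

df.loc['R2', 'Col2'] = 1020   # writing through df.loc edits df itself
print(df)                     # R2 / Col2 is now 1020
```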
|
2016/10/19
|
[
"https://Stackoverflow.com/questions/40138090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2301970/"
] |
The tint property does not affect the color of the title. To set the title color (along with other attributes like font) globally, you can set the `titleTextAttributes` property of the `UINavigationBar` appearance to suit your needs. Just place this code in your AppDelegate or somewhere else appropriate that gets called on launch:
**Swift 3:**
```
UINavigationBar.appearance().titleTextAttributes = [NSForegroundColorAttributeName: UIColor.white]
```
**Swift 2**
```
UINavigationBar.appearance().titleTextAttributes = [NSForegroundColorAttributeName: UIColor.whiteColor()]
```
|
No, you are doing it correctly, but you should also set the color for the second view. You can use this code to solve your problem.
In the second view, write this code to set the color and font for your navigation title:
```
navigationController!.navigationBar.titleTextAttributes = ([NSFontAttributeName: UIFont(name: "Helvetica", size: 25)!, NSForegroundColorAttributeName: UIColor.white])
```
|
29,035,115
|
I am working with an existing SQLite database and experiencing errors due to the data being encoded in CP-1252, when Python is expecting it to be UTF-8.
```
>>> import sqlite3
>>> conn = sqlite3.connect('dnd.sqlite')
>>> curs = conn.cursor()
>>> result = curs.execute("SELECT * FROM dnd_characterclass WHERE id=802")
Traceback (most recent call last):
File "<input>", line 1, in <module>
OperationalError: Could not decode to UTF-8 column 'short_description_html'
with text ' <p>Over a dozen deities have worshipers who are paladins,
promoting law and good across Faer�n, but it is the Weave itself that
```
The offending character is `0xFB`, which decodes to `û`. Other offending text includes `“?nd and slay illithids.”`, which uses "smart quotes" `0x93` and `0x94`.
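(For reference, decoding those byte values as CP-1252 confirms the characters — a quick illustrative snippet, Python 3 syntax:)
```
print(bytes([0xFB]).decode('cp1252'))        # û
print(bytes([0x93, 0x94]).decode('cp1252'))  # “”
```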
[SQLite, python, unicode, and non-utf data](https://stackoverflow.com/questions/2392732/sqlite-python-unicode-and-non-utf-data) details how this problem can be solved when using `sqlite3` on its own.
**However, I am using SQLAlchemy.** How can I deal with CP-1252 encoded data in an SQLite database, when I am using SQLAlchemy?
---
Edit:
This would also apply for any other funny encodings in an SQLite `TEXT` field, like `latin-1`, `cp437`, and so on.
|
2015/03/13
|
[
"https://Stackoverflow.com/questions/29035115",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1191425/"
] |
SQLAlchemy and SQLite are behaving normally. The solution is to fix the non-UTF-8 data in the database.
I wrote the below, drawing inspiration from <https://stackoverflow.com/a/2395414/1191425> . It:
* loads up the target SQLite database
* lists all columns in all tables
* if the column is a `text`, `char`, or `clob` type - including variants like `varchar` and `longtext` - it re-encodes the data from the `INPUT_ENCODING` to UTF-8.
---
```
INPUT_ENCODING = 'cp1252' # The encoding you want to convert from
import sqlite3
db = sqlite3.connect('dnd_fixed.sqlite')
db.create_function('FIXENCODING', 1, lambda s: str(s).decode(INPUT_ENCODING))
cur = db.cursor()
tables = cur.execute('SELECT name FROM sqlite_master WHERE type="table"').fetchall()
tables = [t[0] for t in tables]
for table in tables:
columns = cur.execute('PRAGMA table_info(%s)' % table ).fetchall() # Note: pragma arguments can't be parameterized.
for column_id, column_name, column_type, nullable, default_value, primary_key in columns:
if ('char' in column_type) or ('text' in column_type) or ('clob' in column_type):
# Table names and column names can't be parameterized either.
db.execute('UPDATE "{0}" SET "{1}" = FIXENCODING(CAST("{1}" AS BLOB))'.format(table, column_name))
```
---
After this script runs, all `*text*`, `*char*`, and `*clob*` fields are in UTF-8 and no more Unicode decoding errors will occur. I can now `Faerûn` to my heart's content.
|
If you have a connection URI then you can add the following options to your DB connection URI:
```
import urllib

# Placeholders (???, ...) left as in the original sketch.
DB_CONNECTION = "mysql+pymysql://{username}:{password}@{host}/{db_name}?{options}"
DB_OPTIONS = {
    "charset": "cp-1252",
    "use_unicode": 1,
}
connection_uri = DB_CONNECTION.format(
    username=???,
    ...,
    options=urllib.urlencode(DB_OPTIONS)
)
```
Assuming your SQLite driver can handle those options (pymysql can, but I'm not 100% sure about SQLite), your queries will return unicode strings.
|
58,647,020
|
I am trying to run the cvxpy package in an AWS lambda function. This package isn't in the SDK, so I've read that I'll have to compile the dependencies into a zip, and then upload the zip into the lambda function.
I've done some research and tried out the links below, but when I try to pip install cvxpy I get error messages - I'm on a Windows box, but I know that AWS Lambda runs on Linux.
Appreciate the help!
<http://i-systems.github.io/HSE545/machine%20learning%20all/cvxpy_install/CVXPY%2BInstallation%2BGuide%2Bfor%2BWindows.html>
<https://programwithus.com/learn-to-code/Pip-and-virtualenv-on-Windows/>
<https://medium.com/@manivannan_data/import-custom-python-packages-on-aws-lambda-function-5fbac36b40f8>
<https://www.cvxpy.org/install/index.html>
|
2019/10/31
|
[
"https://Stackoverflow.com/questions/58647020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10756193/"
] |
Installing `cvxpy` on Windows requires C++ build tools (please refer: <https://buildmedia.readthedocs.org/media/pdf/cvxpy/latest/cvxpy.pdf>).
On Windows:
-----------
* I created a lambda layer python directory structure `python/lib/python3.7/site-packages` (refer: <https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html>) and installed my pip packages in that site-packages directory.
```
pip install cvxpy --target python/lib/python3.7/site-packages
```
* Then, I zipped the `python/lib/python3.7/site-packages` as cvxpy\_layer.zip and uploaded it to an S3 bucket (layer zipped file max limit is only 50 MB <https://docs.aws.amazon.com/lambda/latest/dg/limits.html>), to attach it to my lambda layers.
* Now the layer is ready, but the lambda fails to import the packages because they were installed on a Windows machine. (refer: [AWS Lambda - unable to import module 'lambda\_function'](https://stackoverflow.com/questions/49734744/aws-lambda-unable-to-import-module-lambda-function))
On Linux:
---------
* I created the same directory structure as earlier, `python/lib/python3.7/site-packages`, installed cvxpy into it, and zipped it as shown below.
* Later I uploaded the zip file to an S3 bucket and created a new lambda layer.
* After attaching that lambda layer to my lambda function, I was able to resolve the earlier import failures and run a basic cvxpy program on Lambda.
```
mkdir -p alley/python/lib/python3.7/site-packages
pip install cvxpy --target alley/python/lib/python3.7/site-packages
cd alley
zip -rqvT cvxpy_layer.zip .
```
### Lambda layer Image:
[](https://i.stack.imgur.com/XVvQl.jpg)
### Lambda function execution:
[](https://i.stack.imgur.com/qJtSI.jpg)
|
You can wrap all your dependencies along with the lambda source into a single zip file and deploy it. Doing this, you will end up with repetitive code across multiple lambda functions: if more than one of your lambda functions needs the same package `cvxpy`, you will have to package it separately for each function.
Instead, a better option would be to try `Lambda Layers`, where you put all your dependencies into a package and deploy it as a layer for your Lambda. Then attach that layer to your function to fetch its dependencies from there. Layers can even be versioned. :)
Please refer the below links:
* <https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html>
* <https://dev.to/vealkind/getting-started-with-aws-lambda-layers-4ipk>
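If you'd rather script the layer deployment than click through the console, a hedged boto3 sketch (layer and function names are placeholders; assumes a `cvxpy_layer.zip` built as in the other answer):
```
import boto3

client = boto3.client("lambda")

# Publish the zipped site-packages as a new layer version.
with open("cvxpy_layer.zip", "rb") as f:
    layer = client.publish_layer_version(
        LayerName="cvxpy-layer",              # placeholder name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the published layer version to an existing function.
client.update_function_configuration(
    FunctionName="my-cvxpy-function",         # placeholder name
    Layers=[layer["LayerVersionArn"]],
)
```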
|