| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 17 | 26k |
| response_k | stringlengths | 26 | 26k |
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
A lot of slowness is related to database access. Make your database queries efficient and you'll do a lot for your app.
Don't use too much reflection.
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Profile your code. Then you can at least have an understanding of where you can improve. Without profiling you are shooting in the dark...
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle = cp.ExStyle | 0x2000000 /* WS_EX_COMPOSITED */; return cp; } } ```
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
This is true for any language, not just C# 1. For an existing app, don't do *anything* until you know what's making it slow. IMHO, [this is the best way.](http://www.wikihow.com/Optimize-Your-Program%27s-Performance) 2. For new apps, the problem is how programmers are taught. They are taught to make mountains out of molehills. After you've optimized a few apps using [this](http://www.wikihow.com/Optimize-Your-Program%27s-Performance) you will be familiar with the problem of what I call "galloping generality" - layer upon layer of "abstraction", rather than simply asking what the problem requires. The best you can hope for is to run along after them, telling them what the performance problems are that they've just put in, so they can take them out as they go along.
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle = cp.ExStyle | 0x2000000 /* WS_EX_COMPOSITED */; return cp; } } ```
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Caching items that result from a query: ``` private Item _myResult; public Item Result { get { if (_myResult == null) { _myResult = Database.DoQueryForResult(); /* run the query once, then serve the cached result */ } return _myResult; } } ``` It's a basic technique that is frequently overlooked by starting programmers, and one of the easiest ways to improve performance in an application. Answer ported from a question that was ruled a dupe of this one.
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle = cp.ExStyle | 0x2000000 /* WS_EX_COMPOSITED */; return cp; } } ```
11,070,920
I am trying to understand regex in Python. How can I split the following sentence with a regular expression? ``` "familyname, Givenname A.15.10" ``` This is like the phonebook example in the Python regex docs <http://docs.python.org/library/re.html>. A person may have 2 or more family names and 2 or more given names. After the family names there is ', ', and after the given names there is ' '. The last field is the person's office. What I did until now is ``` import re file=open('file.txt','r') data=file.readlines() for i in range(90): person=re.split('[,\.]',data[i],maxsplit=2) print(person) ``` It gives me a result like this ``` ['Wegner', ' Sven Ake G', '15.10\n'] ``` but I want to have something like ``` ['Wegner', ' Sven Ake', 'G', '15', '10'] ``` Any idea?
2012/06/17
[ "https://Stackoverflow.com/questions/11070920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364181/" ]
What you want to do is first split off the family name at the comma: `familyname, rest = text.split(',', 1)`. Then split off the office at the first space from the right: `givenname, office = rest.rsplit(' ', 1)`.
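A runnable sketch of that two-step split, using the sample record from the question:

```python
text = "Wegner, Sven Ake G.15.10"

# First split off the family name at the first comma...
familyname, rest = text.split(',', 1)

# ...then split off the office at the last space.
givenname, office = rest.rsplit(' ', 1)

print([familyname, givenname.strip(), office])
# ['Wegner', 'Sven Ake', 'G.15.10']
```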
Assuming that family names don't contain a comma, you can take them easily. Given names can contain dots, for example: ``` Harney, PJ A.15.10 Harvey, P.J. A.15.10 ``` This means that you should probably trim the office off the rest of the record (family names are already out) with a mask anchored at the end (regex `maskpattern$`).
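A small sketch of that anchored-mask idea; the office pattern `[A-Z]\.\d+\.\d+` is an assumption based on the samples above, not taken from the question:

```python
import re

line = "Harvey, P.J. A.15.10"

# Anchor the office mask at the end of the record so dots inside the
# given names (P.J.) cannot interfere with it.
m = re.search(r'([A-Z])\.(\d+)\.(\d+)\s*$', line)
if m:
    print(m.groups())  # ('A', '15', '10')
```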
11,070,920
I am trying to understand regex in Python. How can I split the following sentence with a regular expression? ``` "familyname, Givenname A.15.10" ``` This is like the phonebook example in the Python regex docs <http://docs.python.org/library/re.html>. A person may have 2 or more family names and 2 or more given names. After the family names there is ', ', and after the given names there is ' '. The last field is the person's office. What I did until now is ``` import re file=open('file.txt','r') data=file.readlines() for i in range(90): person=re.split('[,\.]',data[i],maxsplit=2) print(person) ``` It gives me a result like this ``` ['Wegner', ' Sven Ake G', '15.10\n'] ``` but I want to have something like ``` ['Wegner', ' Sven Ake', 'G', '15', '10'] ``` Any idea?
2012/06/17
[ "https://Stackoverflow.com/questions/11070920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364181/" ]
In the regex world it's often easier to "match" rather than "split". When you're "matching" you tell the RE engine directly what kinds of substrings you're looking for, instead of concentrating on separating characters. The requirements in your question are a bit unclear, but let's assume that * "surname" is everything before the first comma * "name" is everything before the "office" * "office" consists of non-space characters at the end of the string This translates to regex language like this: ``` rr = r""" ^ # begin ([^,]+) # match everything but a comma (.+?) # match everything, until next match occurs (\S+) # non-space characters $ # end """ ``` Testing: ``` import re rr = re.compile(rr, re.VERBOSE) print rr.findall("de Batz de Castelmore d'Artagnan, Charles Ogier W.12.345") # [("de Batz de Castelmore d'Artagnan", ', Charles Ogier ', 'W.12.345')] ``` Update: ``` rr = r""" ^ # begin ([^,]+) # match everything but a comma [,\s]+ # a comma and spaces (.+?) # match everything until the next match \s* # spaces ([A-Z]) # an uppercase letter \. # a dot (\d+) # some digits \. # a dot (\d+) # some digits \s* # maybe some spaces or newlines $ # end """ import re rr = re.compile(rr, re.VERBOSE) s = 'Wegner, Sven Ake G.15.10\n' print rr.findall(s) # [('Wegner', 'Sven Ake', 'G', '15', '10')] ```
What you want to do is first split off the family name at the comma: `familyname, rest = text.split(',', 1)`. Then split off the office at the first space from the right: `givenname, office = rest.rsplit(' ', 1)`.
11,070,920
I am trying to understand regex in Python. How can I split the following sentence with a regular expression? ``` "familyname, Givenname A.15.10" ``` This is like the phonebook example in the Python regex docs <http://docs.python.org/library/re.html>. A person may have 2 or more family names and 2 or more given names. After the family names there is ', ', and after the given names there is ' '. The last field is the person's office. What I did until now is ``` import re file=open('file.txt','r') data=file.readlines() for i in range(90): person=re.split('[,\.]',data[i],maxsplit=2) print(person) ``` It gives me a result like this ``` ['Wegner', ' Sven Ake G', '15.10\n'] ``` but I want to have something like ``` ['Wegner', ' Sven Ake', 'G', '15', '10'] ``` Any idea?
2012/06/17
[ "https://Stackoverflow.com/questions/11070920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364181/" ]
In the regex world it's often easier to "match" rather than "split". When you're "matching" you tell the RE engine directly what kinds of substrings you're looking for, instead of concentrating on separating characters. The requirements in your question are a bit unclear, but let's assume that * "surname" is everything before the first comma * "name" is everything before the "office" * "office" consists of non-space characters at the end of the string This translates to regex language like this: ``` rr = r""" ^ # begin ([^,]+) # match everything but a comma (.+?) # match everything, until next match occurs (\S+) # non-space characters $ # end """ ``` Testing: ``` import re rr = re.compile(rr, re.VERBOSE) print rr.findall("de Batz de Castelmore d'Artagnan, Charles Ogier W.12.345") # [("de Batz de Castelmore d'Artagnan", ', Charles Ogier ', 'W.12.345')] ``` Update: ``` rr = r""" ^ # begin ([^,]+) # match everything but a comma [,\s]+ # a comma and spaces (.+?) # match everything until the next match \s* # spaces ([A-Z]) # an uppercase letter \. # a dot (\d+) # some digits \. # a dot (\d+) # some digits \s* # maybe some spaces or newlines $ # end """ import re rr = re.compile(rr, re.VERBOSE) s = 'Wegner, Sven Ake G.15.10\n' print rr.findall(s) # [('Wegner', 'Sven Ake', 'G', '15', '10')] ```
Assuming that family names don't contain a comma, you can take them easily. Given names can contain dots, for example: ``` Harney, PJ A.15.10 Harvey, P.J. A.15.10 ``` This means that you should probably trim the office off the rest of the record (family names are already out) with a mask anchored at the end (regex `maskpattern$`).
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
This should work: ``` sett = set(['1', '0']) elements = '' for i in sett: elements += i # elements is now '10' or '01' (set iteration order is arbitrary) ``` However, if you're just looking to get a string representation of each element, you can simply do this: ``` elements = ''.join(sett) # same result, same ordering caveat ```
I don't know what you mean by "adding set elements" to a string. But anyway: strings are immutable in Python, so you cannot add anything to them.
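A quick illustration of that immutability; string operations return new objects instead of changing the original:

```python
s = 'abc'
t = s + 'd'      # builds a brand-new string object

print(s)         # 'abc' -- the original is unchanged
print(t)         # 'abcd'
print(s is t)    # False: two distinct objects
```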
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
Strings are immutable. ====================== `elements.join(i)` does not change `elements`. You need to assign the value returned by `join` to something: ``` s = set(['1', '0']) elements = '' for i in s: elements = elements.join(i) ``` But, as others pointed out, this is better still: ``` s = set(['1', '0']) elements = '' elements = elements.join(s) ``` or in its most concise form: ``` s = set(['1', '0']) elements = ''.join(s) ```
This should work: ``` sett = set(['1', '0']) elements = '' for i in sett: elements += i # elements is now '10' or '01' (set iteration order is arbitrary) ``` However, if you're just looking to get a string representation of each element, you can simply do this: ``` elements = ''.join(sett) # same result, same ordering caveat ```
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
I believe you want this: ``` s = set(['1', '2']) asString = ''.join(s) ``` Be aware that sets are not ordered the way lists are: the iteration order is arbitrary and may differ from the order in which you added the elements.
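Following the ordering caveat above, sorting first gives a reproducible result:

```python
s = set(['1', '0'])
print(''.join(sorted(s)))  # always '01', regardless of set iteration order
```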
This should work: ``` sett = set(['1', '0']) elements = '' for i in sett: elements += i # elements is now '10' or '01' (set iteration order is arbitrary) ``` However, if you're just looking to get a string representation of each element, you can simply do this: ``` elements = ''.join(sett) # same result, same ordering caveat ```
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
Strings are immutable. ====================== `elements.join(i)` does not change `elements`. You need to assign the value returned by `join` to something: ``` s = set(['1', '0']) elements = '' for i in s: elements = elements.join(i) ``` But, as others pointed out, this is better still: ``` s = set(['1', '0']) elements = '' elements = elements.join(s) ``` or in its most concise form: ``` s = set(['1', '0']) elements = ''.join(s) ```
I don't know what you mean by "adding set elements" to a string. But anyway: strings are immutable in Python, so you cannot add anything to them.
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
``` >>> ''.join(set(['1','2'])) '12' ``` I guess this is what you want.
I don't know what you mean by "adding set elements" to a string. But anyway: strings are immutable in Python, so you cannot add anything to them.
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
I believe you want this: ``` s = set(['1', '2']) asString = ''.join(s) ``` Be aware that sets are not ordered the way lists are: the iteration order is arbitrary and may differ from the order in which you added the elements.
I don't know what you mean by "adding set elements" to a string. But anyway: strings are immutable in Python, so you cannot add anything to them.
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
Strings are immutable. ====================== `elements.join(i)` does not change `elements`. You need to assign the value returned by `join` to something: ``` s = set(['1', '0']) elements = '' for i in s: elements = elements.join(i) ``` But, as others pointed out, this is better still: ``` s = set(['1', '0']) elements = '' elements = elements.join(s) ``` or in its most concise form: ``` s = set(['1', '0']) elements = ''.join(s) ```
``` >>> ''.join(set(['1','2'])) '12' ``` I guess this is what you want.
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
I believe you want this: ``` s = set(['1', '2']) asString = ''.join(s) ``` Be aware that sets are not ordered the way lists are: the iteration order is arbitrary and may differ from the order in which you added the elements.
Strings are immutable. ====================== `elements.join(i)` does not change `elements`. You need to assign the value returned by `join` to something: ``` s = set(['1', '0']) elements = '' for i in s: elements = elements.join(i) ``` But, as others pointed out, this is better still: ``` s = set(['1', '0']) elements = '' elements = elements.join(s) ``` or in its most concise form: ``` s = set(['1', '0']) elements = ''.join(s) ```
6,947,210
How would I add set elements to a string in Python? I tried: ``` sett = set(['1', '0']) elements = '' for i in sett: elements.join(i) ``` but no dice. When I print elements, the string is empty. Help!
2011/08/04
[ "https://Stackoverflow.com/questions/6947210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/879311/" ]
I believe you want this: ``` s = set(['1', '2']) asString = ''.join(s) ``` Be aware that sets are not ordered the way lists are: the iteration order is arbitrary and may differ from the order in which you added the elements.
``` >>> ''.join(set(['1','2'])) '12' ``` I guess this is what you want.
70,148,408
I am new to learning Python but I can't seem to find out how to make the first part of the while loop run in the background while the second part is running. Putting in the input I set allows the first part to run twice but then pauses it. Here is the code ``` import time def money(): coins = 0 multiplyer = 1 cps = 1 while True: coins = coins + cps time.sleep(1) player_input = input("Input: ") if player_input == "coins": print(coins) player_input money() ``` Result: ``` Input: coins 1 Input: coins 2 Input: ``` My goal is to make the input print out 10 coins after 10 seconds, not 2 coins after typing coins twice.
2021/11/28
[ "https://Stackoverflow.com/questions/70148408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16408072/" ]
I think you need to refactor your `onSubmit` function to make it `async` so `isSubmitting` will stay `true` during your `signIn` call. ```js const onSubmit = async (data) => { await signIn(data.email, data.password) .then((response) => console.log(response)) .catch((error) => { let message = null if (error.code === "auth/too-many-requests") { message = "Too many unsuccessful attempts, please reset password or try again later" } if (error.code === "auth/wrong-password") { message = "Incorrect password, please try again" } if (error.code === "auth/user-not-found") { message = "User does not exist, please try again" } resetField("password") setFirebaseError(message) }) } ``` [![Edit React Hook Form - Async Submit Validation (forked)](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/react-hook-form-async-submit-validation-forked-ec75r?fontsize=14&hidenavigation=1&theme=dark)
`onSubmit` needs to return a `Promise` for `formState` to update correctly. ``` const onSubmit = (payload) => { // You need to return a promise. return new Promise((resolve) => { setTimeout(() => resolve(), 1000); }); }; ``` References: * <https://react-hook-form.com/api/useform/formstate/> * <https://github.com/react-hook-form/react-hook-form/issues/1363#issuecomment-610681167>
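The behaviour the question itself describes (a counter that keeps ticking while the program waits for input) is usually handled in Python with a background thread; a minimal sketch, offered as an illustration rather than a drop-in fix:

```python
import threading
import time

coins = 0

def earn():
    global coins
    while True:
        time.sleep(1)
        coins += 1  # one coin per second, independent of the input loop

# daemon=True lets the program exit when the main thread ends
threading.Thread(target=earn, daemon=True).start()

while True:
    if input("Input: ") == "coins":
        print(coins)
```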
17,099,808
I want to implement the following code in a more Pythonic way: ``` odd_rows = table.findAll('tr', attrs = {'class':'odd'}) #contain all tr tags even_rows = table.findAll('tr', attrs = {'class':'even'}) for rows in odd_rows: #rows equal 1 <tr> tag rows.findAll('td') #find all the <td> tags located in that one <tr> tag for row in rows.findAll('td'): #find one <td> tag print row #print that <td> tag for rows in even_rows: rows.findAll('td') for row in rows.findAll('td'): print row ``` The line `row.findAll('td')` shows my logic.
2013/06/14
[ "https://Stackoverflow.com/questions/17099808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407162/" ]
Perhaps: ``` for row in table.findAll('tr', attrs = {'class':'odd'}) + table.findAll('tr', attrs = {'class':'even'}): for cell in row.findAll('td'): print cell ``` From a performance standpoint, your original code is better. Combining two lists does use resources. However, unless you are writing code for Google scale, I agree with this quote. > > Programs must be written for people to read, and only incidentally for machines to execute. > > - Hal Abelson, Structure and Interpretation of Computer Programs > > > There is more than one way to do it. Write code the way that you feel is most readable to you. The computer can figure out the details.
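If the list built by `+` is a concern, `itertools.chain` iterates both result sets without materializing a third list; a self-contained sketch in the same Python 2 / BeautifulSoup style as the answer (the HTML snippet is made up for illustration):

```python
from itertools import chain
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3, matching the findAll style

html = '<table><tr class="odd"><td>a</td></tr><tr class="even"><td>b</td></tr></table>'
table = BeautifulSoup(html).find('table')

# chain() yields rows from each result set in turn; no combined list is built
for row in chain(table.findAll('tr', attrs={'class': 'odd'}),
                 table.findAll('tr', attrs={'class': 'even'})):
    for cell in row.findAll('td'):
        print cell
```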
``` for cls in ("odd", "even"): for rows in table.findAll('tr', class_=cls): for row in rows.findAll('td'): print row ```
40,785,453
I have a huge file of data: **datatable.txt** ``` id1 england male id2 germany female ... ... ... ``` I have another list of data: **indexes.txt** ``` id1 id3 id6 id10 id11 ``` I want to extract all rows from **datatable.txt** where the id is included in **indexes.txt**. Is it possible to do this with awk/sed/grep? The files are so large that using R or Python is not convenient.
2016/11/24
[ "https://Stackoverflow.com/questions/40785453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2662639/" ]
You just need a simple `awk`: ``` awk 'FNR==NR {a[$1]; next}; $1 in a' indexes.txt datatable.txt id1 england male ``` 1. `FNR==NR{a[$1];next}` processes `indexes.txt` first, storing the values of its first column as keys of the array `a` until the end of that file. 2. Then, on `datatable.txt`, `$1 in a` selects all rows of the current file whose first column was seen in the index file.
Maybe I am overlooking something, but I built two test files: ``` a1: id1 id2 id3 id6 id9 id10 ``` and ``` a2: id1 a 1 id2 b 2 id3 c 3 id4 c 4 id5 e 5 id6 f 6 id7 g 7 id8 h 8 id9 i 9 id10 j 10 ``` With `join a1 a2 2> /dev/null` I get all lines matched on column one. (Note that `join` expects both inputs to be sorted on the join field.)
49,760,858
I need to get data from this API <https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434> (sample node) This is my code (Python): ``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') print f.text ``` I want to save only protocol, responseTime and reputation in three subsequent lines of a txt file. It's supposed to look something like this: ``` protocol: 1.2.0 responseTime: 8157.912472694088 reputation: 1377 ``` Unfortunately, I'm stuck at this point and cannot process this data in any way.
2018/04/10
[ "https://Stackoverflow.com/questions/49760858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9590393/" ]
``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') # Store content as json answer = f.json() # List of element you want to keep items = ['protocol', 'responseTime', 'reputation'] # Display for item in items: print(item + ':' + str(answer[item])) # If you want to save in a file with open("Output.txt", "w") as text_file: for item in items: print(item + ':' + str(answer[item]), file=text_file) ``` Hope it helps! Cheers
This is a very unrefined way to do what you want that you could build off of. You'd need to sub in a path/filename for text.txt. ``` import requests import json f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') t = json.loads(f.text) with open('text.txt', 'a') as mfile: mfile.write("protocol: {0}\n".format(str(t['protocol']))) mfile.write("responseTime: {0}\n".format(str(t['responseTime']))) mfile.write("reputation: {0}\n".format(str(t['reputation']))) ```
49,760,858
I need to get data from this API <https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434> (sample node) This is my code (Python): ``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') print f.text ``` I want to save only protocol, responseTime and reputation in three subsequent lines of a txt file. It's supposed to look something like this: ``` protocol: 1.2.0 responseTime: 8157.912472694088 reputation: 1377 ``` Unfortunately, I'm stuck at this point and cannot process this data in any way.
2018/04/10
[ "https://Stackoverflow.com/questions/49760858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9590393/" ]
``` import requests f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') # Store content as json answer = f.json() # List of element you want to keep items = ['protocol', 'responseTime', 'reputation'] # Display for item in items: print(item + ':' + str(answer[item])) # If you want to save in a file with open("Output.txt", "w") as text_file: for item in items: print(item + ':' + str(answer[item]), file=text_file) ``` Hope it helps! Cheers
You just need to transform to a JSON object to be able to access the keys ``` import requests import simplejson as json f = requests.get('https://api.storj.io/contacts/f52624d8ef76df81c40853c22f93735581071434') x = json.loads(f.text) print 'protocol: {}'.format(x.get('protocol')) print 'responseTime: {}'.format(x.get('responseTime')) print 'reputation: {}'.format(x.get('reputation')) ```
20,467,107
As we know, Python has two built-in URL libs: * `urllib` * `urllib2` and a third-party lib: * `urllib3` If my requirement is only to request an API via the GET method (assume it returns a JSON string), which lib should I use? Do they have duplicated functions? If `urllib` can implement my requirement now, but later my requirements get more and more complicated and `urllib` can no longer fit them, I would have to import another lib at that point. But I really want to import only one lib, because importing all of them would confuse me; the methods between them are totally different. So now I am confused about which lib I should use. I prefer `urllib3`; I think it can fit my requirements at all times. What do you think?
2013/12/09
[ "https://Stackoverflow.com/questions/20467107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122265/" ]
As Alexander says in the comments, use `requests`. That's all you need.
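For a concrete picture, a minimal `requests` sketch for the GET-plus-JSON case (the URL is a placeholder, not a real endpoint):

```python
import requests

# Placeholder endpoint; any GET API that returns JSON works the same way.
response = requests.get('https://api.example.com/items')
response.raise_for_status()  # turn HTTP errors into exceptions
data = response.json()       # decode the JSON body into Python objects
print(data)
```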
I don't really know what you want to do, but you should try with [`requests`](http://requests.readthedocs.org/en/latest/). It's simple and intuitive.
20,467,107
As we know, Python has two built-in URL libs: * `urllib` * `urllib2` and a third-party lib: * `urllib3` If my requirement is only to request an API via the GET method (assume it returns a JSON string), which lib should I use? Do they have duplicated functions? If `urllib` can implement my requirement now, but later my requirements get more and more complicated and `urllib` can no longer fit them, I would have to import another lib at that point. But I really want to import only one lib, because importing all of them would confuse me; the methods between them are totally different. So now I am confused about which lib I should use. I prefer `urllib3`; I think it can fit my requirements at all times. What do you think?
2013/12/09
[ "https://Stackoverflow.com/questions/20467107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122265/" ]
As Alexander says in the comments, use `requests`. That's all you need.
Personally, I avoid using third-party libraries when possible, so I can reduce the dependency list and improve portability. urllib and urllib2 are not mutually exclusive and are often mixed in the same project.
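In that spirit, a standard-library-only sketch of the same GET-plus-JSON task in Python 2 (the URL is again a placeholder):

```python
import json
import urllib2  # Python 2 standard library

response = urllib2.urlopen('https://api.example.com/items')
data = json.load(response)  # the response is file-like, so json.load reads it directly
print data
```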
20,467,107
As we know, Python has two built-in URL libs: * `urllib` * `urllib2` and a third-party lib: * `urllib3` If my requirement is only to request an API via the GET method (assume it returns a JSON string), which lib should I use? Do they have duplicated functions? If `urllib` can implement my requirement now, but later my requirements get more and more complicated and `urllib` can no longer fit them, I would have to import another lib at that point. But I really want to import only one lib, because importing all of them would confuse me; the methods between them are totally different. So now I am confused about which lib I should use. I prefer `urllib3`; I think it can fit my requirements at all times. What do you think?
2013/12/09
[ "https://Stackoverflow.com/questions/20467107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1122265/" ]
I don't really know what you want to do, but you should try with [`requests`](http://requests.readthedocs.org/en/latest/). It's simple and intuitive.
Personally, I avoid using third-party libraries when possible, so I can reduce the dependency list and improve portability. urllib and urllib2 are not mutually exclusive and are often mixed in the same project.
60,388,686
Can someone please tell me how to downgrade Python 3.6.9 to 3.6.6 on Ubuntu? I tried the commands below but they did not work: 1) pip install python3.6==3.6.6 2) pip install python3.6.6
2020/02/25
[ "https://Stackoverflow.com/questions/60388686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12651893/" ]
First, verify that 3.6.6 is available: ```sh apt-cache policy python3.6 ``` If available: ```sh apt-get install python3.6=3.6.6 ``` If not available, you'll need to find a repo which has the version you desire and add it to your apt sources list, update, and install: ```sh echo "<repo url>" >> /etc/apt/sources.list.d/python.list apt-get update apt-get install python3.6=3.6.6 ``` I advise against downgrading your system python unless you're certain it's required. For running your application, install python3.6.6 alongside your system python, and better yet, build a virtual environment from 3.6.6: ```sh apt-get install virtualenv virtualenv -p <path to python3.6.6> <venv name> ```
One option is to use Anaconda, which allows you to easily use different Python versions on the same computer. [Here are the installation instructions for Anaconda on Linux](https://docs.anaconda.com/anaconda/install/linux/). Then create a Conda environment by running this command: ``` conda create --name myenv python=3.6.6 ``` Obviously you can use a different name than "myenv". You can then activate the environment in any terminal window: ``` conda activate myenv ``` Then you can pip install any packages you want. Some basics of Anaconda environments can be found on the website's [getting started](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) page.
45,340,587
How do I use Python, mss, and OpenCV to capture my computer screen and save it as an array of images to form a movie? I am converting to gray-scale so it can be a 3 dimensional array. I would like to store each 2d screen shot in a 3d array for viewing and processing. I am having a hard time constructing an array that saves the sequence of screen shots as well as plays back the sequence of screen shots in cv2. Thanks a lot ``` import time import numpy as np import cv2 import mss from PIL import Image with mss.mss() as sct: fps_list=[] matrix_list = [] monitor = {'top':40, 'left':0, 'width':800, 'height':640} timer = 0 while timer <100: last_time = time.time() #get raw pixels from screen and save to numpy array img = np.array(sct.grab(monitor)) img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #Save img data as matrix matrix_list[timer,:,:] = img #Display Image cv2.imshow('Normal', img) fps = 1/ (time.time()-last_time) fps_list.append(fps) #press q to quit timer += 1 if cv2.waitKey(25) & 0xFF == ord('q'): cv2.destroyAllWindows() break #calculate fps fps_list = np.asarray(fps_list) print(np.average(fps_list)) #playback image movie from screencapture t=0 while t < 100: cv.imshow('Playback',img_matrix[t]) t += 1 ```
2017/07/27
[ "https://Stackoverflow.com/questions/45340587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8298595/" ]
A clue, perhaps: save the screenshots into a list and replay them later (you will have to adapt the sleep time): ``` import time import cv2 import mss import numpy with mss.mss() as sct: monitor = {'top': 40, 'left': 0, 'width': 800, 'height': 640} img_matrix = [] for _ in range(100): # Get raw pixels from screen and save to numpy array img = numpy.array(sct.grab(monitor)) img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Save img data as matrix img_matrix.append(img) # Display Image cv2.imshow('Normal', img) # Press q to quit if cv2.waitKey(25) & 0xFF == ord('q'): cv2.destroyAllWindows() break # Playback image movie from screencapture for img in img_matrix: cv2.imshow('Playback', img) # Press q to quit if cv2.waitKey(25) & 0xFF == ord('q'): cv2.destroyAllWindows() break ```
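If the goal is an actual movie file rather than in-memory playback, the captured frames can also be written out with OpenCV's `VideoWriter`; a sketch assuming the `img_matrix` list built above (file name, codec, and frame rate are arbitrary choices):

```python
import cv2

# Assumes img_matrix holds grayscale frames of identical shape (height, width).
height, width = img_matrix[0].shape
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('capture.avi', fourcc, 20.0, (width, height), isColor=False)
for frame in img_matrix:
    out.write(frame)
out.release()
```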
use `collections.OrderedDict()` to save the sequence ``` import collections .... fps_list = collections.OrderedDict() ... fps_list[timer] = fps ```
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When doing ``` pip install MySQL-python ``` in a virtualenv, I got ``` EnvironmentError: mysql_config not found ``` To get mysql\_config, as Artem Fedosov said, first install ``` sudo apt-get install libmysqlclient-dev ``` then everything works fine in the virtualenv.
The suggested solutions didn't work out for me, because I still got compilation errors after running ``` sudo apt-get install libmysqlclient-dev ``` so I had to run ``` apt-get install python-dev ``` as well. With both packages installed, everything worked fine for me.
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
I recently had exactly this issue (just not in relation to Django). In my case I am developing on Ubuntu 12.04 using the default pip and distribute versions, which are basically a little out of date for `MySQL-python`. Because you are working in an isolated virtualenv, you can safely follow the suggested instruction without affecting your Python installation. So you can... ``` workon your_virtualenv #activate your virtualenv, you do use virtualenvwrapper, right? easy_install -U distribute #update distribute on your virtualenv pip install MySQL-python #install your package ``` If for some reason upgrading distribute is not an option, you could try installing an older version of `MySQL-python` as follows (you'd have to check this version is compatible with your version of Django): ``` pip install MySQL-python==x.y.z #where x.y.z is the version you want ```
The suggested solutions didn't work out for me, because I still got compilation errors after running ``` sudo apt-get install libmysqlclient-dev ``` so I had to run ``` apt-get install python-dev ``` as well. With both packages installed, everything worked fine for me.
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
I recently had exactly this issue (just not in relation to Django). In my case I am developing on Ubuntu 12.04 using the default pip and distribute versions, which are basically a little out of date for `MySQL-python`. Because you are working in an isolated virtualenv, you can safely follow the suggested instruction without affecting your Python installation. So you can... ``` workon your_virtualenv #activate your virtualenv, you do use virtualenvwrapper, right? easy_install -U distribute #update distribute on your virtualenv pip install MySQL-python #install your package ``` If for some reason upgrading distribute is not an option, you could try installing an older version of `MySQL-python` as follows (you'd have to check this version is compatible with your version of Django): ``` pip install MySQL-python==x.y.z #where x.y.z is the version you want ```
MySQL driver for Python (mysql-python) needs libmysqlclient-dev. You can get it with: ``` sudo apt-get update sudo apt-get install libmysqlclient-dev ``` If python-dev is not installed, you may have to install it too: ``` sudo apt-get install python-dev ``` Now you can install MySQL driver: ``` pip install mysql-python ``` **Here is a more detailed documentation for MySQL in Django:** <http://codex.themedelta.com/how-to-install-django-with-mysql-in-a-virtualenv-on-linux/>
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
Spent an hour looking through Stack Overflow. Eventually found the answer [in another question](https://stackoverflow.com/questions/7475223/mysql-config-not-found-when-installing-mysqldb-python-interface). This is what saved me: ``` sudo apt-get install libmysqlclient-dev ``` mysql\_config comes with that package.
Try this for **Python 2.7**: For the **MySQL-python** package, you should use either MySQL_python-1.2.5-cp27-none-win32.whl or MySQL_python-1.2.5-cp27-none-win_amd64.whl, depending on whether you have installed 32-bit or 64-bit Python. ``` pip install MySQL_python-1.2.5-cp27-none-win32.whl ``` If you are using the **mysqlclient** package, then use mysqlclient-1.4.6-cp27-cp27m-win32.whl or mysqlclient-1.4.6-cp27-cp27m-win_amd64.whl: ``` pip install mysqlclient-1.4.6-cp27-cp27m-win32.whl ``` <https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
I recently had exactly this issue (just not in relation to Django). In my case I am developing on Ubuntu 12.04 using the default pip and distribute versions, which are basically a little out of date for `MySQL-python`. Because you are working in an isolated virtualenv, you can safely follow the suggested instruction without affecting your Python installation. So you can... ``` workon your_virtualenv #activate your virtualenv, you do use virtualenvwrapper, right? easy_install -U distribute #update distribute on your virtualenv pip install MySQL-python #install your package ``` If for some reason upgrading distribute is not an option, you could try installing an older version of `MySQL-python` as follows (you'd have to check this version is compatible with your version of Django): ``` pip install MySQL-python==x.y.z #where x.y.z is the version you want ```
I had to do this: ``` pip install mysql-python ``` inside the virtualenv
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When doing ``` pip install MySQL-python ``` in a virtualenv, I got ``` EnvironmentError: mysql_config not found ``` To get mysql\_config, as Artem Fedosov said, first install ``` sudo apt-get install libmysqlclient-dev ``` then everything works fine in the virtualenv.
On Ubuntu, run these commands in order: ``` easy_install -U distribute ``` then ``` sudo apt-get install libmysqlclient-dev ``` and finally ``` pip install MySQL-python ```
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
Spent an hour looking through Stack Overflow. Eventually found the answer [in another question](https://stackoverflow.com/questions/7475223/mysql-config-not-found-when-installing-mysqldb-python-interface). This is what saved me: ``` sudo apt-get install libmysqlclient-dev ``` mysql\_config comes with that package.
MySQL driver for Python (mysql-python) needs libmysqlclient-dev. You can get it with: ``` sudo apt-get update sudo apt-get install libmysqlclient-dev ``` If python-dev is not installed, you may have to install it too: ``` sudo apt-get install python-dev ``` Now you can install MySQL driver: ``` pip install mysql-python ``` **Here is a more detailed documentation for MySQL in Django:** <http://codex.themedelta.com/how-to-install-django-with-mysql-in-a-virtualenv-on-linux/>
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
I recently had exactly this issue (just not in relation to Django). In my case I am developing on Ubuntu 12.04 using the default pip and distribute versions, which are basically a little out of date for `MySQL-python`. Because you are working in an isolated virtualenv, you can safely follow the suggested instruction without affecting your Python installation. So you can... ``` workon your_virtualenv #activate your virtualenv, you do use virtualenvwrapper, right? easy_install -U distribute #update distribute on your virtualenv pip install MySQL-python #install your package ``` If for some reason upgrading distribute is not an option, you could try installing an older version of `MySQL-python` as follows (you'd have to check this version is compatible with your version of Django): ``` pip install MySQL-python==x.y.z #where x.y.z is the version you want ```
On Ubuntu, run these commands in order: ``` easy_install -U distribute ``` then ``` sudo apt-get install libmysqlclient-dev ``` and finally ``` pip install MySQL-python ```
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When running this in a virtualenv:

```
pip install MySQL-python
```

I got

```
EnvironmentError: mysql_config not found
```

To install mysql\_config, as Artem Fedosov said, first install

```
sudo apt-get install libmysqlclient-dev
```

and then everything works fine in the virtualenv.
Try this (for **Python 2.7** on Windows): if you are using the **MySQL-python** package, you should use either MySQL\_python‑1.2.5‑cp27‑none‑win32.whl or MySQL\_python‑1.2.5‑cp27‑none‑win\_amd64.whl, depending on whether you have installed 32-bit or 64-bit Python.

```
pip install MySQL_python‑1.2.5‑cp27‑none‑win32.whl
```

If you are using the **mysqlclient** package, then use mysqlclient‑1.4.6‑cp27‑cp27m‑win32.whl or mysqlclient‑1.4.6‑cp27‑cp27m‑win\_amd64.whl:

```
pip install mysqlclient‑1.4.6‑cp27‑cp27m‑win32.whl
```

Wheels are available at <https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
16,438,259
I've just learned how to use `virtualenv` and I installed Django 1.4.5. I'm assuming that the `virtualenv` created a clean slate for me to work on so with the Django 1.4.5 installed, I copied all my previous files into the `virtualenv` environment. I tried to run the server but I get an error saying `"no module named MySQLdb"`. I think this means that I forgot to install MySQL-python. I tried to install it via ``` pip install MySQL-python ``` But I get this error ``` Downloading/unpacking MySQL-python Running setup.py egg_info for package MySQL-python The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) Complete output from command python setup.py egg_info: The required version of distribute (>=0.6.28) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U distribute'. (Currently using distribute 0.6.24 (/home/bradford/Development/Django/django_1.4.5/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg)) ---------------------------------------- Command python setup.py egg_info failed with error code 2 in /home/bradford/Development/Django/django_1.4.5/build/MySQL-python ``` Not quite sure how to go about fixing this problem =/ any help much appreciated!
2013/05/08
[ "https://Stackoverflow.com/questions/16438259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
I had to do this: ``` pip install mysql-python ``` inside the virtualenv
Try this (for **Python 2.7** on Windows): if you are using the **MySQL-python** package, you should use either MySQL\_python‑1.2.5‑cp27‑none‑win32.whl or MySQL\_python‑1.2.5‑cp27‑none‑win\_amd64.whl, depending on whether you have installed 32-bit or 64-bit Python.

```
pip install MySQL_python‑1.2.5‑cp27‑none‑win32.whl
```

If you are using the **mysqlclient** package, then use mysqlclient‑1.4.6‑cp27‑cp27m‑win32.whl or mysqlclient‑1.4.6‑cp27‑cp27m‑win\_amd64.whl:

```
pip install mysqlclient‑1.4.6‑cp27‑cp27m‑win32.whl
```

Wheels are available at <https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient>
65,990,047
I've been trying to make a keylogger but got this error in Python while running the script.

```
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
Traceback (most recent call last):
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 43, in <module>
    listener.join()
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 259, in join
    six.reraise(exc_type, exc_value, exc_traceback)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\six.py", line 702, in reraise
    raise value.with_traceback(tb)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
[Finished in 0.614s]
```

I don't know how to fix this. I already installed pynput with `pip install pynput` and it still doesn't work :/

---

Code:

```
import pynput
from pynput.keyboard import Key, Listener

count = 0
key = []

def on_press(key):
    global keys, count
    keys.append(str(key))
    print("{0} pressed".format(key))

    if count >= 10:
        count = 0
        write_file(keys)
        keys = ()

def write_file(keys):
    with open("log.txt", "w" & "a") as f:
        for key in keys:
            k = str(key).replace("'","")
            f.write(str(key))

def on_release(key):
    if key == Key.esc:
        return False

with Listener(on_press=on_press,on_release=on_release) as listener:
    listener.join()
```

---

Any help is appreciated, thanks.
2021/02/01
[ "https://Stackoverflow.com/questions/65990047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14644110/" ]
`keys` is literally not defined anywhere. **I think you made a spelling mistake.** You need to replace `key = []` with `keys = []`.
I think you have a typo during initialization of the `keys` list. You have declared it as `key` but you need `keys`. You need: ``` keys = [] ``` instead of: ``` key = [] ```
65,990,047
I've been trying to make a keylogger but got this error in Python while running the script.

```
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
Traceback (most recent call last):
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 43, in <module>
    listener.join()
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 259, in join
    six.reraise(exc_type, exc_value, exc_traceback)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\six.py", line 702, in reraise
    raise value.with_traceback(tb)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 211, in inner
    return f(self, *args, **kwargs)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\keyboard\_win32.py", line 280, in _process
    self.on_press(key)
File "C:\Users\David\AppData\Roaming\Python\Python39\site-packages\pynput\_util\__init__.py", line 127, in inner
    if f(*args) is False:
File "C:\Users\David\Desktop\TESTING\keylogger\main.py", line 16, in on_press
    keys.append(str(key))
NameError: name 'keys' is not defined
[Finished in 0.614s]
```

I don't know how to fix this. I already installed pynput with `pip install pynput` and it still doesn't work :/

---

Code:

```
import pynput
from pynput.keyboard import Key, Listener

count = 0
key = []

def on_press(key):
    global keys, count
    keys.append(str(key))
    print("{0} pressed".format(key))

    if count >= 10:
        count = 0
        write_file(keys)
        keys = ()

def write_file(keys):
    with open("log.txt", "w" & "a") as f:
        for key in keys:
            k = str(key).replace("'","")
            f.write(str(key))

def on_release(key):
    if key == Key.esc:
        return False

with Listener(on_press=on_press,on_release=on_release) as listener:
    listener.join()
```

---

Any help is appreciated, thanks.
2021/02/01
[ "https://Stackoverflow.com/questions/65990047", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14644110/" ]
You have made a typo in line 6, and thus `keys` is not declared anywhere, as the error message is telling you (i.e. `NameError: name 'keys' is not defined`). Line 6 should be `keys = []` instead of `key = []`.
I think you have a typo during initialization of the `keys` list. You have declared it as `key` but you need `keys`. You need: ``` keys = [] ``` instead of: ``` key = [] ```
4,522,733
Okay, so I'm admittedly a newbie to programming, but I can't determine how to get python v3.2 to generate a random positive integer between parameters I've given it. Just so you can understand the context, I'm trying to create a guessing-game where the user inputs parameters (say 1 to 50), and the computer generates a random number between the given numbers. The user would then have to guess the value that the computer has chosen. I've searched long and hard, but all of the solutions I can find only tell one how to get earlier versions of python to generate a random integer. As near as I can tell, v.3.2 changed how to generate and label a random integer. Anyone know how to do this?
2010/12/23
[ "https://Stackoverflow.com/questions/4522733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552834/" ]
Use [random.randrange](http://docs.python.org/dev/py3k/library/random.html#random.randrange) or [random.randint](http://docs.python.org/dev/py3k/library/random.html#random.randint) (Note the links are to the Python 3k docs). ``` In [67]: import random In [69]: random.randrange(1,10) Out[69]: 8 ```
You can use the `random` module:

```
import random

# Random integer >= 5 and < 10
random.randrange(5, 10)
```
50,182,828
I've installed Pillow for Python 3 on my MacBook successfully, but I still can't use the PIL library. I tried uninstalling and installing it again. I've also tried `import Image` without `from PIL`. I do not have PIL installed, though; trying to install it says

> Could not find a version that satisfies the requirement PIL (from versions: )

```
from PIL import Image
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-4-b7f01c2f8cfe> in <module>()
----> 1 from PIL import Image

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/Image.py in <module>()
     58 # Also note that Image.core is not a publicly documented interface,
     59 # and should be considered private and subject to change.
---> 60 from . import _imaging as core
     61 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
     62     raise ImportError("The _imaging extension was built for another "

ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/_imaging.cpython-36m-darwin.so, 2): Symbol not found: _clock_gettime
  Referenced from: /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/.dylibs/liblzma.5.dylib (which was built for Mac OS X 10.12)
  Expected in: /usr/lib/libSystem.B.dylib
 in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/.dylibs/liblzma.5.dylib
```
2018/05/04
[ "https://Stackoverflow.com/questions/50182828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5991761/" ]
If you use `Anaconda`, try:

```
conda install Pillow
```

This worked for me.
PIL is deprecated and has been replaced by Pillow. Pillow is the official fork of PIL. Install it using pip, or however you usually install packages. <https://pillow.readthedocs.io/en/5.1.x/index.html>
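For example, with a plain pip setup, the install command and a quick sanity check look like this; note that although the package is named Pillow, you still import from the `PIL` namespace:

```
pip install Pillow
```

```python
from PIL import Image

# create a small test image in memory to confirm the import and C extension work
img = Image.new("RGB", (10, 10), color="red")
print(img.size)  # (10, 10)
```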
28,233,090
I want to get the width % shown, so that I can monitor the progress. How do I get the value using Selenium in Python? I don't know how to achieve this.
2015/01/30
[ "https://Stackoverflow.com/questions/28233090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4472647/" ]
A `CSS3`-only (actually moved into the `CSS4` specs) solution would be the `pointer-events` property, e.g.

```
.media__body:hover:after,
.media__body:hover:before {
   ...
   pointer-events: none;
}
```

supported on [all modern browsers](https://developer.mozilla.org/en-US/docs/Web/CSS/pointer-events), but on IE only from `IE11` (on HTML/XML content).

Example: <http://jsfiddle.net/8uz7mx6h/>

---

Another solution, supported also on older IE, is to apply `position: relative` with a `z-index` to the paragraph, e.g.

```
.media__body p {
   margin-bottom: 1.5em;
   position: relative;
   z-index: 1;
}
```

Example: <http://jsfiddle.net/0nrbjwmg/2/>
```
.media__body p {
  margin-bottom: 1.5em;
  position: relative;
  z-index: 999;
}
```

Try this!
33,116,636
I am very new to REGEX and HTML in particular. I know that BeautifulSoup is a way to deal with HTML, but I would like to try regex. I need to search the text for HTML tags (I use findall). I tried multiple scenarios and examples from Stack Overflow but only got `[]` (an empty list). Here is what I tried:

```
#reHTML = r'(?:<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1>)'
#reHTML = r'\<p>(.*?)\</p>'
#reHTML = r'<p>(.*?)\</p>'
#reHTML = r'<raw[^>]*?>(.*?)</raw>'
reHTML = r'<p>(.*?)</p>'
#reHTML = r'<.*?>'
```

and:

```
rHTML = re.compile(reHTML, re.VERBOSE)
HTMLpara = rHTML.findall('http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/', re.IGNORECASE)
```

Obviously, I am missing something. Please help.
2015/10/14
[ "https://Stackoverflow.com/questions/33116636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5287011/" ]
You misunderstood [regex.findall(string[, pos[, endpos]])](https://docs.python.org/3.5/library/re.html?highlight=findall#re.regex.findall).

`HTMLpara = rHTML.findall('http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/', re.IGNORECASE)` means you are matching the `rHTML` pattern against the URL string itself (`"http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/"`), so you get `[]`.

You need to request the URL to get the data first, then call findall to analyse the result string, as [below](http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/).

```
import urllib.request
import re

url = 'http://pythonprogramming.net/parse-website-using-regular-expressions-urllib/'
req = urllib.request.Request(url)
resp = urllib.request.urlopen(req)
respData = resp.read()
paragraphs = re.findall(r'<p>(.*?)</p>', str(respData))
```
This will read in a webpage and find any instances of `<html>` or `</html>`. Is this the solution you are looking for? ``` import re import urllib2 url = "http://stackoverflow.com" f = urllib2.urlopen(url) file = f.read() p = re.compile("<html>|</html>") instances = p.findall(file) print instances ``` Output: ``` ['<html>', '</html>'] ``` I think your problem was you were trying to search the URL string for HTML tags instead of actually loading the webpage and searching it.
15,279,942
Coming from python I could do something like this. ``` values = (1, 'ab', 2.7) s = struct.Struct('I 2s f') packet = s.pack(*values) ``` I can pack together arbitrary types together very simply with python. What is the standard way to do it in Objective C?
2013/03/07
[ "https://Stackoverflow.com/questions/15279942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/299648/" ]
Try using something like this: ``` window.addEventListener('load', function() { document.getElementById("demo").innerHTML="Works"; }, false); ```
Where did you get `document.onready` from? That would **never work**. To ensure the page is loaded, you could use `window.onload`; ``` window.onload = function () { document.getElementById("demo").innerHTML="Works"; } ```
15,279,942
Coming from python I could do something like this. ``` values = (1, 'ab', 2.7) s = struct.Struct('I 2s f') packet = s.pack(*values) ``` I can pack together arbitrary types together very simply with python. What is the standard way to do it in Objective C?
2013/03/07
[ "https://Stackoverflow.com/questions/15279942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/299648/" ]
Try using something like this: ``` window.addEventListener('load', function() { document.getElementById("demo").innerHTML="Works"; }, false); ```
Your syntax is incorrect. ``` document.ready= function () { //code to run when page loaded } window.onload = function () { //code to run when page AND images loaded } ```
15,279,942
Coming from python I could do something like this. ``` values = (1, 'ab', 2.7) s = struct.Struct('I 2s f') packet = s.pack(*values) ``` I can pack together arbitrary types together very simply with python. What is the standard way to do it in Objective C?
2013/03/07
[ "https://Stackoverflow.com/questions/15279942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/299648/" ]
Where did you get `document.onready` from? That would **never work**. To ensure the page is loaded, you could use `window.onload`; ``` window.onload = function () { document.getElementById("demo").innerHTML="Works"; } ```
Your syntax is incorrect. ``` document.ready= function () { //code to run when page loaded } window.onload = function () { //code to run when page AND images loaded } ```
53,185,119
I want to retrieve the list of resources currently in use, region by region, using a Python script and the boto3 library. For example, the script should give me output as follows:

Region: us-west-2

service: EC2 //resource list//instance ids//Name

service: VPC //resource list//VPC ids//Name
2018/11/07
[ "https://Stackoverflow.com/questions/53185119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108278/" ]
There's no easy way to do it, but you can achieve this with a few describe calls. First enumerate through the regions that you use:

```
for regionname in ["us-east-1", "eu-west-1"]:
```

Or if you want to check all of them:

```
ec2client = boto3.client('ec2')
regionresponse = ec2client.describe_regions()
for region in regionresponse["Regions"]:
    regionname = region["RegionName"]
```

Then for each region iteration you need to create a new client for that region's endpoint and call describe\_instances:

```
ec2client = boto3.client('ec2', region_name=regionname)
instanceresponse = ec2client.describe_instances()
for reservation in instanceresponse["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"])
```

Do the same describe call for each resource type you want.
There is no way to obtain a list of all resources used. You would need to write it yourself. Alternatively, there are third-party companies offering services that will do this for you (e.g. [Hava](https://www.hava.io/)).
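If you do want to roll your own, one partial approach is the Resource Groups Tagging API, which can enumerate resources across many services in a region. Treat this as a sketch rather than a complete inventory (it historically only returns resources that carry tags, and the region list below is an assumption):

```python
import boto3

def list_resources(region):
    """Print the ARN of each (taggable) resource found in one region."""
    client = boto3.client('resourcegroupstaggingapi', region_name=region)
    paginator = client.get_paginator('get_resources')
    for page in paginator.paginate():
        for resource in page['ResourceTagMappingList']:
            print(region, resource['ResourceARN'])

for region in ['us-west-2', 'us-east-1']:  # assumed regions of interest
    list_resources(region)
```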
53,185,119
I want to retrieve the list of resources currently in use, region by region, using a Python script and the boto3 library. For example, the script should give me output as follows:

Region: us-west-2

service: EC2 //resource list//instance ids//Name

service: VPC //resource list//VPC ids//Name
2018/11/07
[ "https://Stackoverflow.com/questions/53185119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108278/" ]
There's no easy way to do it, but you can achieve this with a few describe calls. First enumerate through the regions that you use:

```
for regionname in ["us-east-1", "eu-west-1"]:
```

Or if you want to check all of them:

```
ec2client = boto3.client('ec2')
regionresponse = ec2client.describe_regions()
for region in regionresponse["Regions"]:
    regionname = region["RegionName"]
```

Then for each region iteration you need to create a new client for that region's endpoint and call describe\_instances:

```
ec2client = boto3.client('ec2', region_name=regionname)
instanceresponse = ec2client.describe_instances()
for reservation in instanceresponse["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"])
```

Do the same describe call for each resource type you want.
The `aws_list_all` program [available on GitHub](https://github.com/JohannesEbke/aws_list_all) is written in Python and is able to find all of the resources you have created in your account. Currently it lists all of the resources into JSON files together with their meta-data. You can work from the [scripts used for the main function](https://github.com/JohannesEbke/aws_list_all/blob/master/aws_list_all/__main__.py) in `aws_list_all` to build your own output. There's a prototype [in one issue there](https://github.com/JohannesEbke/aws_list_all/issues/25) which demonstrates how to extract the information that you want. This prototype runs the following on the JSON files:

```
cat *.json | tr '"' '\n' | grep '^arn:aws' | sort | sed 's/:\*$//' | uniq
```

When you look at the Python code behind `aws_list_all` you will see that it is not an easy problem, since you need to dynamically discover all the available AWS endpoints. Basing your work off that script will make things much easier.
53,185,119
I want to retrieve the list of resources currently in use, region by region, using a Python script and the boto3 library. For example, the script should give me output as follows:

Region: us-west-2

service: EC2 //resource list//instance ids//Name

service: VPC //resource list//VPC ids//Name
2018/11/07
[ "https://Stackoverflow.com/questions/53185119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4108278/" ]
The `aws_list_all` program [available on GitHub](https://github.com/JohannesEbke/aws_list_all) is written in Python and is able to find all of the resources you have created in your account. Currently it lists all of the resources into JSON files together with their meta-data. You can work from the [scripts used for the main function](https://github.com/JohannesEbke/aws_list_all/blob/master/aws_list_all/__main__.py) in `aws_list_all` to build your own output. There's a prototype [in one issue there](https://github.com/JohannesEbke/aws_list_all/issues/25) which demonstrates how to extract the information that you want. This prototype runs the following on the JSON files:

```
cat *.json | tr '"' '\n' | grep '^arn:aws' | sort | sed 's/:\*$//' | uniq
```

When you look at the Python code behind `aws_list_all` you will see that it is not an easy problem, since you need to dynamically discover all the available AWS endpoints. Basing your work off that script will make things much easier.
There is no way to obtain a list of all resources used. You would need to write it yourself. Alternatively, there are third-party companies offering services that will do this for you (e.g. [Hava](https://www.hava.io/)).
32,714,656
This is a recurring question and I've read many topics; some helped a bit ([python Qt: main widget scroll bar](https://stackoverflow.com/questions/2130446/python-qt-main-widget-scroll-bar), [PyQt: Put scrollbars in this](https://stackoverflow.com/questions/14159337/pyqt-put-scrollbars-in-this)), some not at all ([PyQt adding a scrollbar to my main window](https://stackoverflow.com/questions/26745849/pyqt-adding-a-scrollbar-to-my-main-window)). I still have a problem with the scrollbars: they're not usable, they're 'grey'. Here is my code (I'm using PyQt5):

```
def setupUi(self, Interface):
    Interface.setObjectName("Interface")
    Interface.resize(1152, 1009)
    sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
    sizePolicy.setHorizontalStretch(0)
    sizePolicy.setVerticalStretch(0)
    sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth())
    Interface.setSizePolicy(sizePolicy)
    Interface.setMouseTracking(False)
    icon = QtGui.QIcon()
    self.centralWidget = QtWidgets.QWidget(Interface)
    self.centralWidget.setObjectName("centralWidget")
    self.scrollArea = QtWidgets.QScrollArea(self.centralWidget)
    self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951))
    self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    self.scrollArea.setWidgetResizable(True)
    self.scrollArea.setObjectName("scrollArea")
    self.scrollArea.setEnabled(True)
    self.scrollAreaWidgetContents = QtWidgets.QWidget()
    self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1112, 932))
    self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
    self.horizontalLayout = QtWidgets.QHBoxLayout(self.scrollAreaWidgetContents)
    self.horizontalLayout.setObjectName("horizontalLayout")
```

So I would like to put the scrollbars on the main widget, so that if the user resizes the main window, the scrollbars appear and let the user move up and down to see child widgets that are outside the smaller window, and also move right and left. Help appreciated!
2015/09/22
[ "https://Stackoverflow.com/questions/32714656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4862605/" ]
There are several things wrong with the example code. The main problems are that you are not using layouts properly, and the content widget is not being added to the scroll-area. Below is a fixed version (the commented lines are all junk, and can be removed): ``` def setupUi(self, Interface): # Interface.setObjectName("Interface") # Interface.resize(1152, 1009) # sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed) # sizePolicy.setHorizontalStretch(0) # sizePolicy.setVerticalStretch(0) # sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth()) # Interface.setSizePolicy(sizePolicy) # Interface.setMouseTracking(False) # icon = QtGui.QIcon() self.centralWidget = QtWidgets.QWidget(Interface) # self.centralWidget.setObjectName("centralWidget") layout = QtWidgets.QVBoxLayout(self.centralWidget) self.scrollArea = QtWidgets.QScrollArea(self.centralWidget) # self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951)) # self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) # self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) # self.scrollArea.setWidgetResizable(True) # self.scrollArea.setObjectName("scrollArea") # self.scrollArea.setEnabled(True) layout.addWidget(self.scrollArea) self.scrollAreaWidgetContents = QtWidgets.QWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1112, 932)) # self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents") self.scrollArea.setWidget(self.scrollAreaWidgetContents) layout = QtWidgets.QHBoxLayout(self.scrollAreaWidgetContents) # self.horizontalLayout.setObjectName("horizontalLayout") # add child widgets to this layout... Interface.setCentralWidget(self.centralWidget) ```
The scrollbars are grayed out because you made them always visible by setting the scrollbar policy to `Qt.ScrollBarAlwaysOn`, but there is actually no content to be scrolled, so they are disabled. If you want scrollbars to appear only when they are needed you need to use `Qt.ScrollBarAsNeeded`. There is no content to be scrolled because there is only 1 widget in the `QHBoxLayout` (see `self.scrollAreaWidgetContents`).

Also, if this method is being executed from a `QMainWindow`, you have an error when setting the central widget: `self.centralWidget` is a method to retrieve the central widget. It's working because you are overwriting it with a `QWidget` instance (and I believe Python allows you to do that). To correctly set the central widget you need to use `setCentralWidget()` on the `QMainWindow`.

```
def setupUi(self, Interface):
    Interface.setObjectName("Interface")
    Interface.resize(1152, 1009)
    sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
    sizePolicy.setHorizontalStretch(0)
    sizePolicy.setVerticalStretch(0)
    sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth())
    Interface.setSizePolicy(sizePolicy)
    Interface.setMouseTracking(False)
    icon = QtGui.QIcon()

    self.horizontalLayout = QtWidgets.QHBoxLayout()
    self.horizontalLayout.setObjectName("horizontalLayout")

    self.scrollArea = QtWidgets.QScrollArea()
    self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951))
    self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
    self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
    self.scrollArea.setWidgetResizable(True)
    self.scrollArea.setObjectName("scrollArea")
    self.scrollArea.setEnabled(True)

    self.horizontalLayout.addWidget(self.scrollArea)

    centralWidget = QtWidgets.QWidget()
    centralWidget.setObjectName("centralWidget")
    centralWidget.setLayout(self.horizontalLayout)

    self.setCentralWidget(centralWidget)
```

I left `Interface` out since I don't know what it is, but the rest should be ok.
52,456,516
I am doing a list comprehension in Python 3 for a list of lists of serial numbers according to a given range. My code works fine, but the problem is that with a big range it slows down and takes a lot of time. Is there any way to do it another way? I don't want to use numpy.

```
global xy
xy = 0
a = 3

def func(x):
    global xy
    xy += 1
    return xy

my_list = [[func(x) for x in range(a)] for x in range(a)]
xy = 0
print(my_list)
```
2018/09/22
[ "https://Stackoverflow.com/questions/52456516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4217784/" ]
In exactly the form you asked:

```
[list(range(x, x + a)) for x in range(1, a**2 + 1, a)]
```

**Optimizations**

If you're only iterating over inner list elements:

```
(range(x, x + a) for x in range(1, a**2 + 1, a))
```

If you're only indexing inner elements (for indices `inner` and `outer`):

```
range(1, a**2 + 1)[inner * a + outer]
```

And if you're doing both:

```
[range(x, x + a) for x in range(1, a**2 + 1, a)]
```

Note: I think you can eke a little more performance out of these by combining them with mousetail's answer.
Try:

```
t = [list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]
```

It seems pretty fast even for a > 100, though not fast enough for use inside a graphical loop or event handler.
52,456,516
I am doing a list comprehension in Python 3 for a list of lists of serial numbers according to a given range. My code works fine, but the problem is that with a big range it slows down and takes a lot of time. Is there any way to do it another way? I don't want to use numpy.

```
global xy
xy = 0
a = 3

def func(x):
    global xy
    xy += 1
    return xy

my_list = [[func(x) for x in range(a)] for x in range(a)]
xy = 0
print(my_list)
```
2018/09/22
[ "https://Stackoverflow.com/questions/52456516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4217784/" ]
In exactly the form you asked:

```
[list(range(x, x + a)) for x in range(1, a**2 + 1, a)]
```

**Optimizations**

If you're only iterating over inner list elements:

```
(range(x, x + a) for x in range(1, a**2 + 1, a))
```

If you're only indexing inner elements (for indices `inner` and `outer`):

```
range(1, a**2 + 1)[inner * a + outer]
```

And if you're doing both:

```
[range(x, x + a) for x in range(1, a**2 + 1, a)]
```

Note: I think you can eke a little more performance out of these by combining them with mousetail's answer.
There are multiple ways to do what you have done. You can get a clearer picture once we do a performance check:

```
import timeit

# Process 1
setup = '''
global xy
xy = 0
a = 30
def func(x):
    global xy
    xy += 1
    return xy
'''
code = '''
my_list = []
for x in range(a):
    my_list1 = []
    for x in range(a):
        my_list1.append(func(x))
    my_list.append(my_list1)
'''
print(min(timeit.Timer(code, setup=setup).repeat(7, 1000)))

# Process 2
setup_2 = """
global xy
xy = 0
a = 30
def func(x):
    global xy
    xy += 1
    return xy
"""
print(min(timeit.Timer('[[func(x) for x in range(a)] for x in range(a)]', setup=setup_2).repeat(7, 1000)))

# Process 3
setup_3 = 'a=30'
print(min(timeit.Timer('[[(x*a)+b for b in range(a)] for x in range(a)]', setup=setup_3).repeat(7, 1000)))

# Process 4
setup_4 = 'a=30'
print(min(timeit.Timer('[list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]', setup=setup_4).repeat(7, 1000)))
```

OUTPUT

```
0.21270840199999996
0.1699727179999999
0.08638116599999979
0.028964930000000333
```

You can see that the last process is the fastest because all the operations use local variables rather than global variables; it is almost 10x faster.

```
[list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]
```

> Use local variables if possible:
>
> Python is faster at retrieving a local variable than a global variable. That is, avoid the "global" keyword.
52,456,516
I am doing a list comprehension in Python 3 for a list of lists of serial numbers according to a given range. My code works fine, but the problem is that with a big range it slows down and takes a lot of time. Is there any way to do it another way? I don't want to use numpy.

```
global xy
xy = 0
a = 3

def func(x):
    global xy
    xy += 1
    return xy

my_list = [[func(x) for x in range(a)] for x in range(a)]
xy = 0
print(my_list)
```
2018/09/22
[ "https://Stackoverflow.com/questions/52456516", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4217784/" ]
Try:

```
t = [list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]
```

It seems pretty fast even for a > 100, though not fast enough for use inside a graphical loop or event handler.
There are multiple ways to do what you have done. You can get a clearer picture once we do a performance check:

```
import timeit

# Process 1
setup = '''
global xy
xy = 0
a = 30
def func(x):
    global xy
    xy += 1
    return xy
'''
code = '''
my_list = []
for x in range(a):
    my_list1 = []
    for x in range(a):
        my_list1.append(func(x))
    my_list.append(my_list1)
'''
print(min(timeit.Timer(code, setup=setup).repeat(7, 1000)))

# Process 2
setup_2 = """
global xy
xy = 0
a = 30
def func(x):
    global xy
    xy += 1
    return xy
"""
print(min(timeit.Timer('[[func(x) for x in range(a)] for x in range(a)]', setup=setup_2).repeat(7, 1000)))

# Process 3
setup_3 = 'a=30'
print(min(timeit.Timer('[[(x*a)+b for b in range(a)] for x in range(a)]', setup=setup_3).repeat(7, 1000)))

# Process 4
setup_4 = 'a=30'
print(min(timeit.Timer('[list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]', setup=setup_4).repeat(7, 1000)))
```

OUTPUT

```
0.21270840199999996
0.1699727179999999
0.08638116599999979
0.028964930000000333
```

You can see that the last process is the fastest because all the operations use local variables rather than global variables; it is almost 10x faster.

```
[list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]
```

> Use local variables if possible:
>
> Python is faster at retrieving a local variable than a global variable. That is, avoid the "global" keyword.
18,464,237
I have been making changes and uploading them for testing on the AppEngine Python 2.7 runtime. When uploading, I only get as far as seeing the message "Getting current resource limits". The next expected message is "Scanning files on local disk", but this never comes; I always get an error instead.

My last successful deploy was at 11:05 AM (UK time). My next attempt to deploy was at 11:09 AM, and this failed with a 503 error

```
ERROR __init__.py:1294 An error occurred processing file '': HTTP Error 503: Service Unavailable. Aborting.
```

Ever since 11:09 I've been getting HTTP 503 errors. I have also had one HTTP 500 error. I normally use the command line and have tried this multiple times, and have also tried using the GUI "Google AppEngine Launcher". It was when using the GUI that I got the 500 error; using the command line always gives me 503.

```
ERROR __init__.py:1294 An error occurred processing file '': HTTP Error 500: Internal Server Error. Aborting.
```

I have tried getting more information to be reported by using the --verbose and --noisy options, but they don't give any further information. The command line I am using is:

```
python appcfg.py --email=*my_email* update "*my_path*" -A *alternate_appID* -V *alternateVersion*
```

This command was working at 11:05, but 4 minutes later it does not. In those 4 minutes I only changed a single line of code (verified using git diff) and have tried rolling that change back so that the code being deployed is the same as the code that I know already deployed fine.

Am I forgetting something obvious or doing something stupid? Is this happening to anyone else? Has this previously happened to anyone else? If so, how did you resolve it?
2013/08/27
[ "https://Stackoverflow.com/questions/18464237", "https://Stackoverflow.com", "https://Stackoverflow.com/users/498463/" ]
I encountered the same problem. Fortunately, I was able to deploy the project successfully. Just stop the **appengine** from running your project and try to deploy it again. Hope this helps.
You wouldn't believe it.... I tried deploying again just before posting this question. Everything the same, and it works now. 12:30 UK time.... so just a bit of an AppEngine issue? Did anyone else run into this? Can anyone explain what was going on? I can't afford to spend 90 minutes messing about every time I want to deploy my app...
61,770,551
I have a Django project running on my local machine with dev server `manage.py runserver` and I'm trying to run it with Uvicorn before I deploy it in a virtual machine. So in my virtual environment I installed `uvicorn` and started the server, but as you can see below it fails to find Django static css files. ``` (envdev) user@lenovo:~/python/myproject$ uvicorn myproject.asgi:application --port 8001 Started server process [17426] Waiting for application startup. ASGI 'lifespan' protocol appears unsupported. Application startup complete. Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) INFO: 127.0.0.1:45720 - "GET /admin/ HTTP/1.1" 200 OK Not Found: /static/admin/css/base.css Not Found: /static/admin/css/base.css INFO: 127.0.0.1:45720 - "GET /static/admin/css/base.css HTTP/1.1" 404 Not Found Not Found: /static/admin/css/dashboard.css Not Found: /static/admin/css/dashboard.css INFO: 127.0.0.1:45724 - "GET /static/admin/css/dashboard.css HTTP/1.1" 404 Not Found Not Found: /static/admin/css/responsive.css Not Found: /static/admin/css/responsive.css INFO: 127.0.0.1:45726 - "GET /static/admin/css/responsive.css HTTP/1.1" 404 Not Found ``` Uvicorn has an option `--root-path` so I tried to specify the directory where these files are located but there is still the same error (path is correct). How can I solve this issue?
2020/05/13
[ "https://Stackoverflow.com/questions/61770551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3423825/" ]
When not running with the built-in development server, you'll need to either

* use [whitenoise](http://whitenoise.evans.io/en/stable/) which does this as a Django/WSGI middleware (my recommendation; see the sketch after this list)
* use [the classic staticfile deployment procedure which collects all static files into some root](https://docs.djangoproject.com/en/3.0/howto/static-files/#deployment), where a static file server is expected to serve them. Uvicorn doesn't seem to support static file serving, so you might need something else too (see e.g. <https://www.uvicorn.org/deployment/#running-behind-nginx>).
* (very, very unpreferably!) [have Django serve static files like it does in dev](https://docs.djangoproject.com/en/3.0/howto/static-files/#serving-static-files-during-development)
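To illustrate the whitenoise route, a minimal settings sketch might look like this. The middleware path and storage backend are from the whitenoise docs; the `STATIC_ROOT` location and the os.path-style `BASE_DIR` are assumptions about your project layout:

```python
# settings.py (sketch, not a drop-in config)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # directly after SecurityMiddleware
    # ... the rest of your middleware ...
]

STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # assumed location
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
```

After that, run `python manage.py collectstatic` once, and uvicorn can keep serving the app while whitenoise handles `/static/`.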
Add the code below to your settings.py file:

```
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
```

Add the code below to your urls.py:

```
from django.conf.urls.static import static
from django.conf import settings

urlpatterns = [
    # ... your url patterns ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

Then run the command below (the static directory must exist):

```
python manage.py collectstatic --noinput
```

Start the server:

```
uvicorn main.asgi:application --host 0.0.0.0
```
50,396,802
I want to check if an arg was actually passed on the command line when there is a default value for that arg. Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:

```
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser

MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```

Which prints all the args with the values, but I can't tell if one of the args was set or just has the default value in place. In the case I am looking at, SCons' 'num\_jobs' option defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.

I can use sys.argv like this, but would prefer a cleaner option using the option parser:

```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True

# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
```

I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from Python inside a SCons script, the SCons parser doesn't seem to work; it fails to recognize valid options. If someone has an example of how to check if the arg was passed with just Python's optparse, that should be sufficient for me to work it into the SCons option parser.
2018/05/17
[ "https://Stackoverflow.com/questions/50396802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644736/" ]
Figured it out. I didn't add my redirect uri to the list of authorized ones as the option doesn't appear if you set your app type to "Other". I set it to "Web Application" (even though it isn't) and added my redirect uri and that fixed it.
Your code snippet lists "<https://googleapis.com/oauth/v4/token>". The token endpoint is "<https://googleapis.com/oauth2/v4/token>".
50,396,802
I want to check if an arg was actually passed on the command line when there is a default value for that arg. Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:

```
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser

MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```

Which prints all the args with the values, but I can't tell if one of the args was set or just has the default value in place. In the case I am looking at, SCons' 'num\_jobs' option defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.

I can use sys.argv like this, but would prefer a cleaner option using the option parser:

```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True

# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
```

I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from Python inside a SCons script, the SCons parser doesn't seem to work; it fails to recognize valid options. If someone has an example of how to check if the arg was passed with just Python's optparse, that should be sufficient for me to work it into the SCons option parser.
2018/05/17
[ "https://Stackoverflow.com/questions/50396802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644736/" ]
For me the issue was that I was sending a GET. You have to send a POST.
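For illustration, a minimal sketch of the token exchange as a POST using the `requests` library; every credential value below is a placeholder, and `https://oauth2.googleapis.com/token` is Google's current documented token endpoint:

```python
import requests

# all values below are placeholders for your app's own credentials
resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "code": "AUTHORIZATION_CODE",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "redirect_uri": "YOUR_REDIRECT_URI",
        "grant_type": "authorization_code",
    },
)
print(resp.status_code, resp.json())
```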
Your code snippet lists "<https://googleapis.com/oauth/v4/token>". The token endpoint is "<https://googleapis.com/oauth2/v4/token>".
50,396,802
I want to check if an arg was actually passed on the command line when there is a default value for that arg. Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:

```
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser

MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```

Which prints all the args with the values, but I can't tell if one of the args was set or just has the default value in place. In the case I am looking at, SCons' 'num\_jobs' option defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.

I can use sys.argv like this, but would prefer a cleaner option using the option parser:

```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True

# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
```

I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from Python inside a SCons script, the SCons parser doesn't seem to work; it fails to recognize valid options. If someone has an example of how to check if the arg was passed with just Python's optparse, that should be sufficient for me to work it into the SCons option parser.
2018/05/17
[ "https://Stackoverflow.com/questions/50396802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644736/" ]
Your code snippet lists "<https://googleapis.com/oauth/v4/token>". The token endpoint is "<https://googleapis.com/oauth2/v4/token>".
In my edge case, I was building the token request like this:

```
val tokenRequest = Request.Builder()
    .method(
        "POST",
        "".toRequestBody("application/x-www-form-urlencoded".toMediaType())
    )
```

The snippet above didn't work until I changed "POST" to "post".
50,396,802
I want to check if an arg was actually passed on the command line when there is a default value for that arg. Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:

```
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser

MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```

Which prints all the args with the values, but I can't tell if one of the args was set or just has the default value in place. In the case I am looking at, SCons' 'num\_jobs' option defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.

I can use sys.argv like this, but would prefer a cleaner option using the option parser:

```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True

# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
```

I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from Python inside a SCons script, the SCons parser doesn't seem to work; it fails to recognize valid options. If someone has an example of how to check if the arg was passed with just Python's optparse, that should be sufficient for me to work it into the SCons option parser.
2018/05/17
[ "https://Stackoverflow.com/questions/50396802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644736/" ]
Figured it out. I didn't add my redirect uri to the list of authorized ones as the option doesn't appear if you set your app type to "Other". I set it to "Web Application" (even though it isn't) and added my redirect uri and that fixed it.
In my edge case, I was building the token request like this:

```
val tokenRequest = Request.Builder()
    .method(
        "POST",
        "".toRequestBody("application/x-www-form-urlencoded".toMediaType())
    )
```

The snippet above didn't work until I changed "POST" to "post".
50,396,802
I want to check if an arg was actually passed on the command line when there is a default value for that arg. Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:

```
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser

MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```

Which prints all the args with the values, but I can't tell if one of the args was set or just has the default value in place. In the case I am looking at, SCons' 'num\_jobs' option defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.

I can use sys.argv like this, but would prefer a cleaner option using the option parser:

```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True

# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
```

I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from Python inside a SCons script, the SCons parser doesn't seem to work; it fails to recognize valid options. If someone has an example of how to check if the arg was passed with just Python's optparse, that should be sufficient for me to work it into the SCons option parser.
2018/05/17
[ "https://Stackoverflow.com/questions/50396802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644736/" ]
For me the issue was that I was sending a GET. You have to send a POST.
In my edge case, I was building the token request like this:

```
val tokenRequest = Request.Builder()
    .method(
        "POST",
        "".toRequestBody("application/x-www-form-urlencoded".toMediaType())
    )
```

The snippet above didn't work until I changed "POST" to "post".
21,678,165
It's still not a year that I've been coding in Python in my spare time, and it is my first programming language. I need to generate series of numbers in a range ("1, 2, 3...99; 1, 2, 3...99;") and match them against a list. I managed to do this, but the code looks pathetic, and I failed at some tasks, for example skipping series with duplicated/non-unique numbers in an elegant way, or creating one single function that takes the length of the series as a parameter (for example 3 for 1-99, 1-99 and 1-99; 2 for 1-99 and 1-99; and so on) to avoid handwriting each series' function.

What I have been capable of doing, after exploring numpy's multidimensional range ndindex (slow in my tests), creating pre-filled lists (too huge), using set to uniquify (slow), trying all() for the "if n in y...", and many other things, is a very basic piece of code which is also the fastest so far. After many changes I'm simply back at the start: I moved the "if n != n" checks to the beginning of each for cycle in order to save loops, and now have no ideas on how to improve the function, or how to transform it into a master function which generates n series of numbers. Any suggestion is really appreciated!

```
y = [#numbers]

def four(a,b):
    for i in range(a,b):
        for ii in range(a,b):
            if i != ii:
                for iii in range(a,b):
                    if i != iii and ii != iii:
                        for iiii in range(a,b):
                            if i != iiii and ii != iiii and iii != iiii:
                                if i in y and ii in y and iii in y and iiii in y: #exact match
                                    #do something

four(1,100)
```
2014/02/10
[ "https://Stackoverflow.com/questions/21678165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1348293/" ]
Thanks to user2357112's suggestion of using itertools.permutations(xrange(a, b), 4), I did the following and I'm very satisfied; it is fast and nice:

```
import itertools

def solution(d, a, q):
    for i in itertools.permutations(xrange(d, a), q):
        pass  #do something with each permutation

solution(1, 100, 4)
```

Thanks to this brilliant community as well.
Create an array of 100 elements and fill it with the numbers 0 to 99. Then use a random generator to shuffle them, and just take the needed number of elements.
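A minimal Python sketch of that idea (the 1–99 range and a series length of 4 are assumptions based on the question):

```python
import random

numbers = list(range(1, 100))  # 1..99
random.shuffle(numbers)        # mix them in place
series = numbers[:4]           # take as many unique numbers as needed
print(series)
```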
50,595,357
I have a question about `while` in Python: how do I collect the result values when using `while`?

```
ColumnCount_int = 3

while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    Blank_text = ""
    Blank_text = Blank_text + ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1
    print(Blank_text)
```

The result shows as below:

```
<colspec colnum="3" colname="3">
<colspec colnum="2" colname="2">
<colspec colnum="1" colname="1">
```

but I want to collect all the results like this:

```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```

Would you tell me which part is wrong?
2018/05/30
[ "https://Stackoverflow.com/questions/50595357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9817554/" ]
You can fix the code as follows, where `Blank_text = ""` is moved before the `while` loop and `print(Blank_text)` is called after the loop. (**Note**: *since the text accumulates, the variable name is changed to `accumulated_text` as suggested in the comment*):

```
ColumnCount_int = 3
accumulated_text = ""  # variable name changed, used instead of Blank_text

while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    accumulated_text = accumulated_text + ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1

print(accumulated_text)
```

Result:

```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```

Update:
=======

However, the same result can be had in a more compact way with `.join`:

```
result = ''.join('<colspec colnum="{0}" colname="{1}">'.format(i,i) for i in range(3,0,-1))
print(result)
```
Try appending each piece to a new list `l`, then use [`''.join(l)`](https://www.tutorialspoint.com/python3/string_join.htm) to output it on one line:

```
l = []
ColumnCount_int = 3

while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    Blank_text = ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1
    l.append(Blank_text)

print(''.join(l))
```

Output:

```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```

Shorter Way
===========

Also try this:

```
l = []
ColumnCount_int = 3

while ColumnCount_int > 0 :
    l.append(str('<colspec colnum="'+str(ColumnCount_int)+'"'' ''colname="'+str(ColumnCount_int)+'">'))
    ColumnCount_int-=1

print(''.join(l))
```

Output:

```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```
61,027,226
I have already translated a number of similar templates in this project in the same way; they live in the same directory and are nearly identical to this one. But this template has me stuck. Without a translation tag `{% blocktrans %}` it works properly and renders the variable.

[![enter image description here](https://i.stack.imgur.com/IBtrs.jpg)](https://i.stack.imgur.com/IBtrs.jpg)

```
c_filter_size.html

{% load i18n %}
{% if ffilter %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">{% trans "The filter sizing is successfully performed." %} </p></small></h6>

{% if ffilter1 and ffilter.wfsubtype != ffilter1.wfsubtype %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">
  If you insist on the fineness, but allow
  to reduce flow rate up to {{ffilter1.flowrate}} m3/hr the filter size and therefore filter
  price can be reduced.
</p></small></h6>
{% endif %}
```

With the translation tag `{% blocktrans %}` it works neither in English nor in the translated language for the rendered variable. Other similar templates work smoothly.

[![enter image description here](https://i.stack.imgur.com/MrENi.jpg)](https://i.stack.imgur.com/MrENi.jpg)

```
c_filter_size.html

{% load i18n %}
{% if ffilter %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">{% trans "The filter sizing is successfully performed." %} </p></small></h6>

{% if ffilter1 and ffilter.wfsubtype != ffilter1.wfsubtype %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">
  {% blocktrans %}
  If you insist on the fineness, but allow
  to reduce flow rate up to {{ffilter1.flowrate}} m3/hr the filter size and therefore filter
  price can be reduced.
  {% endblocktrans %}
</p></small></h6>
{% endif %}
```

[![enter image description here](https://i.stack.imgur.com/cknNR.jpg)](https://i.stack.imgur.com/cknNR.jpg)

```
django.po

...
#: rsf/templates/rsf/comments/c_filter_size.html:11
#, python-format
msgid ""
"\n"
"  If you insist on the fineness, but allow\n"
"  to reduce flow rate up to <b>%(ffilter1.flowrate)s</b> m3/hr the "
"filter size and therefore filter\n"
"  price can be reduced.\n"
"  "
msgstr ""
"\n"
"  Если тонкость фильтрации изменить невозможно, но возможно уменьшить "
"расход до <b>%(ffilter1.flowrate)s</b> м3/час, то "
"размер фильтра и соответственно его цена могут быть уменьшены."
...
```

Thank you
2020/04/04
[ "https://Stackoverflow.com/questions/61027226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13099284/" ]
You can't access variable attributes inside `blocktrans`. Instead of using `{{ffilter1.flowrate}}` inside `blocktrans`, you should use the keyword `with`:

```
{% blocktrans with flowrate=ffilter1.flowrate %}
If you insist on the fineness, but allow
to reduce flow rate up to {{ flowrate }} m3/hr the filter size and therefore filter
price can be reduced.
{% endblocktrans %}
```

Also, to avoid having indents inside your translation, use the keyword `trimmed`:

```
{% blocktrans with flowrate=ffilter1.flowrate trimmed %}
```

Source: <https://docs.djangoproject.com/en/3.0/topics/i18n/translation/#blocktrans-template-tag>
Maybe you could break up your trans blocks into two sections:

```
{% blocktrans %}
If you insist on the fineness, but allow
to reduce flow rate up to
{% endblocktrans %}
{{ffilter1.flowrate}}
{% blocktrans %}
m3/hr the filter size and therefore filter
price can be reduced.
{% endblocktrans %}
```

It doesn't look the best, but I think it is impossible to do an attribute lookup on a variable inside the trans block.

On a different note, I noticed you have an error in your inline styles: you have an extra " mark in the following line. `<div class="badge badge-success text-wrap" style="width: 12rem;"">`
48,376,714
I am using Python 3.5, Anaconda 4.2 and Ubuntu 16.04. I get an error in the `train.py` file at the line `from object_detection import trainer`: `no module named object_detection`. But I think the problem is with my Python 3.5 setup. Can anyone help me with this error? [![enter image description here](https://i.stack.imgur.com/wYUKq.png)](https://i.stack.imgur.com/wYUKq.png)
2018/01/22
[ "https://Stackoverflow.com/questions/48376714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9250190/" ]
This happened to me too. Just copy the "object\_detection" folder from the "models" folder into the folder where you are running train.py. I posted the link to the folder on GitHub below, but you had better copy the folder from your local files so that it matches your code perfectly, in case you are using an older version of the object detection API. There are more professional ways to solve the problem, I think, but I just used the easiest one.

Link to the object\_detection folder in the tensorflow GitHub repo: <https://github.com/tensorflow/models/tree/master/research/object_detection>
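A more conventional alternative, sketched here under the assumption that the TensorFlow models repository is checked out at `~/models` (a hypothetical path), is to put the `research` directory on the module search path instead of copying folders:

```
import os
import sys

# hypothetical checkout location of https://github.com/tensorflow/models
models_research = os.path.expanduser("~/models/research")
sys.path.append(models_research)  # makes `import object_detection` resolvable

from object_detection import trainer
```

Setting the `PYTHONPATH` environment variable to the same directory achieves the same effect without touching the script.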
Move the object\_detection folder to the upper folder:

```
cp -r /models/research/object_detection object_detection
```

(`-r` is needed because object\_detection is a directory.)
24,469,353
New to Python, so... I have a list of entries with two fields each, like so:

```
>>>print langs
[{u'code': u'en', u'name': u'ENGLISH'},
{u'code': u'hy', u'name': u'ARMENIAN'},
...
{u'code': u'ms', u'name': u'MALAY'}]
```

I would like to add another entry with code: xx and name: UNKNOWN. I tried `langs.append` and so on, but can't get the hang of it.
2014/06/28
[ "https://Stackoverflow.com/questions/24469353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1662464/" ]
It's pretty easy:

```
>>> langs.append({u'code': u'xx', u'name': u'UNKNOWN'})
```

But I'd use `collections.namedtuple` for this kind of job (when columns are well-defined):

```
In [1]: from collections import namedtuple

In [2]: Lang = namedtuple("Lang", ("code", "name"))

In [3]: langs = []

In [4]: langs.append(Lang("xx", "unknown"))

In [5]: langs[0]
Out[5]: Lang(code='xx', name='unknown')

In [6]: langs[0].code
Out[6]: 'xx'

In [7]: langs[0].name
Out[7]: 'unknown'
```
This is one way of doing it... ``` langs += [{u'code': u'xx', u'name': u'UNKNOWN'}] ```
74,006,179
I'm new to Python and I have a problem with filtering a list of dictionaries. I searched a really long time for a solution and asked on several Discord servers, but no one could really help me. If I have a list like this:

```
[
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

How do I filter out only one particular entry by its puuid value? So it looks like this: puuid = **"32d72h5t5gu"**

```
[
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

with all other entries of the list **removed**.
2022/10/09
[ "https://Stackoverflow.com/questions/74006179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20198636/" ]
Use a list comprehension to cycle through the dictionaries in your list and keep only the one that meets the specified condition. Given your data:

```
oldlist = [
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

```
newlist = [i for i in oldlist if (i['puuid'] == '32d72h5t5gu')]
```
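If exactly one entry is expected, a slightly different sketch with `next()` stops at the first hit instead of building a whole list; the names here are illustrative:

```
target = "32d72h5t5gu"
match = next((d for d in oldlist if d['puuid'] == target), None)  # None if nothing matches
```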
You want something like this:

```
new_list = [x for x in original_list if x['puuid'] == value]
```

It's called a list comprehension.
74,006,179
I'm new to Python and I have a problem with filtering a list of dictionaries. I searched a really long time for a solution and asked on several Discord servers, but no one could really help me. If I have a list like this:

```
[
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

How do I filter out only one particular entry by its puuid value? So it looks like this: puuid = **"32d72h5t5gu"**

```
[
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

with all other entries of the list **removed**.
2022/10/09
[ "https://Stackoverflow.com/questions/74006179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20198636/" ]
Use a list comprehension to cycle through the dictionaries in your list and keep only the one that meets the specified condition. Given your data:

```
oldlist = [
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

```
newlist = [i for i in oldlist if (i['puuid'] == '32d72h5t5gu')]
```
First of all, you should search for an existing question similar to yours:

* <https://www.programiz.com/python-programming/list-comprehension>
* [How to filter a dictionary according to an arbitrary condition function?](https://stackoverflow.com/questions/2844516/how-to-filter-a-dictionary-according-to-an-arbitrary-condition-function)

This gives you half of the answer you're looking for: you can use a **`list` comprehension**.

```
dictionaries_list = [
    {"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
    {"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
    {"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]

result = [d for d in dictionaries_list if d["puuid"] == "32d72h5t5gu"]
```
74,006,179
I'm new to Python and I have a problem with filtering a list of dictionaries. I searched a really long time for a solution and asked on several Discord servers, but no one could really help me. If I have a list like this:

```
[
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

How do I filter out only one particular entry by its puuid value? So it looks like this: puuid = **"32d72h5t5gu"**

```
[
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

with all other entries of the list **removed**.
2022/10/09
[ "https://Stackoverflow.com/questions/74006179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20198636/" ]
Use a list comprehension to cycle through the dictionaries in your list and keep only the one that meets the specified condition. Given your data:

```
oldlist = [
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```

```
newlist = [i for i in oldlist if (i['puuid'] == '32d72h5t5gu')]
```
```
dictionary = [
    {"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"},
    {"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"},
    {"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]

new_list = [i for i in dictionary if (i['puuid'] == '32d72h5t5gu')]
```

[List comprehension](https://www.w3schools.com/python/python_lists_comprehension.asp)
73,323,330
So I'm learning some Python, but I got a "list index out of range" error at this point. If I set my header index to 0 it works (not the way I expect it to, but OK), but if I use a higher index for the header it won't. Can you help me figure out why? [photo of my csv file](https://i.stack.imgur.com/cDD13.png) [here's the first picture of my code][and 2nd pic](https://i.stack.imgur.com/Mi0AC.png)[3](https://i.stack.imgur.com/UrhK8.png)
2022/08/11
[ "https://Stackoverflow.com/questions/73323330", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19744452/" ]
When you set your header index to `0`, what is the output? Assuming your delimiter is a `tab`:

```py
import csv

with open("sample.csv", "r") as csvfile:
    reader = csv.reader(csvfile, delimiter='\t')
    headers = next(reader, None)
    print(headers[1])
```
My delimiter was `\t`; after I changed the reader to use it, the problem was completely solved.
67,933,453
I have a file that consists of the data shown below:

```
GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*642510*18762293*1*0*0*0*0*0*0*277*000000001*005010X214
BHT*642510*18762293*1*0*0*0*0*0*0*0085*08*1*20210513*101046*TH
NM1*642510*18762293*1*1*1*1*1*0*0*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*642510*18762293*1*1*1*4*1*0*0*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```

I want to delete the data from the 1st \* up to the 9th \*. I have written some code in Python, but I'm not sure how to use a regular expression here, since the string itself contains "\*" characters. My Python code is:

```
import os
import re

path2 = "C:/Users/Lsaxena2/Desktop/RI Stuff/RI RITM1456876 Response files/Processed files with no Logs"
files = os.listdir(path2)
print(files)

for x in files:
    with open(path2+'/'+x,'r') as f:
        newText = f.read().replace("*446607*12004230*","*")

    with open(path2+'/'+x,'w') as f:
        f.write(newText)
```

After the update the data should look like:

```
GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*277*000000001*005010X214
BHT*0085*08*1*20210513*101046*TH
NM1*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```
2021/06/11
[ "https://Stackoverflow.com/questions/67933453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15153070/" ]
<https://regex101.com/r/khgAVj/1>

[![enter image description here](https://i.stack.imgur.com/5Ru4N.png)](https://i.stack.imgur.com/5Ru4N.png)

Regex pattern: `(\*\w+){9}\*`

An explanation of the regex pattern can be found on the right side of the regex101 page, where there is also a code generator; replacing/removing the matched section should be fairly trivial. Stack Overflow isn't a place that writes code for you, it helps you debug your code, so I will not give a finished solution but merely point you in the right direction.
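For completeness, a minimal sketch of applying this idea with Python's `re` module; the trailing `\*` is dropped from the pattern above so the separator before the next field survives, `count=1` restricts the removal to the first run of nine fields per line, and the filename is a placeholder:

```
import re

pattern = re.compile(r'(\*\w+){9}')  # nine consecutive *-prefixed fields

with open('input.txt') as f:         # 'input.txt' is a placeholder
    cleaned = [pattern.sub('', line, count=1) for line in f]
```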
You could write a regular expression to solve this, but if you know that you always want to remove the content between the first and ninth stars, then I would split your strings into lists by "\*" and rejoin select slices. For example: ```py mystring = "GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214" split_string = mystring.split("*") # ['GS', '642510', '18762293', '0', '0', '0', '0', '0', '0', '0', 'HN', '056000522', '601200162', '20210513', '101046', '200018825', 'X', '005010X214'] desired_slices = split_string[:1] + split_string[10:] pruned_string = "*".join(desired_slices) pruned_string # 'GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214' ```
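Applied to a whole file like the one in the question, a sketch (assuming every line carries the nine fields to drop; `path` stands in for the real file path):

```
def strip_first_nine(line):
    parts = line.rstrip("\n").split("*")
    return "*".join(parts[:1] + parts[10:])

with open(path) as f:                # path is a placeholder
    cleaned = [strip_first_nine(line) for line in f]

with open(path, "w") as f:
    f.write("\n".join(cleaned) + "\n")
```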
67,933,453
I have a file that consists of the data shown below:

```
GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*642510*18762293*1*0*0*0*0*0*0*277*000000001*005010X214
BHT*642510*18762293*1*0*0*0*0*0*0*0085*08*1*20210513*101046*TH
NM1*642510*18762293*1*1*1*1*1*0*0*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*642510*18762293*1*1*1*4*1*0*0*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```

I want to delete the data from the 1st \* up to the 9th \*. I have written some code in Python, but I'm not sure how to use a regular expression here, since the string itself contains "\*" characters. My Python code is:

```
import os
import re

path2 = "C:/Users/Lsaxena2/Desktop/RI Stuff/RI RITM1456876 Response files/Processed files with no Logs"
files = os.listdir(path2)
print(files)

for x in files:
    with open(path2+'/'+x,'r') as f:
        newText = f.read().replace("*446607*12004230*","*")

    with open(path2+'/'+x,'w') as f:
        f.write(newText)
```

After the update the data should look like:

```
GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*277*000000001*005010X214
BHT*0085*08*1*20210513*101046*TH
NM1*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```
2021/06/11
[ "https://Stackoverflow.com/questions/67933453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15153070/" ]
You could write a regular expression to solve this, but if you know that you always want to remove the content between the first and ninth stars, then I would split your strings into lists by "\*" and rejoin select slices. For example: ```py mystring = "GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214" split_string = mystring.split("*") # ['GS', '642510', '18762293', '0', '0', '0', '0', '0', '0', '0', 'HN', '056000522', '601200162', '20210513', '101046', '200018825', 'X', '005010X214'] desired_slices = split_string[:1] + split_string[10:] pruned_string = "*".join(desired_slices) pruned_string # 'GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214' ```
You can also use this [regex](https://regex101.com/r/oK0eO2/2):

```
/\*(.*?\*){9}/g
```

You did not mention in the question whether you want only the first occurrence of this pattern per line. If you use this regex, or the accepted one, be sure to replace only the first match in each line; if the pattern repeats later in the same line, I assume you don't want to replace that occurrence as well.

[![Accepted Answer Matches](https://i.stack.imgur.com/u05fU.png)](https://i.stack.imgur.com/u05fU.png)

As you can see, both my suggestion and the accepted answer will match every occurrence of the pattern, even when it reoccurs in the same line. It would be better to use the answer suggested by [Bruce Schultz](https://stackoverflow.com/users/13011577/bruce-schultz).
67,933,453
I have a file that consists of the data shown below:

```
GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*642510*18762293*1*0*0*0*0*0*0*277*000000001*005010X214
BHT*642510*18762293*1*0*0*0*0*0*0*0085*08*1*20210513*101046*TH
NM1*642510*18762293*1*1*1*1*1*0*0*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*642510*18762293*1*1*1*4*1*0*0*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```

I want to delete the data from the 1st \* up to the 9th \*. I have written some code in Python, but I'm not sure how to use a regular expression here, since the string itself contains "\*" characters. My Python code is:

```
import os
import re

path2 = "C:/Users/Lsaxena2/Desktop/RI Stuff/RI RITM1456876 Response files/Processed files with no Logs"
files = os.listdir(path2)
print(files)

for x in files:
    with open(path2+'/'+x,'r') as f:
        newText = f.read().replace("*446607*12004230*","*")

    with open(path2+'/'+x,'w') as f:
        f.write(newText)
```

After the update the data should look like:

```
GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*277*000000001*005010X214
BHT*0085*08*1*20210513*101046*TH
NM1*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```
2021/06/11
[ "https://Stackoverflow.com/questions/67933453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15153070/" ]
<https://regex101.com/r/khgAVj/1>

[![enter image description here](https://i.stack.imgur.com/5Ru4N.png)](https://i.stack.imgur.com/5Ru4N.png)

Regex pattern: `(\*\w+){9}\*`

An explanation of the regex pattern can be found on the right side of the regex101 page, where there is also a code generator; replacing/removing the matched section should be fairly trivial. Stack Overflow isn't a place that writes code for you, it helps you debug your code, so I will not give a finished solution but merely point you in the right direction.
You can also use this [regex](https://regex101.com/r/oK0eO2/2):

```
/\*(.*?\*){9}/g
```

You did not mention in the question whether you want only the first occurrence of this pattern per line. If you use this regex, or the accepted one, be sure to replace only the first match in each line; if the pattern repeats later in the same line, I assume you don't want to replace that occurrence as well.

[![Accepted Answer Matches](https://i.stack.imgur.com/u05fU.png)](https://i.stack.imgur.com/u05fU.png)

As you can see, both my suggestion and the accepted answer will match every occurrence of the pattern, even when it reoccurs in the same line. It would be better to use the answer suggested by [Bruce Schultz](https://stackoverflow.com/users/13011577/bruce-schultz).
58,335,344
I want to schedule my Python script to run every hour and save the data in an Elasticsearch index. For this I used a function I wrote, set\_interval, together with the tweepy library. But it doesn't work as I need it to: it runs every minute and saves the data in the index. Even after setting the seconds to 3600, it still runs every minute. How can I fix this so it runs on an hourly basis? Here's my Python script:

```
def call_at_interval(time, callback, args):
    while True:
        timer = Timer(time, callback, args=args)
        timer.start()
        timer.join()

def set_interval(time, callback, *args):
    Thread(target=call_at_interval, args=(time, callback, args)).start()

def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    screen_name = ""

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    # save most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        #print
        #"getting tweets before %s" % (oldest)

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        #print
        #"...%s tweets downloaded so far" % (len(alltweets))

    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # Peps8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)

    print('Run at:')
    print(datetime.now())
    print("\n")

set_interval(3600, get_all_tweets(screen_name))
```
2019/10/11
[ "https://Stackoverflow.com/questions/58335344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11931186/" ]
Why do you need so much complexity to do a task every hour? You can run the script every hour as shown below; note that it actually runs every (1 hour + the time the work takes):

```
import time

def do_some_work():
    print("Do some work")
    time.sleep(1)
    print("Some work is done!")

if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute the first time
    while True:
        do_some_work()
        time.sleep(3600)  # do work every one hour
```

If you want to run the script exactly every hour, do the following instead:

```
import time
import threading

def do_some_work():
    print("Do some work")
    time.sleep(4)
    print("Some work is done!")

if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute the first time
    while True:
        thr = threading.Thread(target=do_some_work)
        thr.start()
        time.sleep(3600)  # do work every one hour
```

In this case thr is supposed to finish its work faster than 3600 seconds; if it does not, you'll still get results, but the results will come from another attempt, as in the example below:

```
import time
import threading

class AttemptCount:
    def __init__(self, attempt_number):
        self.attempt_number = attempt_number

def do_some_work(_attempt_number):
    print(f"Do some work {_attempt_number.attempt_number}")
    time.sleep(4)
    print(f"Some work is done! {_attempt_number.attempt_number}")
    _attempt_number.attempt_number += 1

if __name__ == "__main__":
    attempt_number = AttemptCount(1)
    time.sleep(1)  # imagine you would like to start work in 1 minute the first time
    while True:
        thr = threading.Thread(target=do_some_work, args=(attempt_number, ),)
        thr.start()
        time.sleep(1)  # do work every one hour
```

The result you'll get in that case is:

```
Do some work 1
Do some work 1
Do some work 1
Do some work 1
Some work is done! 1
Do some work 2
Some work is done! 2
Do some work 3
Some work is done! 3
Do some work 4
Some work is done! 4
Do some work 5
Some work is done! 5
Do some work 6
Some work is done! 6
Do some work 7
Some work is done! 7
Do some work 8
Some work is done! 8
Do some work 9
```

I like using subprocess.Popen for such tasks: if the child subprocess does not finish its work within one hour for any reason, you just terminate it and start a new one. You can also use CRON to schedule a process to run every hour.
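Another option, sketched here assuming the third-party `schedule` package (`pip install schedule`) is acceptable, keeps the intent readable without hand-rolled timers; `job` is a placeholder wrapper around the question's fetch function:

```
import time
import schedule  # third-party: pip install schedule

def job():
    get_all_tweets("some_screen_name")  # hypothetical screen name

schedule.every().hour.do(job)

while True:
    schedule.run_pending()
    time.sleep(60)  # a one-minute polling granularity is enough here
```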
Get rid of all the timer code and just write the logic; **cron** will do the scheduling for you. Add this line to the end of the file opened by `crontab -e`:

```
0 * * * * /path/to/python /path/to/script.py
```

`0 * * * *` means "run at every *zero*th minute"; you can find more explanation [here](https://www.computerhope.com/unix/ucrontab.htm).

I also noticed you are recursively calling `get_all_tweets(screen_name)`; I think you have to call it from outside instead. Just keep your script to this much:

```
def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)

    screen_name = ""

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    # save most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        #print
        #"getting tweets before %s" % (oldest)

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        #print
        #"...%s tweets downloaded so far" % (len(alltweets))

    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # Peps8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)

get_all_tweets("")  #your screen name here
```
40,753,137
Suppose I have a list called name:

```
name = ['ACCBCDB', 'CCABACB', 'CAABBCB']
```

I want to use Python to remove the middle B from each element in the list. The output should display:

```
['ACCCDB', 'CCAACB', 'CAABCB']
```
2016/11/22
[ "https://Stackoverflow.com/questions/40753137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6169548/" ]
It is not possible to test for `NULL` values with comparison operators, such as `=`, `<`, or `<>`. You have to use the `IS NULL` and `IS NOT NULL` operators instead, or you have to use functions like `ISNULL()` and `COALESCE()` ``` Select * From MyTableName where [boolfieldX] <> 1 OR [boolfieldX] IS NULL ``` **OR** ``` Select * From MyTableName where ISNULL([boolfieldX],0) <> 1 ``` Read more about null comparison in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/17804/null-comparison) Read more about `ISNULL()` and `COALESCE()` Functions in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/25800/coalesce---)
Hi, try to use this query:

```
select * from mytablename 
where [boolFieldX] is null Or [boolFieldX] <> 1
```

(Note: the condition needs `Or` here; with `And`, no row can satisfy both `is null` and `<> 1` at once.)
40,753,137
Suppose I have a list called name:

```
name = ['ACCBCDB', 'CCABACB', 'CAABBCB']
```

I want to use Python to remove the middle B from each element in the list. The output should display:

```
['ACCCDB', 'CCAACB', 'CAABCB']
```
2016/11/22
[ "https://Stackoverflow.com/questions/40753137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6169548/" ]
It is not possible to test for `NULL` values with comparison operators, such as `=`, `<`, or `<>`. You have to use the `IS NULL` and `IS NOT NULL` operators instead, or you have to use functions like `ISNULL()` and `COALESCE()` ``` Select * From MyTableName where [boolfieldX] <> 1 OR [boolfieldX] IS NULL ``` **OR** ``` Select * From MyTableName where ISNULL([boolfieldX],0) <> 1 ``` Read more about null comparison in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/17804/null-comparison) Read more about `ISNULL()` and `COALESCE()` Functions in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/25800/coalesce---)
I believe that it's because null is an unknown value. You can't query against an unknown 'value'. In my opinion, referring to null as a 'value' is an oxymoron, because it represents an unknown. Using the operators "IS NULL" and "IS NOT NULL" in conjunction with whatever selection criteria will return the desired results. Alternatively, translating the null by converting it to an alternate value will work, like this: `IsNull([boolfield], 'some compatible value')`
40,753,137
Suppose I have a list called name:

```
name = ['ACCBCDB', 'CCABACB', 'CAABBCB']
```

I want to use Python to remove the middle B from each element in the list. The output should display:

```
['ACCCDB', 'CCAACB', 'CAABCB']
```
2016/11/22
[ "https://Stackoverflow.com/questions/40753137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6169548/" ]
I believe that it's because null is an unknown value. You can't query against an unknown 'value'. In my opinion, referring to null as a 'value' is an oxymoron, because it represents an unknown. Using the operators "IS NULL" and "IS NOT NULL" in conjunction with whatever selection criteria will return the desired results. Alternatively, translating the null by converting it to an alternate value will work, like this: `IsNull([boolfield], 'some compatible value')`
Hi, try to use this query:

```
select * from mytablename 
where [boolFieldX] is null Or [boolFieldX] <> 1
```

(Note: the condition needs `Or` here; with `And`, no row can satisfy both `is null` and `<> 1` at once.)
28,530,928
I am trying to have a Python script execute on the click of an image, but whenever the Python script gets called it throws a 500 Internal Server Error. Here is the text from the log:

```
[Sun Feb 15 20:31:04 2015] [error] (2)No such file or directory: exec of '/var/www/forward.py' failed
[Sun Feb 15 20:31:04 2015] [error] [client 192.168.15.51] Premature end of script headers: forward.py, referer: http://192.168.15.76/Testing.html
```

I don't understand why it is saying `Premature end of script headers`. I can execute a basic Python script that just prints an HTML header or some text. The file I am trying to execute just has some basic wiringPi code that runs fine via the `sudo python forward.py` command.

EDIT

This is the script I'm trying to execute:

```
#!/usr/bin/python
import time
import wiringpi2 as wiringpi

wiringpi.wiringPiSetupPhys()
wiringpi.pinMode(40, 1)
wiringpi.digitalWrite(40, 1)
time.sleep(2)
wiringpi.digitalWrite(40, 0)
wiringpi.pinMode(40, 0)
```
2015/02/15
[ "https://Stackoverflow.com/questions/28530928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4547894/" ]
Things to check:

1. Make sure the script is executable (`chmod +x forward.py`) and that it has a shebang line (e.g. `#!/usr/bin/env python`).
2. Make sure the script's owner & group match up with the user Apache is running as.
3. Try running the script from the command line as the Apache user. I see that you're testing it with `sudo`. Does it really need `sudo`? If so, the user Apache runs as will also need sudoers access.
4. Since it looks like you're using CGI, try adding `import cgitb; cgitb.enable()` to the top of your script, as in the sketch below. This will catch any exceptions and return them in the response instead of causing the script to die.
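For reference, a minimal sketch of a plain-CGI script skeleton; the key detail is that CGI requires a `Content-Type` header followed by a blank line before any other output, which is exactly what "Premature end of script headers" complains about when it is missing:

```
#!/usr/bin/env python
import cgitb
cgitb.enable()  # render tracebacks in the browser instead of a bare 500

print("Content-Type: text/html")  # the CGI header block must come first
print("")                         # blank line terminates the headers

print("<p>GPIO script ran.</p>")
# ...the wiringPi calls from the question would go here...
```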
In addition to @lost-theory's recommendations, I would also check to make sure: * You have executable permission on the script. * Apache has permission to access the folder/file of the script. For example: ``` <Directory /path/to/your/dir> <Files *> Order allow,deny Allow from all Require all granted </Files> </Directory> ```
44,579,050
I am using Python and OpenCV. I am trying to find the center and angle of the batteries:

[Image of batteries with random angles:](https://i.stack.imgur.com/qB8S7.jpg)

[![enter image description here](https://i.stack.imgur.com/DgD8p.jpg)](https://i.stack.imgur.com/DgD8p.jpg)

The code that I have is this:

```
import cv2
import numpy as np

img = cv2.imread('image/baterias2.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
img2 = cv2.imread('image/baterias4.png',0)

minLineLength = 300
maxLineGap = 5

edges = cv2.Canny(img2,50,200)
cv2.imshow('Canny',edges)

lines = cv2.HoughLinesP(edges,1,np.pi/180,80,minLineLength,maxLineGap)
print lines
salida = np.zeros((img.shape[0],img.shape[1]))

for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(salida,(x1,y1),(x2,y2),(125,125,125),0)# rgb

cv2.imshow('final',salida)
cv2.imwrite('result/hough.jpg',img)
cv2.waitKey(0)
```

Any ideas to work it out?
2017/06/16
[ "https://Stackoverflow.com/questions/44579050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8093906/" ]
Almost identical to [one of my other answers](https://stackoverflow.com/questions/43863931/contour-axis-for-image/43883758#43883758). PCA seems to work fine. ``` import cv2 import numpy as np img = cv2.imread("test_images/battery001.png") #load an image of a single battery img_gs = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to grayscale #inverted binary threshold: 1 for the battery, 0 for the background _, thresh = cv2.threshold(img_gs, 250, 1, cv2.THRESH_BINARY_INV) #From a matrix of pixels to a matrix of coordinates of non-black points. #(note: mind the col/row order, pixels are accessed as [row, col] #but when we draw, it's (x, y), so have to swap here or there) mat = np.argwhere(thresh != 0) #let's swap here... (e. g. [[row, col], ...] to [[col, row], ...]) mat[:, [0, 1]] = mat[:, [1, 0]] #or we could've swapped at the end, when drawing #(e. g. center[0], center[1] = center[1], center[0], same for endpoint1 and endpoint2), #probably better performance-wise mat = np.array(mat).astype(np.float32) #have to convert type for PCA #mean (e. g. the geometrical center) #and eigenvectors (e. g. directions of principal components) m, e = cv2.PCACompute(mat, mean = np.array([])) #now to draw: let's scale our primary axis by 100, #and the secondary by 50 center = tuple(m[0]) endpoint1 = tuple(m[0] + e[0]*100) endpoint2 = tuple(m[0] + e[1]*50) red_color = (0, 0, 255) cv2.circle(img, center, 5, red_color) cv2.line(img, center, endpoint1, red_color) cv2.line(img, center, endpoint2, red_color) cv2.imwrite("out.png", img) ``` [![enter image description here](https://i.stack.imgur.com/ufsce.png)](https://i.stack.imgur.com/ufsce.png)
You can use the following code for reference:

```
import cv2
import imutils
import numpy as np

PIC_PATH = r"E:\temp\Battery.jpg"

image = cv2.imread(PIC_PATH)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

edged = cv2.Canny(gray, 100, 220)
kernel = np.ones((5,5),np.uint8)
closed = cv2.morphologyEx(edged, cv2.MORPH_CLOSE, kernel)

cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cv2.drawContours(image, cnts, -1, (0, 255, 0), 4)

cv2.imshow("Output", image)
cv2.waitKey(0)
```

The resulting picture is:

[![enter image description here](https://i.stack.imgur.com/Jgnmp.png)](https://i.stack.imgur.com/Jgnmp.png)
44,579,050
I am using Python and OpenCV. I am trying to find the center and angle of the batteries:

[Image of batteries with random angles:](https://i.stack.imgur.com/qB8S7.jpg)

[![enter image description here](https://i.stack.imgur.com/DgD8p.jpg)](https://i.stack.imgur.com/DgD8p.jpg)

The code that I have is this:

```
import cv2
import numpy as np

img = cv2.imread('image/baterias2.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
img2 = cv2.imread('image/baterias4.png',0)

minLineLength = 300
maxLineGap = 5

edges = cv2.Canny(img2,50,200)
cv2.imshow('Canny',edges)

lines = cv2.HoughLinesP(edges,1,np.pi/180,80,minLineLength,maxLineGap)
print lines
salida = np.zeros((img.shape[0],img.shape[1]))

for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(salida,(x1,y1),(x2,y2),(125,125,125),0)# rgb

cv2.imshow('final',salida)
cv2.imwrite('result/hough.jpg',img)
cv2.waitKey(0)
```

Any ideas to work it out?
2017/06/16
[ "https://Stackoverflow.com/questions/44579050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8093906/" ]
* To find the center of an object, you can use [Moments](http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments). Threshold the image and get the contours of the object with [findContours](http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours). Compute the moments with `cv.Moments(arr, binary=0) → moments`. As `arr` you can pass the contours. Then the coordinates of the center are computed as `x = m10/m00` and `y = m01/m00`.
* To get the orientation, you can draw a minimum-area rectangle around the object and compute the angle between the longer side of the rectangle and a vertical line.
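A minimal sketch of that recipe with the modern `cv2` API; the filename and threshold value are assumptions, and note that `cv2.findContours` returns two values in OpenCV 4.x but three in 3.x:

```
import cv2

img = cv2.imread('battery.png', cv2.IMREAD_GRAYSCALE)   # hypothetical filename
_, thresh = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)                 # largest blob = the battery

M = cv2.moments(cnt)
cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']        # centroid from the moments

rect = cv2.minAreaRect(cnt)                              # ((cx, cy), (w, h), angle)
angle = rect[2]
print(cx, cy, angle)
```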
You can use the following code for reference:

```
import cv2
import imutils
import numpy as np

PIC_PATH = r"E:\temp\Battery.jpg"

image = cv2.imread(PIC_PATH)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

edged = cv2.Canny(gray, 100, 220)
kernel = np.ones((5,5),np.uint8)
closed = cv2.morphologyEx(edged, cv2.MORPH_CLOSE, kernel)

cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cv2.drawContours(image, cnts, -1, (0, 255, 0), 4)

cv2.imshow("Output", image)
cv2.waitKey(0)
```

The resulting picture is:

[![enter image description here](https://i.stack.imgur.com/Jgnmp.png)](https://i.stack.imgur.com/Jgnmp.png)
44,579,050
I am using Python and OpenCV. I am trying to find the center and angle of the batteries:

[Image of batteries with random angles:](https://i.stack.imgur.com/qB8S7.jpg)

[![enter image description here](https://i.stack.imgur.com/DgD8p.jpg)](https://i.stack.imgur.com/DgD8p.jpg)

The code that I have is this:

```
import cv2
import numpy as np

img = cv2.imread('image/baterias2.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
img2 = cv2.imread('image/baterias4.png',0)

minLineLength = 300
maxLineGap = 5

edges = cv2.Canny(img2,50,200)
cv2.imshow('Canny',edges)

lines = cv2.HoughLinesP(edges,1,np.pi/180,80,minLineLength,maxLineGap)
print lines
salida = np.zeros((img.shape[0],img.shape[1]))

for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(salida,(x1,y1),(x2,y2),(125,125,125),0)# rgb

cv2.imshow('final',salida)
cv2.imwrite('result/hough.jpg',img)
cv2.waitKey(0)
```

Any ideas to work it out?
2017/06/16
[ "https://Stackoverflow.com/questions/44579050", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8093906/" ]
Almost identical to [one of my other answers](https://stackoverflow.com/questions/43863931/contour-axis-for-image/43883758#43883758). PCA seems to work fine. ``` import cv2 import numpy as np img = cv2.imread("test_images/battery001.png") #load an image of a single battery img_gs = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to grayscale #inverted binary threshold: 1 for the battery, 0 for the background _, thresh = cv2.threshold(img_gs, 250, 1, cv2.THRESH_BINARY_INV) #From a matrix of pixels to a matrix of coordinates of non-black points. #(note: mind the col/row order, pixels are accessed as [row, col] #but when we draw, it's (x, y), so have to swap here or there) mat = np.argwhere(thresh != 0) #let's swap here... (e. g. [[row, col], ...] to [[col, row], ...]) mat[:, [0, 1]] = mat[:, [1, 0]] #or we could've swapped at the end, when drawing #(e. g. center[0], center[1] = center[1], center[0], same for endpoint1 and endpoint2), #probably better performance-wise mat = np.array(mat).astype(np.float32) #have to convert type for PCA #mean (e. g. the geometrical center) #and eigenvectors (e. g. directions of principal components) m, e = cv2.PCACompute(mat, mean = np.array([])) #now to draw: let's scale our primary axis by 100, #and the secondary by 50 center = tuple(m[0]) endpoint1 = tuple(m[0] + e[0]*100) endpoint2 = tuple(m[0] + e[1]*50) red_color = (0, 0, 255) cv2.circle(img, center, 5, red_color) cv2.line(img, center, endpoint1, red_color) cv2.line(img, center, endpoint2, red_color) cv2.imwrite("out.png", img) ``` [![enter image description here](https://i.stack.imgur.com/ufsce.png)](https://i.stack.imgur.com/ufsce.png)
* To find the center of an object, you can use [Moments](http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments). Threshold the image and get the contours of the object with [findContours](http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours). Compute the moments with `cv.Moments(arr, binary=0) → moments`. As `arr` you can pass the contours. Then the coordinates of the center are computed as `x = m10/m00` and `y = m01/m00`.
* To get the orientation, you can draw a minimum-area rectangle around the object and compute the angle between the longer side of the rectangle and a vertical line.
22,948,119
I am trying to make a program for my computer science class, which has us create a lottery game generator. The game has you input your numbers, then it generates winning tickets to match against your ticket. If you match 3 numbers it says you matched 3, 4 says 4, 5 says 5, and at 6 matches the program stops. My problem is that if you got a match of 6 on the first randomly generated set (highly unlikely, but possible), it wouldn't keep going until matches of 3, 4 and 5 have also occurred. I need it to match a set of 3, say so, then ignore any further matches of three and only worry about matching 4, 5 and 6.

```
from random import *
import random

def draw(): #return a list of six randomly picked numbers
    numbers=list(range(1,50))
    drawn=[]
    for n in range (6):
        x=randint(0,len(numbers)-1)
        no=numbers.pop(x)
        drawn.append(no)
    return drawn

a=int(input("What is your first number? (maximum of 49)"))
b=int(input("What is your second number? (different from 1)"))
c=int(input("What is your third number? (different from 1,2)"))
i=int(input("What is your fourth number?(different from 1,2,3)"))
e=int(input("What is your fifth number?(different from 1,2,3,4)"))
f=int(input("What is your sixth number?(different from 1,2,3,4,5)"))

def winner():
    ticket=[a,b,c,i,e,f]
    wins=0
    costs=0
    while True:
        costs=costs+1
        d=draw()
        matches=0
        for h in ticket:
            if h in d:
                matches=matches+1
        if matches==3:
            print ("You Matched 3 on try", costs)
        elif matches==4:
            print ("Cool! 4 matches on try", costs)
        elif matches==5:
            print ("Amazing!", costs, "trys for 5 matches!")
        elif matches==6:
            print ("Congratulations! you matched all 6 numbers on try", costs)
            return False
    draw()

winner()
```

One of my classmates made it use a while True statement for every matching pair, but that causes Python to hang while finding each matching set. I have no other ideas on how to stop the program from printing more than one match of the same size.
2014/04/08
[ "https://Stackoverflow.com/questions/22948119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3512754/" ]
``` from random import randint, sample # Ontario Lotto 6/49 prize schedule COST = 0.50 PRIZES = [0, 0, 0, 5., 50., 500., 1000000.] def draw(): return set(sample(range(1, 50), 6)) def get_ints(prompt): while True: try: return [int(i) for i in input(prompt).split()] except ValueError: pass def pick(): while True: nums = set(get_ints( "Please enter 6 numbers in [1..49], ie 3 4 17 22 44 47: " )) if len(nums) == 6 and 1 <= min(nums) and max(nums) <= 49: return nums def num_matched(picked): return len(picked & draw()) # set intersection def report(matches): total_cost = COST * sum(matches) total_won = sum(m*p for m,p in zip(matches, PRIZES)) net = total_won - total_cost # report on the results: print("\nYou won:") print( " nothing {:>8} times -> ${:>12.2f}" .format(sum(matches[:3]), 0.) ) for i in range(3, 7): print( " ${:>12.2f} {:>8} times -> ${:>12.2f}" .format(PRIZES[i], matches[i], PRIZES[i] * matches[i]) ) print( "\nYou paid ${:0.2f} to win ${:0.2f}, for a net result of ${:0.2f}." .format(total_cost, total_won, net) ) def main(): # pick a set of numbers picked = pick() # repeat until we have seen 3, 4, 5, and 6-ball matches matches = [0, 0, 0, 0, 0, 0, 0] while not all(matches[3:]): matches[num_matched(picked)] += 1 report(matches) if __name__=="__main__": main() ``` which results in ``` Please enter 6 numbers in [1..49], ie 3 4 17 22 44 47: 4 6 9 12 14 19 You won: nothing 10060703 times -> $ 0.00 $ 5.00 181218 times -> $ 906090.00 $ 50.00 9888 times -> $ 494400.00 $ 500.00 189 times -> $ 94500.00 $ 1000000.00 1 times -> $ 1000000.00 You paid $5125999.50 to win $2494990.00, for a net result of $-2631009.50. ```
Just keep a record of what you have seen:

```
...
costs=0
found = []
while True:
    ...
    if matches==3 and 3 not in found:
        found.append(3)
        print ("You Matched 3 on try", costs)
    elif matches==4 and 4 not in found:
        found.append(4)
        print ("Cool! 4 matches on try", costs)
    ...
    if set([3,4,5,6]).intersection(found) == set([3,4,5,6]):
        print "You Found em all!"
        return
```
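Pulling that idea together, a compact sketch of the whole loop, assuming the `draw()` function and the ticket variables from the question are already defined:

```
def winner():
    ticket = [a, b, c, i, e, f]
    found = set()
    tries = 0
    while found != {3, 4, 5, 6}:   # stop once 3-, 4-, 5- and 6-matches have all been seen
        tries += 1
        d = draw()
        matches = sum(h in d for h in ticket)
        if matches in (3, 4, 5, 6) and matches not in found:
            found.add(matches)
            print("Matched", matches, "numbers on try", tries)
```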
7,533,677
```
a.zip---
        -- b.txt
        -- c.txt
        -- d.txt
```

Of the methods to process zip files with Python, I could expand the zip file into a temporary directory and then process each txt file one by one. Here, I am more interested to know whether or not Python provides a way so that I don't have to manually expand the zip file, and can instead simply treat the zip file as a specialized folder and process each txt file accordingly.
2011/09/23
[ "https://Stackoverflow.com/questions/7533677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/391104/" ]
The [Python standard library](http://docs.python.org/library/zipfile.html) helps you here. Doug Hellmann writes very informative posts about selected modules: <https://pymotw.com/3/zipfile/>

To comment on David's post: from Python 2.7 on, the ZipFile object provides a context manager, so the recommended way would be:

```
import zipfile

with zipfile.ZipFile("zipfile.zip", "r") as f:
    for name in f.namelist():
        data = f.read(name)
        print name, len(data), repr(data[:10])
```

The `close` method will be called automatically because of the with statement. This is especially important if you write to the file.
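To treat the archive even more like a folder, each member can also be opened as a file object without reading it wholesale; a small Python 3 sketch, where `process_line` is a hypothetical per-line handler:

```
import io
import zipfile

def process_line(line):
    pass  # hypothetical handler for one line of text

with zipfile.ZipFile("a.zip") as zf:
    for name in zf.namelist():
        if name.endswith(".txt"):
            with zf.open(name) as member:  # binary file-like object
                for line in io.TextIOWrapper(member, encoding="utf-8"):
                    process_line(line)
```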
Yes you can process each file by itself. Take a look at the tutorial [here](http://effbot.org/librarybook/zipfile.htm). For your needs you can do something like this example from that tutorial: ``` import zipfile file = zipfile.ZipFile("zipfile.zip", "r") for name in file.namelist(): data = file.read(name) print name, len(data), repr(data[:10]) ``` This will iterate over each file in the archive and print out its name, length and the first 10 bytes. The comprehensive reference documentation is [here](http://docs.python.org/library/zipfile.html).
58,381,152
While training a neural network in Tensorflow 2.0 in Python, I'm noticing that training accuracy and loss change dramatically between epochs. I'm aware that the metrics printed are an average over the entire epoch, but accuracy seems to drop significantly after each epoch, despite the average always increasing. The loss also exhibits this behavior, dropping significantly each epoch while the average increases. Here is an image of what I mean (from Tensorboard):

[![strange training behavior](https://i.stack.imgur.com/fm6KK.png)](https://i.stack.imgur.com/fm6KK.png)

I've noticed this behavior on all of the models I've implemented myself, so it could be a bug, but I want a second opinion on whether this is normal behavior and, if so, what it means.

Also, I'm using a fairly large dataset (roughly 3 million examples). Batch size is 32 and each dot in the accuracy/loss graphs represents 50 batches (2k on the graph = 100k batches). The learning rate graph is 1:1 for batches.
2019/10/14
[ "https://Stackoverflow.com/questions/58381152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4607066/" ]
It seems this phenomenon comes from the fact that the model has a high batch-to-batch variance in terms of accuracy and loss. This is illustrated if I take a graph of the model with the actual metrics per step, as opposed to the average over the epoch:

[![enter image description here](https://i.stack.imgur.com/DRCUp.png)](https://i.stack.imgur.com/DRCUp.png)

Here you can see that the model can vary widely. (This graph is just for one epoch, but the point stands.)

Since the average metrics are reported per epoch, at the beginning of the next epoch it is highly likely that the average metrics will be lower than the previous average, leading to a dramatic drop in the running average value, illustrated in red below:

[![enter image description here](https://i.stack.imgur.com/zC105.png)](https://i.stack.imgur.com/zC105.png)

If you imagine the discontinuities in the red graph as epoch transitions, you can see why you would observe the phenomenon in the question.

TL;DR The model has a very high variance in its output with respect to each batch.
I have just recently experienced this kind of issue while working on a project about object localization. In my case, there were three **main** candidates.

* I used no shuffling in my training. That creates a loss jump after each epoch.
* I defined a new loss function that is calculated using IOU. It was something like:

```
def new_loss(y_true, y_pred):
    mse = tf.losses.mean_squared_error(y_true, y_pred)
    iou = calculate_iou(y_true, y_pred)
    return mse + (1 - iou)
```

I also suspected this loss might be a possible candidate for the increase in loss after each epoch. However, I was not able to replace it.

* I was using an *Adam* optimizer, so a possible thing to try is to change it and see how the training is affected.

Conclusion
==========

I changed *Adam* to *SGD* and shuffled my data in training. There is still a jump in the loss, but it is minimal compared to before the changes: my loss spike was ~0.3 before, and it became ~0.02.

**Note**: I should add that there are lots of discussions about this topic; I tried the possible solutions that were plausible candidates for my model.
59,439,124
The relatively new keras-tuner module for tensorflow-2 is causing the error 'Failed to create a NewWriteableFile'. The tuner.search function is working; it is only after the trial completes that the error is thrown. This is a tutorial from the sentdex Youtube channel. Here is the code:

```
from tensorflow import keras
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Activation, Flatten
from kerastuner.tuners import RandomSearch
from kerastuner.engine.hyperparameters import HyperParameters
import matplotlib.pyplot as plt
import time

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

x_train = x_train[:1000].reshape(-1, 28, 28, 1)
x_test = x_test[:100].reshape(-1, 28, 28, 1)
y_train = y_train[:1000]
y_test = y_test[:100]

# x_train = x_train.reshape(-1, 28, 28, 1)
# x_test = x_test.reshape(-1, 28, 28, 1)

LOG_DIR = f"{int(time.time())}"

def build_model(hp):
    model = keras.models.Sequential()

    model.add(Conv2D(hp.Int("layer1_channels", min_value=32, max_value=256, step=32),
                     (3,3), input_shape=x_train.shape[1:]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))

    for i in range(hp.Int("n_layers", 1, 4)):
        model.add(Conv2D(hp.Int(f"conv_{i}_channels", min_value=32, max_value=256, step=32), (3,3)))

    model.add(Flatten())
    model.add(Dense(10))
    model.add(Activation('softmax'))

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

tuner = RandomSearch(build_model,
                     objective = "val_accuracy",
                     max_trials = 1,
                     executions_per_trial = 1,
                     directory = LOG_DIR,
                     project_name = 'junk')

tuner.search(x_train,
             y_train,
             epochs=1,
             batch_size=64,
             validation_data=(x_test, y_test))
```

This is the traceback printout:

```
(tf_2.0) C:\Users\redex\OneDrive\Documents\Education\Sentdex Tutorials\Keras-Tuner>C:/Users/redex/Anaconda3/envs/tf_2.0/python.exe "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py"
2019-12-21 10:07:47.556531: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: AVX AVX2
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-12-21 10:07:47.574699: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.
Train on 1000 samples, validate on 100 samples
 960/1000 [===========================>..] - ETA: 0s - loss: 64.0616 - accuracy: 0.2844
2019-12-21 10:07:55.080024: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Not found: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified.
; No such process
Traceback (most recent call last):
  File "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py", line 65, in <module>
    validation_data=(x_test, y_test))
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\base_tuner.py", line 122, in search
    self.run_trial(trial, *fit_args, **fit_kwargs)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\multi_execution_tuner.py", line 95, in run_trial
    history = model.fit(*fit_args, **fit_kwargs, callbacks=callbacks)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 372, in fit
    prefix='val_')
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 685, in on_epoch
    self.callbacks.on_epoch_end(epoch, epoch_logs)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 298, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 965, in on_epoch_end
    self._save_model(epoch=epoch, logs=logs)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 999, in _save_model
    self.model.save_weights(filepath, overwrite=True)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1090, in save_weights
    self._trackable_saver.save(filepath, session=session)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1155, in save
    file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1103, in _save_cached_when_graph_building
    save_op = saver.save(file_prefix)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 230, in save
    sharded_saves.append(saver.save(shard_prefix))
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 72, in save
    return io_ops.save_v2(file_prefix, tensor_names, tensor_slices, tensors)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1932, in save_v2
    ctx=_ctx)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1969, in save_v2_eager_fallback
    ctx=_ctx, name=name)
  File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified.
; No such process [Op:SaveV2]
```

My machine is Windows 10. The keras-tuner documentation specifies Tensorflow 2.0 and Python 3.6, but I'm using 3.7.4; I presume more recent is OK. I'm no software expert, so this is about all I know; any help is appreciated.
2019/12/21
[ "https://Stackoverflow.com/questions/59439124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9447938/" ]
I had a similar problem while using kerastuner on Windows and I've solved it: 1. The first issue is that the path to the log directory may be too long; I had to shorten it. 2. The second problem is that Python (or TF) doesn't work on Windows with mixed slashes, and kerastuner builds the path with backslashes, so the directory you pass in should use backslashes as well. I've done this with the os.path.normpath() method: ``` tuner=RandomSearch(build_model,objective='val_accuracy',max_trials=10,directory=os.path.normpath('C:/')) tuner.search(x_train,y_train,batch_size=256,epochs=30,validation_split=0.2,verbose=1) ``` Now I don't receive this error.
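To apply both fixes to the tutorial code in the question, a minimal sketch (the short `C:\kt` base folder is just an illustrative choice, not something kerastuner requires) could look like this:

```
import os
import time

# Short base folder plus a timestamp; on Windows, os.path.normpath
# collapses mixed forward/backward slashes into backslashes.
LOG_DIR = os.path.normpath(os.path.join("C:/kt", str(int(time.time()))))
# e.g. 'C:\\kt\\1576951667' -- then pass directory=LOG_DIR to RandomSearch
```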
The problem, it would appear, is a Windows issue. Running the same code in a Linux environment produced no error of this kind.
59,439,124
The relatively new keras-tuner module for tensorflow-2 is causing the error 'Failed to create a NewWriteableFile'. The tuner.search function is working, it is only after the trial completes that the error is thrown. This is a tutorial from the sentdex Youtube channel. Here is the code: ``` from tensorflow import keras from tensorflow.keras.datasets import fashion_mnist from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Activation, Flatten from kerastuner.tuners import RandomSearch from kerastuner.engine.hyperparameters import HyperParameters import matplotlib.pyplot as plt import time (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() x_train = x_train[:1000].reshape(-1, 28, 28, 1) x_test = x_test[:100].reshape(-1, 28, 28, 1) y_train = y_train[:1000] y_test = y_test[:100] # x_train = x_train.reshape(-1, 28, 28, 1) # x_test = x_test.reshape(-1, 28, 28, 1) LOG_DIR = f"{int(time.time())}" def build_model(hp): model = keras.models.Sequential() model.add(Conv2D(hp.Int("layer1_channels", min_value=32, max_value=256, step=32), (3,3), input_shape=x_train.shape[1:])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) for i in range(hp.Int("n_layers", 1, 4)): model.add(Conv2D(hp.Int(f"conv_{i}_channels", min_value=32, max_value=256, step=32), (3,3))) model.add(Flatten()) model.add(Dense(10)) model.add(Activation('softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model tuner = RandomSearch(build_model, objective = "val_accuracy", max_trials = 1, executions_per_trial = 1, directory = LOG_DIR, project_name = 'junk') tuner.search(x_train, y_train, epochs=1, batch_size=64, validation_data=(x_test, y_test)) ``` This is the traceback printout: ``` (tf_2.0) C:\Users\redex\OneDrive\Documents\Education\Sentdex Tutorials\Keras-Tuner>C:/Users/redex/Anaconda3/envs/tf_2.0/python.exe "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py" 2019-12-21 10:07:47.556531: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: AVX AVX2 To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags. 2019-12-21 10:07:47.574699: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance. Train on 1000 samples, validate on 100 samples 960/1000 [===========================>..] - ETA: 0s - loss: 64.0616 - accuracy: 0.2844 2019-12-21 10:07:55.080024: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Not found: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified. 
; No such process Traceback (most recent call last): File "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py", line 65, in <module> validation_data=(x_test, y_test)) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\base_tuner.py", line 122, in search self.run_trial(trial, *fit_args, **fit_kwargs) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\multi_execution_tuner.py", line 95, in run_trial history = model.fit(*fit_args, **fit_kwargs, callbacks=callbacks) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 728, in fit use_multiprocessing=use_multiprocessing) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 372, in fit prefix='val_') File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\contextlib.py", line 119, in __exit__ next(self.gen) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 685, in on_epoch self.callbacks.on_epoch_end(epoch, epoch_logs) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 298, in on_epoch_end callback.on_epoch_end(epoch, logs) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 965, in on_epoch_end self._save_model(epoch=epoch, logs=logs) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 999, in _save_model self.model.save_weights(filepath, overwrite=True) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1090, in save_weights self._trackable_saver.save(filepath, session=session) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1155, in save file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1103, in _save_cached_when_graph_building save_op = saver.save(file_prefix) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 230, in save sharded_saves.append(saver.save(shard_prefix)) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 72, in save return io_ops.save_v2(file_prefix, tensor_names, tensor_slices, tensors) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1932, in save_v2 ctx=_ctx) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1969, in save_v2_eager_fallback ctx=_ctx, name=name) File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified. 
; No such process [Op:SaveV2] ``` My machine is Windows 10 The keras-tuner documentation specifies Tensorflow 2.0 and Python 3.6 but I'm using 3.7.4. I presume more recent is OK. I'm no software expert so this is about all I know, any help is appreciated.
2019/12/21
[ "https://Stackoverflow.com/questions/59439124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9447938/" ]
I had a similar problem while using kerastuner on Windows and I've solved it: 1. The first issue is that the path to the log directory may be too long; I had to shorten it. 2. The second problem is that Python (or TF) doesn't work on Windows with mixed slashes, and kerastuner builds the path with backslashes, so the directory you pass in should use backslashes as well. I've done this with the os.path.normpath() method: ``` tuner=RandomSearch(build_model,objective='val_accuracy',max_trials=10,directory=os.path.normpath('C:/')) tuner.search(x_train,y_train,batch_size=256,epochs=30,validation_split=0.2,verbose=1) ``` Now I don't receive this error.
In my case, the path exceeded the maximum path length in Windows, because the path generated by Keras Tuner was already about 170 characters long. After I made my folder name shorter, it worked normally.
54,547,986
I couldn't figure out why I'm getting a `NameError` when trying to access a function inside the class. This is the code I am having a problem with. Am I missing something? ``` class ArmstrongNumber: def cubesum(num): return sum([int(i)**3 for i in list(str(num))]) def PrintArmstrong(num): if cubesum(num) == num: return "Armstrong Number" return "Not an Armstrong Number" def Armstrong(num): if cubesum(num) == num: return True return False [i for i in range(1000) if ArmstrongNumber.Armstrong(i)] # this return NameError ``` Error-message: ``` NameError Traceback (most recent call last) <ipython-input-32-f3d39f24a48c> in <module> ----> 1 ArmstrongNumber.Armstrong(153) <ipython-input-31-fd21586166ed> in Armstrong(num) 10 11 def Armstrong(num): ---> 12 if cubesum(num) == num: 13 return True 14 return False NameError: name 'cubesum' is not defined ```
2019/02/06
[ "https://Stackoverflow.com/questions/54547986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10361602/" ]
Use the `classname` before the method: ``` class ArmstrongNumber: def cubesum(num): return sum([int(i)**3 for i in list(str(num))]) def PrintArmstrong(num): if ArmstrongNumber.cubesum(num) == num: return "Armstrong Number" return "Not an Armstrong Number" def Armstrong(num): if ArmstrongNumber.cubesum(num) == num: return True return False print([i for i in range(1000) if ArmstrongNumber.Armstrong(i)]) ``` Unless you pass `self` to the functions, those functions are not `instance methods`. Even if you define them within the class, you still need to access them using the `classname`.
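As a hypothetical alternative (not part of the answer above), you could mark the methods as static so the class name isn't hard-coded at every call site; a minimal sketch:

```
class ArmstrongNumber:
    @staticmethod
    def cubesum(num):
        return sum(int(i) ** 3 for i in str(num))

    @staticmethod
    def Armstrong(num):
        # Static methods can call each other through the class name,
        # and also work when invoked on an instance or a subclass.
        return ArmstrongNumber.cubesum(num) == num

print([i for i in range(1000) if ArmstrongNumber.Armstrong(i)])
# [0, 1, 153, 370, 371, 407]
```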
This should be your actual solution if you really want to use a class: ``` class ArmstrongNumber(object): def cubesum(self, num): return sum([int(i)**3 for i in list(str(num))]) def PrintArmstrong(self, num): if self.cubesum(num) == num: return "Armstrong Number" return "Not an Armstrong Number" def Armstrong(self, num): if self.cubesum(num) == num: return True return False a = ArmstrongNumber() print([i for i in range(1000) if a.Armstrong(i)]) ``` output ``` [0, 1, 153, 370, 371, 407] ``` --- **2nd method:** if you don't want to use a class, then use plain module-level functions like this: ``` def cubesum(num): return sum([int(i)**3 for i in list(str(num))]) def PrintArmstrong(num): if cubesum(num) == num: return "Armstrong Number" return "Not an Armstrong Number" def Armstrong(num): if cubesum(num) == num: return True return False # a = ArmstrongNumber() print([i for i in range(1000) if Armstrong(i)]) ```
32,562,253
I am trying to run multiple calculations with UI using Tkinter in python where i have to display all the outputs for all the calculations. The problem is, the output for the first calculation is fine but the outputs for further calculations seems to be calculated out of default values. I came to know that i should destroy first label in order to output the second calculation, but when i try to destroy my first label, i could not. The code i tried is as follows: ``` from tkinter import * def funcname(): #My calculations GMT = GMT_user.get() lat = lat_deg_user.get() E = GMT * 365 Eqntime_label.configure(text=E) Elevation = E/lat Elevation_label.configure(text=Elevation) nUI_pgm = Tk() GMT_user = DoubleVar() lat_deg_user = DoubleVar() nlabel_time = Label(text = "Enter time in accordance to GMT in decimal").pack() nEntry_time = Entry(nUI_pgm, textvariable = GMT_user).pack() nlabel_Long = Label(text = "Enter Longitude in Decimal Degrees").pack() nEntry_Long = Entry(nUI_pgm, textvariable = lat_deg_user).pack() nbutton = Button(nUI_pgm, text = "Calculate", command = funcname).pack() #Displaying results nlabel_E = Label (text = "The Equation of Time is").pack() Eqntime_label = Label(nUI_pgm, text="") Eqntime_label.pack() #when i try Eqntime_label.destroy() # this doesn't work nlabel_Elevation = Label(text = "The Elevation of the sun is").pack() Elevation_label = Label(nUI_pgm, text="") Elevation_label.pack() nUI_pgm.mainloop() ``` Here I have to destroy the Eqntime\_label after the result is displayed in order to output Elevation\_label too. What should i do??
2015/09/14
[ "https://Stackoverflow.com/questions/32562253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5325381/" ]
Temporary tables always get created in TempDb. However, the size of TempDb is not necessarily due to temporary tables alone. TempDb is used in various ways: 1. Internal objects (sort & spool, CTEs, index rebuilds, hash joins, etc.) 2. User objects (temporary tables, table variables) 3. Version store (AFTER/INSTEAD OF triggers, MARS) Since it is used by various SQL operations, its size can grow for other reasons as well. You can check what is causing TempDb to grow with the query below: ``` SELECT SUM (user_object_reserved_page_count)*8 as usr_obj_kb, SUM (internal_object_reserved_page_count)*8 as internal_obj_kb, SUM (version_store_reserved_page_count)*8 as version_store_kb, SUM (unallocated_extent_page_count)*8 as freespace_kb, SUM (mixed_extent_page_count)*8 as mixedextent_kb FROM sys.dm_db_file_space_usage ``` If the above query shows, * a higher number of user objects, it means there is heavier usage of temp tables, cursors or table variables * a higher number of internal objects, it indicates that query plans are making heavy use of TempDb (e.g. sorting, GROUP BY, hash joins) * a higher number of version store pages, it indicates long-running transactions or high transaction throughput Based on that you can configure the TempDb file size. I've written an article recently about TempDB configuration best practices. You can read that [here](http://social.technet.microsoft.com/wiki/contents/articles/31353.sql-server-demystifying-tempdb-and-recommendations.aspx)
Perhaps you can use the following SQL command on the tempdb files separately: ``` DBCC SHRINKFILE ``` Please refer to <https://support.microsoft.com/en-us/kb/307487> for more information.
5,440,550
The sample application on the android developers site validates the purchase json using java code. Has anybody had any luck working out how to validate the purchase in python. In particular in GAE? The following are the relevant excerpts from the android in-app billing [example program](http://developer.android.com/guide/market/billing/billing_integrate.html#billing-download). This is what would need to be converted to python using [PyCrypto](http://www.dlitz.net/software/pycrypto/) which was re-written to be completely python by Google and is the only Security lib available on app engine. Hopefully Google is cool with me using the excerpts below. ``` private static final String KEY_FACTORY_ALGORITHM = "RSA"; private static final String SIGNATURE_ALGORITHM = "SHA1withRSA"; String base64EncodedPublicKey = "your public key here"; PublicKey key = Security.generatePublicKey(base64EncodedPublicKey); verified = Security.verify(key, signedData, signature); public static PublicKey generatePublicKey(String encodedPublicKey) { try { byte[] decodedKey = Base64.decode(encodedPublicKey); KeyFactory keyFactory = KeyFactory.getInstance(KEY_FACTORY_ALGORITHM); return keyFactory.generatePublic(new X509EncodedKeySpec(decodedKey)); } catch ... } } public static boolean verify(PublicKey publicKey, String signedData, String signature) { if (Consts.DEBUG) { Log.i(TAG, "signature: " + signature); } Signature sig; try { sig = Signature.getInstance(SIGNATURE_ALGORITHM); sig.initVerify(publicKey); sig.update(signedData.getBytes()); if (!sig.verify(Base64.decode(signature))) { Log.e(TAG, "Signature verification failed."); return false; } return true; } catch ... } return false; } ```
2011/03/26
[ "https://Stackoverflow.com/questions/5440550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/677760/" ]
Here's how I did it: ``` from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 from base64 import b64decode def chunks(s, n): for start in range(0, len(s), n): yield s[start:start+n] def pem_format(key): return '\n'.join([ '-----BEGIN PUBLIC KEY-----', '\n'.join(chunks(key, 64)), '-----END PUBLIC KEY-----' ]) def validate_purchase(publicKey, signedData, signature): key = RSA.importKey(pem_format(publicKey)) verifier = PKCS1_v1_5.new(key) data = SHA.new(signedData) sig = b64decode(signature) return verifier.verify(data, sig) ``` This assumes that `publicKey` is your base64 encoded Google Play Store key on one line, as you get it from the Developer Console. For people who'd rather use m2crypto, `validate_purchase()` would change to: ``` from M2Crypto import RSA, BIO, EVP from base64 import b64decode # pem_format() as above def validate_purchase(publicKey, signedData, signature): bio = BIO.MemoryBuffer(pem_format(publicKey)) rsa = RSA.load_pub_key_bio(bio) key = EVP.PKey() key.assign_rsa(rsa) key.verify_init() key.verify_update(signedData) return key.verify_final(b64decode(signature)) == 1 ```
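A hedged usage sketch — the key, JSON payload, and signature below are placeholders, not real Play Store data:

```
# All three arguments are hypothetical stand-ins for real billing data.
PLAY_STORE_KEY = "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A..."  # from the Developer Console
signed_data = '{"orderId": "some-order-id", "productId": "premium"}'
signature = "base64-signature-from-the-billing-response"

if validate_purchase(PLAY_STORE_KEY, signed_data, signature):
    print("purchase is genuine")
else:
    print("signature check failed")
```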
I finally figured out that your base64 encoded public key from Google Play is an X.509 subjectPublicKeyInfo DER SEQUENCE, and that the signature scheme is RSASSA-PKCS1-v1\_5 and not RSASSA-PSS. If you have [PyCrypto](https://www.dlitz.net/software/pycrypto/) installed, it's actually quite easy: ``` import base64 from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 # Your base64 encoded public key from Google Play. _PUBLIC_KEY_BASE64 = "YOUR_BASE64_PUBLIC_KEY_HERE" # Key from Google Play is a X.509 subjectPublicKeyInfo DER SEQUENCE. _PUBLIC_KEY = RSA.importKey(base64.standard_b64decode(_PUBLIC_KEY_BASE64)) def verify(signed_data, signature_base64): """Returns whether the given data was signed with the private key.""" h = SHA.new() h.update(signed_data) # Scheme is RSASSA-PKCS1-v1_5. verifier = PKCS1_v1_5.new(_PUBLIC_KEY) # The signature is base64 encoded. signature = base64.standard_b64decode(signature_base64) return verifier.verify(h, signature) ```
5,440,550
The sample application on the android developers site validates the purchase json using java code. Has anybody had any luck working out how to validate the purchase in python. In particular in GAE? The following are the relevant excerpts from the android in-app billing [example program](http://developer.android.com/guide/market/billing/billing_integrate.html#billing-download). This is what would need to be converted to python using [PyCrypto](http://www.dlitz.net/software/pycrypto/) which was re-written to be completely python by Google and is the only Security lib available on app engine. Hopefully Google is cool with me using the excerpts below. ``` private static final String KEY_FACTORY_ALGORITHM = "RSA"; private static final String SIGNATURE_ALGORITHM = "SHA1withRSA"; String base64EncodedPublicKey = "your public key here"; PublicKey key = Security.generatePublicKey(base64EncodedPublicKey); verified = Security.verify(key, signedData, signature); public static PublicKey generatePublicKey(String encodedPublicKey) { try { byte[] decodedKey = Base64.decode(encodedPublicKey); KeyFactory keyFactory = KeyFactory.getInstance(KEY_FACTORY_ALGORITHM); return keyFactory.generatePublic(new X509EncodedKeySpec(decodedKey)); } catch ... } } public static boolean verify(PublicKey publicKey, String signedData, String signature) { if (Consts.DEBUG) { Log.i(TAG, "signature: " + signature); } Signature sig; try { sig = Signature.getInstance(SIGNATURE_ALGORITHM); sig.initVerify(publicKey); sig.update(signedData.getBytes()); if (!sig.verify(Base64.decode(signature))) { Log.e(TAG, "Signature verification failed."); return false; } return true; } catch ... } return false; } ```
2011/03/26
[ "https://Stackoverflow.com/questions/5440550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/677760/" ]
Here's how I did it: ``` from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 from base64 import b64decode def chunks(s, n): for start in range(0, len(s), n): yield s[start:start+n] def pem_format(key): return '\n'.join([ '-----BEGIN PUBLIC KEY-----', '\n'.join(chunks(key, 64)), '-----END PUBLIC KEY-----' ]) def validate_purchase(publicKey, signedData, signature): key = RSA.importKey(pem_format(publicKey)) verifier = PKCS1_v1_5.new(key) data = SHA.new(signedData) sig = b64decode(signature) return verifier.verify(data, sig) ``` This assumes that `publicKey` is your base64 encoded Google Play Store key on one line, as you get it from the Developer Console. For people who'd rather use m2crypto, `validate_purchase()` would change to: ``` from M2Crypto import RSA, BIO, EVP from base64 import b64decode # pem_format() as above def validate_purchase(publicKey, signedData, signature): bio = BIO.MemoryBuffer(pem_format(publicKey)) rsa = RSA.load_pub_key_bio(bio) key = EVP.PKey() key.assign_rsa(rsa) key.verify_init() key.verify_update(signedData) return key.verify_final(b64decode(signature)) == 1 ```
Now that we're in 2016, here's how to do it with `cryptography`: ``` import base64 import binascii from cryptography.exceptions import InvalidSignature from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes, serialization from cryptography.hazmat.primitives.asymmetric import padding class RSAwithSHA1: def __init__(self, public_key): # the public key google gives you is in DER encoding # let cryptography handle it for you self.public_key = serialization.load_der_public_key( base64.b64decode(public_key), backend=default_backend() ) def verify(self, data, signature): """ :param str data: purchase data :param str signature: data signature :return: True signature verification passes or False otherwise """ # note the signature is base64 encoded signature = base64.b64decode(signature.encode()) # as per https://developer.android.com/google/play/billing/billing_reference.html # the signature uses "the RSASSA-PKCS1-v1_5 scheme" verifier = self.public_key.verifier( signature, padding.PKCS1v15(), hashes.SHA1(), ) verifier.update(data.encode()) try: verifier.verify() except InvalidSignature: return False else: return True ```
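A note for later readers, stated as an assumption about newer library versions: the `verifier()` context shown above was deprecated in `cryptography` 2.0 and removed in later releases; on current versions the equivalent check is a single `verify()` call that raises `InvalidSignature` on failure. A sketch under that assumption:

```
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify(public_key, data, signature):
    # public_key is the object returned by load_der_public_key above;
    # signature is the already base64-decoded byte string.
    try:
        public_key.verify(signature, data.encode(),
                          padding.PKCS1v15(), hashes.SHA1())
        return True
    except InvalidSignature:
        return False
```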
5,440,550
The sample application on the android developers site validates the purchase json using java code. Has anybody had any luck working out how to validate the purchase in python. In particular in GAE? The following are the relevant excerpts from the android in-app billing [example program](http://developer.android.com/guide/market/billing/billing_integrate.html#billing-download). This is what would need to be converted to python using [PyCrypto](http://www.dlitz.net/software/pycrypto/) which was re-written to be completely python by Google and is the only Security lib available on app engine. Hopefully Google is cool with me using the excerpts below. ``` private static final String KEY_FACTORY_ALGORITHM = "RSA"; private static final String SIGNATURE_ALGORITHM = "SHA1withRSA"; String base64EncodedPublicKey = "your public key here"; PublicKey key = Security.generatePublicKey(base64EncodedPublicKey); verified = Security.verify(key, signedData, signature); public static PublicKey generatePublicKey(String encodedPublicKey) { try { byte[] decodedKey = Base64.decode(encodedPublicKey); KeyFactory keyFactory = KeyFactory.getInstance(KEY_FACTORY_ALGORITHM); return keyFactory.generatePublic(new X509EncodedKeySpec(decodedKey)); } catch ... } } public static boolean verify(PublicKey publicKey, String signedData, String signature) { if (Consts.DEBUG) { Log.i(TAG, "signature: " + signature); } Signature sig; try { sig = Signature.getInstance(SIGNATURE_ALGORITHM); sig.initVerify(publicKey); sig.update(signedData.getBytes()); if (!sig.verify(Base64.decode(signature))) { Log.e(TAG, "Signature verification failed."); return false; } return true; } catch ... } return false; } ```
2011/03/26
[ "https://Stackoverflow.com/questions/5440550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/677760/" ]
I finally figured out that your base64 encoded public key from Google Play is an X.509 subjectPublicKeyInfo DER SEQUENCE, and that the signature scheme is RSASSA-PKCS1-v1\_5 and not RSASSA-PSS. If you have [PyCrypto](https://www.dlitz.net/software/pycrypto/) installed, it's actually quite easy: ``` import base64 from Crypto.Hash import SHA from Crypto.PublicKey import RSA from Crypto.Signature import PKCS1_v1_5 # Your base64 encoded public key from Google Play. _PUBLIC_KEY_BASE64 = "YOUR_BASE64_PUBLIC_KEY_HERE" # Key from Google Play is a X.509 subjectPublicKeyInfo DER SEQUENCE. _PUBLIC_KEY = RSA.importKey(base64.standard_b64decode(_PUBLIC_KEY_BASE64)) def verify(signed_data, signature_base64): """Returns whether the given data was signed with the private key.""" h = SHA.new() h.update(signed_data) # Scheme is RSASSA-PKCS1-v1_5. verifier = PKCS1_v1_5.new(_PUBLIC_KEY) # The signature is base64 encoded. signature = base64.standard_b64decode(signature_base64) return verifier.verify(h, signature) ```
Now that we're in 2016, here's how to do it with `cryptography`: ``` import base64 import binascii from cryptography.exceptions import InvalidSignature from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes, serialization from cryptography.hazmat.primitives.asymmetric import padding class RSAwithSHA1: def __init__(self, public_key): # the public key google gives you is in DER encoding # let cryptography handle it for you self.public_key = serialization.load_der_public_key( base64.b64decode(public_key), backend=default_backend() ) def verify(self, data, signature): """ :param str data: purchase data :param str signature: data signature :return: True signature verification passes or False otherwise """ # note the signature is base64 encoded signature = base64.b64decode(signature.encode()) # as per https://developer.android.com/google/play/billing/billing_reference.html # the signature uses "the RSASSA-PKCS1-v1_5 scheme" verifier = self.public_key.verifier( signature, padding.PKCS1v15(), hashes.SHA1(), ) verifier.update(data.encode()) try: verifier.verify() except InvalidSignature: return False else: return True ```
44,972,219
I’m pretty new to Python and I just start to understand the basics. I’m trying to run a script in a loop to check the temperatures and if the outside temp getting higher than inside or the opposite, the function should print it once and continue to check every 5 seconds, for changed state. I found a similar [questions](https://stackoverflow.com/questions/38001105/python-print-only-one-time-inside-a-loop) what was very helpful but if I execute the code it print the outside temp is higher, next I heat up the inside sensor and it prints that it is inside higher, all good except that it doesn’t continue, the loop works but it doesn’t recognize the next change of state. . ``` import RPi.GPIO as GPIO import time sensor_name_0 = "test" printed_out = False printed_in = False try: while True: if sensor_name_0: sensor_0 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[0] sensor_1 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[1] sensorpath = "/sys/bus/w1/devices/" sensorfile = "/w1_slave" def callsensor_0(sensor_0): f = open(sensorpath + sensor_0 + sensorfile, 'r') lines = f.readlines() f.close() temp_line = lines[1].find('t=') temp_output = lines[1].strip() [temp_line+2:] temp_celsius = float(temp_output) / 1000 return temp_celsius def callsensor_1(sensor_1): f = open(sensorpath + sensor_1 + sensorfile, 'r') lines = f.readlines() f.close() temp_line = lines[1].find('t=') temp_output = lines[1].strip() [temp_line+2:] temp_celsius = float(temp_output) / 1000 return temp_celsius outside = (str('%.1f' % float(callsensor_0(sensor_0))).rstrip('0').rstrip('.')) inside = (str('%.1f' % float(callsensor_1(sensor_1))).rstrip('0').rstrip('.')) print "loop" if outside > inside and not printed_out: printed_out = True print "outside is higher then inside" print outside if outside < inside and not printed_in: printed_in = True print "inside is higher then outside" print inside time.sleep(5) except KeyboardInterrupt: print('interrupted!') ```
2017/07/07
[ "https://Stackoverflow.com/questions/44972219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8271048/" ]
It's been a long time, but for future readers I thought I'd share some info. There is a good article that explains `getItemLayout`; please find it [here](https://medium.com/@jsoendermann/sectionlist-and-getitemlayout-2293b0b916fb) I also ran into `data[index]` being `undefined`. The reason is that `index` is calculated considering `section.data.length + 2` (1 for the section header and 1 for the section footer); you can find the code [here (RN-52)](https://github.com/facebook/react-native/blob/0.52-stable/Libraries/Lists/VirtualizedSectionList.js#L334). With `SectionList` we have to be very careful while processing `index`.
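To make that arithmetic concrete, here is a purely illustrative sketch (in Python, not the actual React Native source) of how a flat `index` is consumed when every section occupies `len(data) + 2` slots — which is exactly why `data[index]` can come back `undefined` for header/footer slots:

```
def locate(sections, index):
    # Map a flat VirtualizedSectionList-style index to (section, item).
    for s, data in enumerate(sections):
        span = len(data) + 2              # +1 header, +1 footer
        if index < span:
            if index == 0:
                return (s, "header")
            if index == span - 1:
                return (s, "footer")
            return (s, data[index - 1])   # item rows are shifted by the header
        index -= span
    raise IndexError("index past the last section")

print([locate([["a", "b"], ["c"]], i) for i in range(7)])
# [(0, 'header'), (0, 'a'), (0, 'b'), (0, 'footer'),
#  (1, 'header'), (1, 'c'), (1, 'footer')]
```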
For some reason the `react-native-get-item-layout` package kept crashing with `"height: <<NaN>>"`, so I had to write my [own RN SectionList getItemLayout](https://npmjs.com/package/sectionlist-get-itemlayout). It uses the same interface as the former, and like that package it is also `O(n)`.
44,972,219
I’m pretty new to Python and I just start to understand the basics. I’m trying to run a script in a loop to check the temperatures and if the outside temp getting higher than inside or the opposite, the function should print it once and continue to check every 5 seconds, for changed state. I found a similar [questions](https://stackoverflow.com/questions/38001105/python-print-only-one-time-inside-a-loop) what was very helpful but if I execute the code it print the outside temp is higher, next I heat up the inside sensor and it prints that it is inside higher, all good except that it doesn’t continue, the loop works but it doesn’t recognize the next change of state. . ``` import RPi.GPIO as GPIO import time sensor_name_0 = "test" printed_out = False printed_in = False try: while True: if sensor_name_0: sensor_0 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[0] sensor_1 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[1] sensorpath = "/sys/bus/w1/devices/" sensorfile = "/w1_slave" def callsensor_0(sensor_0): f = open(sensorpath + sensor_0 + sensorfile, 'r') lines = f.readlines() f.close() temp_line = lines[1].find('t=') temp_output = lines[1].strip() [temp_line+2:] temp_celsius = float(temp_output) / 1000 return temp_celsius def callsensor_1(sensor_1): f = open(sensorpath + sensor_1 + sensorfile, 'r') lines = f.readlines() f.close() temp_line = lines[1].find('t=') temp_output = lines[1].strip() [temp_line+2:] temp_celsius = float(temp_output) / 1000 return temp_celsius outside = (str('%.1f' % float(callsensor_0(sensor_0))).rstrip('0').rstrip('.')) inside = (str('%.1f' % float(callsensor_1(sensor_1))).rstrip('0').rstrip('.')) print "loop" if outside > inside and not printed_out: printed_out = True print "outside is higher then inside" print outside if outside < inside and not printed_in: printed_in = True print "inside is higher then outside" print inside time.sleep(5) except KeyboardInterrupt: print('interrupted!') ```
2017/07/07
[ "https://Stackoverflow.com/questions/44972219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8271048/" ]
Similar to uxxi, we ended up writing our own implementation based on `react-native-get-item-layout`, but there is an important distinction to make. Wiring either method to the SectionList `getItemLayout` parameter re-executes the same iterations over the data for every item being rendered. The overhead this adds is substantial, and after correcting for it there is a significant performance improvement. Essentially, the key is to calculate your offset data whenever the data provided to the component changes its shape, and then query that precomputed data to obtain the offset for `getItemLayout`. This results in a single iteration per data change versus endless iterations per interaction with the list.
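The caching idea itself is language-agnostic; here is a purely illustrative Python sketch of it (not actual React Native code, and with hypothetical fixed heights): rebuild all offsets once per data change, then answer each lookup in O(1).

```
def build_offsets(sections, header_h, footer_h, row_h):
    # One pass over the data: precompute {length, offset, index} per slot.
    offsets, y, index = [], 0, 0
    for data in sections:
        for h in [header_h] + [row_h] * len(data) + [footer_h]:
            offsets.append({"length": h, "offset": y, "index": index})
            y += h
            index += 1
    return offsets

offsets = build_offsets([["a", "b"], ["c"]], header_h=30, footer_h=10, row_h=44)
get_item_layout = lambda index: offsets[index]   # O(1) per rendered item
print(get_item_layout(1))  # {'length': 44, 'offset': 30, 'index': 1}
```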
For some reason the `react-native-get-item-layout` package kept crashing with `"height: <<NaN>>"`, so I had to write my [own RN SectionList getItemLayout](https://npmjs.com/package/sectionlist-get-itemlayout). It uses the same interface as the former, and like that package it is also `O(n)`.
9,866,923
I'm trying to solve this newbie puzzle: I've created this function: ``` def bucket_loop(htable, key): bucket = hashtable_get_bucket(htable, key) for entry in bucket: if entry[0] == key: return entry[1] return None ``` And I have to call it in two other functions (bellow) in the following way: to change the value of the element entry[1] or to append to this list (entry) a new element. But I can't do that calling the function bucket\_loop the way I did because **"you can't assign to function call"** (assigning to a function call is illegal in Python). What is the alternative (most similar to the code I wrote) to do this (bucket\_loop(htable, key) = value and hashtable\_get\_bucket(htable, key).append([key, value]))? ``` def hashtable_update(htable, key, value): if bucket_loop(htable, key) != None: bucket_loop(htable, key) = value else: hashtable_get_bucket(htable, key).append([key, value]) def hashtable_lookup(htable, key): return bucket_loop(htable, key) ``` Thanks, in advance, for any help! This is the rest of the code to make this script works: ``` def make_hashtable(size): table = [] for unused in range(0, size): table.append([]) return table def hash_string(s, size): h = 0 for c in s: h = h + ord(c) return h % size def hashtable_get_bucket(htable, key): return htable[hash_string(key, len(htable))] ``` Similar question (but didn't help me): [SyntaxError: "can't assign to function call"](https://stackoverflow.com/questions/5964927/python-cannot-assign-function-call)
2012/03/26
[ "https://Stackoverflow.com/questions/9866923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/955883/" ]
In general, there are three things you can do: 1. Write “setter” functions (ex, `bucket_set`) 2. Return mutable values (ex, `bucket_get(table, key).append(42)` if the value is a `list`) 3. Use a class which overrides `__getitem__` and `__setitem__` For example, you could have a class like: ``` class Bucket(object): def __setitem__(self, key, value): # … implementation … def __getitem__(self, key): # … implementation … return value ``` Then use it like this: ``` >>> b = Bucket() >>> b["foo"] = 42 >>> b["foo"] 42 >>> ``` This would be the most Pythonic way to do it.
One option that would require few changes would be adding an optional third argument to `bucket_loop` to use for assignment: ``` empty = object() # An object that's guaranteed not to be in your htable def bucket_loop(htable, key, value=empty): bucket = hashtable_get_bucket(htable, key) for entry in bucket: if entry[0] == key: if value is not empty: # Reference (id) comparison entry[1] = value return entry[1] else: # I think this else is unnecessary/buggy return None ``` However, a few pointers: 1. I agree with Ignacio Vazquez-Abrams and David Wolever, a class would be better; 2. Since a bucket can have more than one key/value pair, you shouldn't return None if the first entry didn't match your key. Loop through all of them, and only return None at the end (you can omit that statement too; the default behavior is to return None); 3. If your htable doesn't admit `None` as a value, you can use it instead of `empty`.
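With that change, the two wrappers from the question collapse to something like this sketch (same names as the question's code, and assuming, as the answer notes, that `None` is never stored as a value):

```
def hashtable_update(htable, key, value):
    if bucket_loop(htable, key) is None:
        hashtable_get_bucket(htable, key).append([key, value])
    else:
        bucket_loop(htable, key, value)   # third argument performs the assignment

def hashtable_lookup(htable, key):
    return bucket_loop(htable, key)
```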
9,866,923
I'm trying to solve this newbie puzzle: I've created this function: ``` def bucket_loop(htable, key): bucket = hashtable_get_bucket(htable, key) for entry in bucket: if entry[0] == key: return entry[1] return None ``` And I have to call it in two other functions (bellow) in the following way: to change the value of the element entry[1] or to append to this list (entry) a new element. But I can't do that calling the function bucket\_loop the way I did because **"you can't assign to function call"** (assigning to a function call is illegal in Python). What is the alternative (most similar to the code I wrote) to do this (bucket\_loop(htable, key) = value and hashtable\_get\_bucket(htable, key).append([key, value]))? ``` def hashtable_update(htable, key, value): if bucket_loop(htable, key) != None: bucket_loop(htable, key) = value else: hashtable_get_bucket(htable, key).append([key, value]) def hashtable_lookup(htable, key): return bucket_loop(htable, key) ``` Thanks, in advance, for any help! This is the rest of the code to make this script works: ``` def make_hashtable(size): table = [] for unused in range(0, size): table.append([]) return table def hash_string(s, size): h = 0 for c in s: h = h + ord(c) return h % size def hashtable_get_bucket(htable, key): return htable[hash_string(key, len(htable))] ``` Similar question (but didn't help me): [SyntaxError: "can't assign to function call"](https://stackoverflow.com/questions/5964927/python-cannot-assign-function-call)
2012/03/26
[ "https://Stackoverflow.com/questions/9866923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/955883/" ]
In general, there are three things you can do: 1. Write “setter” functions (ex, `bucket_set`) 2. Return mutable values (ex, `bucket_get(table, key).append(42)` if the value is a `list`) 3. Use a class which overrides `__getitem__` and `__setitem__` For example, you could have a class like: ``` class Bucket(object): def __setitem__(self, key, value): # … implementation … def __getitem__(self, key): # … implementation … return value ``` Then use it like this: ``` >>> b = Bucket() >>> b["foo"] = 42 >>> b["foo"] 42 >>> ``` This would be the most Pythonic way to do it.
So you're basically cheating at Udacity, which is an online CS class / university? The funny part is you couldn't even state the question properly. Next time cheat thoroughly and paste the two functions you're supposed to simplify, and request that someone simplify them by creating a third function containing the overlapping code. It doesn't matter anyway, because if this is the one you need help with, you're likely not doing very well in the class. You were also able to solve the problem without using most of these tools; it was an exercise in understanding how to identify and handle redundancies, NOT efficiency... Real instructions: Modify the code for both hashtable\_update and hashtable\_lookup to have the same behavior they have now, but using fewer lines of code in each procedure. You should define a new procedure, helper, to help with this. Your new version should have approximately the same running time as the original version, but neither hashtable\_update nor hashtable\_lookup should include any for or while loop, and the block of each procedure should be no more than 6 lines of code. Seriously, cheating is lame.
9,866,923
I'm trying to solve this newbie puzzle: I've created this function: ``` def bucket_loop(htable, key): bucket = hashtable_get_bucket(htable, key) for entry in bucket: if entry[0] == key: return entry[1] return None ``` And I have to call it in two other functions (bellow) in the following way: to change the value of the element entry[1] or to append to this list (entry) a new element. But I can't do that calling the function bucket\_loop the way I did because **"you can't assign to function call"** (assigning to a function call is illegal in Python). What is the alternative (most similar to the code I wrote) to do this (bucket\_loop(htable, key) = value and hashtable\_get\_bucket(htable, key).append([key, value]))? ``` def hashtable_update(htable, key, value): if bucket_loop(htable, key) != None: bucket_loop(htable, key) = value else: hashtable_get_bucket(htable, key).append([key, value]) def hashtable_lookup(htable, key): return bucket_loop(htable, key) ``` Thanks, in advance, for any help! This is the rest of the code to make this script works: ``` def make_hashtable(size): table = [] for unused in range(0, size): table.append([]) return table def hash_string(s, size): h = 0 for c in s: h = h + ord(c) return h % size def hashtable_get_bucket(htable, key): return htable[hash_string(key, len(htable))] ``` Similar question (but didn't help me): [SyntaxError: "can't assign to function call"](https://stackoverflow.com/questions/5964927/python-cannot-assign-function-call)
2012/03/26
[ "https://Stackoverflow.com/questions/9866923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/955883/" ]
One option that would require few changes would be adding an optional third argument to `bucket_loop` to use for assignment: ``` empty = object() # An object that's guaranteed not to be in your htable def bucket_loop(htable, key, value=empty): bucket = hashtable_get_bucket(htable, key) for entry in bucket: if entry[0] == key: if value is not empty: # Reference (id) comparison entry[1] = value return entry[1] else: # I think this else is unnecessary/buggy return None ``` However, a few pointers: 1. I agree with Ignacio Vazquez-Abrams and David Wolever, a class would be better; 2. Since a bucket can have more than one key/value pair, you shouldn't return None if the first entry didn't match your key. Loop through all of them, and only return None at the end (you can omit that statement too; the default behavior is to return None); 3. If your htable doesn't admit `None` as a value, you can use it instead of `empty`.
So you're basically cheating at Udacity, which is an online CS class / university? The funny part is you couldn't even state the question properly. Next time cheat thoroughly and paste the two functions you're supposed to simplify, and request that someone simplify them by creating a third function containing the overlapping code. It doesn't matter anyway, because if this is the one you need help with, you're likely not doing very well in the class. You were also able to solve the problem without using most of these tools; it was an exercise in understanding how to identify and handle redundancies, NOT efficiency... Real instructions: Modify the code for both hashtable\_update and hashtable\_lookup to have the same behavior they have now, but using fewer lines of code in each procedure. You should define a new procedure, helper, to help with this. Your new version should have approximately the same running time as the original version, but neither hashtable\_update nor hashtable\_lookup should include any for or while loop, and the block of each procedure should be no more than 6 lines of code. Seriously, cheating is lame.
62,065,607
I run into problems when calling Spark's MinHashLSH's approxSimilarityJoin on a dataframe of (name\_id, name) combinations. **A summary of the problem I try to solve:** I have a dataframe of around 30 million unique (name\_id, name) combinations for company names. Some of those names refer to the same company, but are (i) either misspelled, and/or (ii) include additional names. Performing fuzzy string matching for every combination is not possible. To reduce the number of fuzzy string matching combinations, I use MinHashLSH in Spark. My intended approach is to use a approxSimilarityJoin (self-join) with a relatively large Jaccard threshold, such that I am able to run a fuzzy matching algorithm on the matched combinations to further improve the disambiguation. **A summary of the steps I took:** 1. Used CountVectorizer to create a vector of character counts for every name, 2. Used MinHashLSH and its approxSimilarityJoin with the following settings: * numHashTables=100 * threshold=0.3 (Jaccard threshold for approxSimilarityJoin) 3. After the approxSimilarityJoin, I remove duplicate combinations (for which holds that there exists a matched combination (i,j) and (j,i), then I remove (j,i)) 4. After removing the duplicate combinations, I run a fuzzy string matching algorithm using the FuzzyWuzzy package to reduce the number of records and improve the disambiguation of the names. 5. Eventually I run a connectedComponents algorithm on the remaining edges (i,j) to match which company names belong together. **Part of code used:** ``` id_col = 'id' name_col = 'name' num_hastables = 100 max_jaccard = 0.3 fuzzy_threshold = 90 fuzzy_method = fuzz.token_set_ratio # Calculate edges using minhash practices edges = MinHashLSH(inputCol='vectorized_char_lst', outputCol='hashes', numHashTables=num_hastables).\ fit(data).\ approxSimilarityJoin(data, data, max_jaccard).\ select(col('datasetA.'+id_col).alias('src'), col('datasetA.clean').alias('src_name'), col('datasetB.'+id_col).alias('dst'), col('datasetB.clean').alias('dst_name')).\ withColumn('comb', sort_array(array(*('src', 'dst')))).\ dropDuplicates(['comb']).\ rdd.\ filter(lambda x: fuzzy_method(x['src_name'], x['dst_name']) >= fuzzy_threshold if x['src'] != x['dst'] else False).\ toDF().\ drop(*('src_name', 'dst_name', 'comb')) ``` **Explain plan of `edges`** ``` == Physical Plan == *(5) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[]) +- Exchange hashpartitioning(datasetA#232, datasetB#263, 200) +- *(4) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[]) +- *(4) Project [datasetA#232, datasetB#263] +- *(4) BroadcastHashJoin [entry#233, hashValue#234], [entry#264, hashValue#265], Inner, BuildRight, (UDF(datasetA#232.vectorized_char_lst, datasetB#263.vectorized_char_lst) < 0.3) :- *(4) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#225) AS datasetA#232, entry#233, hashValue#234] : +- *(4) Filter isnotnull(hashValue#234) : +- Generate posexplode(hashes#225), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#225], false, [entry#233, hashValue#234] : +- *(1) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#225] : +- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107] : +- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas) : +- *(4) 
Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107] : +- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116] : +- SortAggregate(key=[name#11], functions=[first(id#10, false)]) : +- *(3) Sort [name#11 ASC NULLS FIRST], false, 0 : +- Exchange hashpartitioning(name#11, 200) : +- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)]) : +- *(2) Sort [name#11 ASC NULLS FIRST], false, 0 : +- Exchange RoundRobinPartitioning(8) : +- *(1) Filter AtLeastNNulls(n, id#10,name#11) : +- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string> +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, int, false], input[2, vector, true])) +- *(3) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#256) AS datasetB#263, entry#264, hashValue#265] +- *(3) Filter isnotnull(hashValue#265) +- Generate posexplode(hashes#256), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#256], false, [entry#264, hashValue#265] +- *(2) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#256] +- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107] +- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas) +- *(4) Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107] +- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116] +- SortAggregate(key=[name#11], functions=[first(id#10, false)]) +- *(3) Sort [name#11 ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(name#11, 200) +- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)]) +- *(2) Sort [name#11 ASC NULLS FIRST], false, 0 +- Exchange RoundRobinPartitioning(8) +- *(1) Filter AtLeastNNulls(n, id#10,name#11) +- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string> ``` **How `data` looks:** ``` +-------+--------------------+--------------------+--------------------+--------------------+ | id| name| clean| char_lst| vectorized_char_lst| +-------+--------------------+--------------------+--------------------+--------------------+ |3633038|MURATA MACHINERY LTD| MURATA MACHINERY|[M, U, R, A, T, A...|(33,[0,1,2,3,4,5,...| |3632811|SOCIETE ANONYME D...|SOCIETE ANONYME D...|[S, O, C, I, E, T...|(33,[0,1,2,3,4,5,...| |3632655|FUJIFILM CORPORATION| FUJIFILM|[F, U, J, I, F, I...|(33,[3,10,12,13,2...| |3633318|HEINE OPTOTECHNIK...|HEINE OPTOTECHNIK...|[H, E, I, N, E, ...|(33,[0,1,2,3,4,5,...| |3633523|SUNBEAM PRODUCTS INC| SUNBEAM PRODUCTS|[S, U, N, B, E, A...|(33,[0,1,2,4,5,6,...| |3633300| HIVAL LTD| HIVAL| [H, I, V, A, L]|(33,[2,3,10,11,21...| |3632657| NSK LTD| NSK| [N, S, K]|(33,[5,6,16],[1.0...| |3633240|REHABILITATION IN...|REHABILITATION IN...|[R, E, H, A, B, I...|(33,[0,1,2,3,4,5,...| 
|3632732|STUDIENGESELLSCHA...|STUDIENGESELLSCHA...|[S, T, U, D, I, E...|(33,[0,1,2,3,4,5,...| |3632866|ENERGY CONVERSION...|ENERGY CONVERSION...|[E, N, E, R, G, Y...|(33,[0,1,3,5,6,7,...| |3632895|ERGENICS POWER SY...|ERGENICS POWER SY...|[E, R, G, E, N, I...|(33,[0,1,3,4,5,6,...| |3632897| MOLI ENERGY LIMITED| MOLI ENERGY|[M, O, L, I, , E...|(33,[0,1,3,5,7,8,...| |3633275| NORDSON CORPORATION| NORDSON|[N, O, R, D, S, O...|(33,[5,6,7,8,14],...| |3633256| PEROXIDCHEMIE GMBH| PEROXIDCHEMIE|[P, E, R, O, X, I...|(33,[0,3,7,8,9,11...| |3632695| POWER CELL INC| POWER CELL|[P, O, W, E, R, ...|(33,[0,1,7,8,9,10...| |3633037| ERGENICS INC| ERGENICS|[E, R, G, E, N, I...|(33,[0,3,5,6,8,9,...| |3632878| FORD MOTOR COMPANY| FORD MOTOR|[F, O, R, D, , M...|(33,[1,4,7,8,13,1...| |3632573| SAFT AMERICA INC| SAFT AMERICA|[S, A, F, T, , A...|(33,[0,1,2,3,4,6,...| |3632852|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...| |3632698| KRUPPKOPPERS GMBH| KRUPPKOPPERS|[K, R, U, P, P, K...|(33,[0,6,7,8,12,1...| |3633150|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...| |3632761|AMERICAN TELEPHON...|AMERICAN TELEPHON...|[A, M, E, R, I, C...|(33,[0,1,2,3,4,5,...| |3632757|HITACHI KOKI COMP...| HITACHI KOKI|[H, I, T, A, C, H...|(33,[1,2,3,4,7,9,...| |3632836|HUGHES AIRCRAFT C...| HUGHES AIRCRAFT|[H, U, G, H, E, S...|(33,[0,1,2,3,4,6,...| |3633152| SOSY INC| SOSY| [S, O, S, Y]|(33,[6,7,18],[2.0...| |3633052|HAMAMATSU PHOTONI...|HAMAMATSU PHOTONI...|[H, A, M, A, M, A...|(33,[1,2,3,4,5,6,...| |3633450| AKZO NOBEL NV| AKZO NOBEL|[A, K, Z, O, , N...|(33,[0,1,2,5,7,10...| |3632713| ELTRON RESEARCH INC| ELTRON RESEARCH|[E, L, T, R, O, N...|(33,[0,1,2,4,5,6,...| |3632533|NEC ELECTRONICS C...| NEC ELECTRONICS|[N, E, C, , E, L...|(33,[0,1,3,4,5,6,...| |3632562| TARGETTI SANKEY SPA| TARGETTI SANKEY SPA|[T, A, R, G, E, T...|(33,[0,1,2,3,4,5,...| +-------+--------------------+--------------------+--------------------+--------------------+ only showing top 30 rows ``` **Hardware used:** 1. Master node: m5.2xlarge 8 vCore, 32 GiB memory, EBS only storage EBS Storage:128 GiB 2. Slave nodes (10x): m5.4xlarge 16 vCore, 64 GiB memory, EBS only storage EBS Storage:500 GiB **Spark-submit settings used:** ``` spark-submit --master yarn --conf "spark.executor.instances=40" --conf "spark.default.parallelism=640" --conf "spark.shuffle.partitions=2000" --conf "spark.executor.cores=4" --conf "spark.executor.memory=14g" --conf "spark.driver.memory=14g" --conf "spark.driver.maxResultSize=14g" --conf "spark.dynamicAllocation.enabled=false" --packages graphframes:graphframes:0.7.0-spark2.4-s_2.11 run_disambiguation.py ``` **Task errors from Web UI** ``` ExecutorLostFailure (executor 21 exited caused by one of the running tasks) Reason: Slave lost ``` ``` ExecutorLostFailure (executor 31 exited unrelated to the running tasks) Reason: Container marked as failed: container_1590592506722_0001_02_000002 on host: ip-172-31-47-180.eu-central-1.compute.internal. Exit status: -100. Diagnostics: Container released on a *lost* node. 
``` **(Part of) executor logs:** ``` 20/05/27 16:29:09 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (25 times so far) 20/05/27 16:29:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:29:15 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:29:17 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (0 time so far) 20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:29:33 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:29:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (1 time so far) 20/05/27 16:29:42 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:29:46 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:29:53 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:29:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (2 times so far) 20/05/27 16:30:00 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:30:05 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:30:10 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:30:15 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (3 times so far) 20/05/27 16:30:19 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:30:22 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:30:29 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:30:32 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (4 times so far) 20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:30:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:30:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (5 times so far) 20/05/27 16:30:55 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:30:59 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:31:03 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:31:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (6 times so far) 20/05/27 16:31:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:31:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:31:22 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 
1988.0 MB to disk (35 times so far) 20/05/27 16:31:24 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (7 times so far) 20/05/27 16:31:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:31:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:31:41 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:31:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (8 times so far) 20/05/27 16:31:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:31:48 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:32:02 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (9 times so far) 20/05/27 16:32:04 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:32:08 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:32:19 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:32:20 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:21 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (10 times so far) 20/05/27 16:32:26 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (11 times so far) 20/05/27 16:32:38 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:32:45 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:51 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:32:56 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (12 times so far) 20/05/27 16:32:58 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:33:03 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:33:08 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:33:13 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (13 times so far) 20/05/27 16:33:15 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:33:20 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:33:26 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:33:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:33:31 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (14 times so far) 20/05/27 16:33:36 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 
1988.0 MB to disk (40 times so far) 20/05/27 16:33:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:33:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:33:51 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (15 times so far) 20/05/27 16:33:54 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:34:03 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:34:04 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:08 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (16 times so far) 20/05/27 16:34:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:34:16 INFO PythonUDFRunner: Times: total = 774701, boot = 3, init = 10, finish = 774688 20/05/27 16:34:21 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (17 times so far) 20/05/27 16:34:30 INFO PythonUDFRunner: Times: total = 773372, boot = 2, init = 9, finish = 773361 20/05/27 16:34:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:34:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (18 times so far) 20/05/27 16:34:46 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (19 times so far) 20/05/27 16:35:01 INFO PythonUDFRunner: Times: total = 776905, boot = 3, init = 11, finish = 776891 20/05/27 16:35:05 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (20 times so far) 20/05/27 16:35:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (21 times so far) 20/05/27 16:35:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (22 times so far) 20/05/27 16:35:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (23 times so far) 20/05/27 16:36:10 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (24 times so far) 20/05/27 16:36:29 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (25 times so far) 20/05/27 16:36:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:37:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:37:25 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:37:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:38:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:38:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:38:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:38:59 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (33 times so far) 
20/05/27 16:39:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:39:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:39:58 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:40:18 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:40:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:40:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:41:16 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:41:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:41:55 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:42:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:42:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:42:59 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM 20/05/27 16:42:59 INFO DiskBlockManager: Shutdown hook called 20/05/27 16:42:59 INFO ShutdownHookManager: Shutdown hook called 20/05/27 16:42:59 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1590592506722_0001/spark-73af8e3b-f428-47d4-9e13-fed4e19cc2cd ``` ``` 2020-05-27T16:41:16.336+0000: [GC (Allocation Failure) 2020-05-27T16:41:16.336+0000: [ParNew: 272234K->242K(305984K), 0.0094375 secs] 9076907K->8804915K(13188748K), 0.0094895 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:34.686+0000: [GC (Allocation Failure) 2020-05-27T16:41:34.686+0000: [ParNew: 272242K->257K(305984K), 0.0084179 secs] 9076915K->8804947K(13188748K), 0.0084840 secs] [Times: user=0.09 sys=0.01, real=0.01 secs] 2020-05-27T16:41:35.145+0000: [GC (Allocation Failure) 2020-05-27T16:41:35.145+0000: [ParNew: 272257K->1382K(305984K), 0.0095541 secs] 9076947K->8806073K(13188748K), 0.0096080 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:55.077+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.077+0000: [ParNew: 273382K->2683K(305984K), 0.0097177 secs] 9078073K->8807392K(13188748K), 0.0097754 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:55.513+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.513+0000: [ParNew: 274683K->3025K(305984K), 0.0093345 secs] 9079392K->8807734K(13188748K), 0.0093892 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:42:05.481+0000: [GC (Allocation Failure) 2020-05-27T16:42:05.481+0000: [ParNew: 275025K->4102K(305984K), 0.0092950 secs] 9079734K->8808830K(13188748K), 0.0093464 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:42:18.711+0000: [GC (Allocation Failure) 2020-05-27T16:42:18.711+0000: [ParNew: 276102K->2972K(305984K), 0.0098928 secs] 9080830K->8807700K(13188748K), 0.0099510 secs] [Times: user=0.13 sys=0.00, real=0.01 secs] 2020-05-27T16:42:36.493+0000: [GC (Allocation Failure) 2020-05-27T16:42:36.493+0000: [ParNew: 274972K->3852K(305984K), 0.0094324 secs] 9079700K->8808598K(13188748K), 0.0094897 secs] [Times: user=0.11 sys=0.00, real=0.01 secs] 2020-05-27T16:42:40.880+0000: [GC (Allocation Failure) 2020-05-27T16:42:40.880+0000: [ParNew: 
275852K->2568K(305984K), 0.0111794 secs] 9080598K->8807882K(13188748K), 0.0112352 secs] [Times: user=0.13 sys=0.00, real=0.01 secs] Heap par new generation total 305984K, used 261139K [0x0000000440000000, 0x0000000454c00000, 0x0000000483990000) eden space 272000K, 95% used [0x0000000440000000, 0x000000044fc82cf8, 0x00000004509a0000) from space 33984K, 7% used [0x00000004509a0000, 0x0000000450c220a8, 0x0000000452ad0000) to space 33984K, 0% used [0x0000000452ad0000, 0x0000000452ad0000, 0x0000000454c00000) concurrent mark-sweep generation total 12882764K, used 8805314K [0x0000000483990000, 0x0000000795e63000, 0x00000007c0000000) Metaspace used 77726K, capacity 79553K, committed 79604K, reserved 1118208K class space used 10289K, capacity 10704K, committed 10740K, reserved 1048576K ``` [Screenshot of executors](https://i.stack.imgur.com/MsKzB.png) **What I tried:** * Changing `spark.sql.shuffle.partitions` * Changing `spark.default.parallelism` * Repartition the dataframe How can I solve this issue? Thanks in advance! Thijs
2020/05/28
[ "https://Stackoverflow.com/questions/62065607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12383245/" ]
`approxSimilarityJoin` will only parallelize well across workers if the tokens being input into MinHash are sufficiently distinct. Since individual character tokens appear frequently across many records, include an `NGram` transformation on your character list to make the appearance of each token less frequent; this will greatly reduce data skew and relieve the memory strain. MinHash simulates the process of creating a random permutation of your token population and selects the token in the sample set that appears first in the permutation. Since you are using individual characters as tokens, suppose you select a MinHash seed that makes the character `e` the first in your random permutation. In that case, every row containing the letter `e` will have a matching MinHash and will be shuffled to the same worker for set comparison. This will cause extreme data skew and out-of-memory errors.
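For illustration, a minimal PySpark sketch of inserting an `NGram` stage before the vectorizer. This is a sketch only: the column names mirror the question's schema, and `n=3` is an arbitrary starting choice, not a recommendation.

```python
from pyspark.ml.feature import NGram, CountVectorizer, MinHashLSH

# Turn the per-character list into overlapping character n-grams,
# so individual tokens become far less common across records.
ngrams = NGram(n=3, inputCol='char_lst', outputCol='ng_char_lst').transform(data)

# Vectorize the n-grams and hash them exactly as before.
vectorized = CountVectorizer(inputCol='ng_char_lst',
                             outputCol='ng_vec').fit(ngrams).transform(ngrams)
model = MinHashLSH(inputCol='ng_vec', outputCol='hashes',
                   numHashTables=100).fit(vectorized)
edges = model.approxSimilarityJoin(vectorized, vectorized, 0.3)
```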
Thanks for the detailed explanation. What threshold are you using, and how are you reducing false negatives?
62,065,607
I run into problems when calling Spark's MinHashLSH's approxSimilarityJoin on a dataframe of (name\_id, name) combinations. **A summary of the problem I try to solve:** I have a dataframe of around 30 million unique (name\_id, name) combinations for company names. Some of those names refer to the same company, but are (i) either misspelled, and/or (ii) include additional names. Performing fuzzy string matching for every combination is not possible. To reduce the number of fuzzy string matching combinations, I use MinHashLSH in Spark. My intended approach is to use a approxSimilarityJoin (self-join) with a relatively large Jaccard threshold, such that I am able to run a fuzzy matching algorithm on the matched combinations to further improve the disambiguation. **A summary of the steps I took:** 1. Used CountVectorizer to create a vector of character counts for every name, 2. Used MinHashLSH and its approxSimilarityJoin with the following settings: * numHashTables=100 * threshold=0.3 (Jaccard threshold for approxSimilarityJoin) 3. After the approxSimilarityJoin, I remove duplicate combinations (for which holds that there exists a matched combination (i,j) and (j,i), then I remove (j,i)) 4. After removing the duplicate combinations, I run a fuzzy string matching algorithm using the FuzzyWuzzy package to reduce the number of records and improve the disambiguation of the names. 5. Eventually I run a connectedComponents algorithm on the remaining edges (i,j) to match which company names belong together. **Part of code used:** ``` id_col = 'id' name_col = 'name' num_hastables = 100 max_jaccard = 0.3 fuzzy_threshold = 90 fuzzy_method = fuzz.token_set_ratio # Calculate edges using minhash practices edges = MinHashLSH(inputCol='vectorized_char_lst', outputCol='hashes', numHashTables=num_hastables).\ fit(data).\ approxSimilarityJoin(data, data, max_jaccard).\ select(col('datasetA.'+id_col).alias('src'), col('datasetA.clean').alias('src_name'), col('datasetB.'+id_col).alias('dst'), col('datasetB.clean').alias('dst_name')).\ withColumn('comb', sort_array(array(*('src', 'dst')))).\ dropDuplicates(['comb']).\ rdd.\ filter(lambda x: fuzzy_method(x['src_name'], x['dst_name']) >= fuzzy_threshold if x['src'] != x['dst'] else False).\ toDF().\ drop(*('src_name', 'dst_name', 'comb')) ``` **Explain plan of `edges`** ``` == Physical Plan == *(5) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[]) +- Exchange hashpartitioning(datasetA#232, datasetB#263, 200) +- *(4) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[]) +- *(4) Project [datasetA#232, datasetB#263] +- *(4) BroadcastHashJoin [entry#233, hashValue#234], [entry#264, hashValue#265], Inner, BuildRight, (UDF(datasetA#232.vectorized_char_lst, datasetB#263.vectorized_char_lst) < 0.3) :- *(4) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#225) AS datasetA#232, entry#233, hashValue#234] : +- *(4) Filter isnotnull(hashValue#234) : +- Generate posexplode(hashes#225), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#225], false, [entry#233, hashValue#234] : +- *(1) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#225] : +- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107] : +- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas) : +- *(4) 
Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107] : +- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116] : +- SortAggregate(key=[name#11], functions=[first(id#10, false)]) : +- *(3) Sort [name#11 ASC NULLS FIRST], false, 0 : +- Exchange hashpartitioning(name#11, 200) : +- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)]) : +- *(2) Sort [name#11 ASC NULLS FIRST], false, 0 : +- Exchange RoundRobinPartitioning(8) : +- *(1) Filter AtLeastNNulls(n, id#10,name#11) : +- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string> +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, int, false], input[2, vector, true])) +- *(3) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#256) AS datasetB#263, entry#264, hashValue#265] +- *(3) Filter isnotnull(hashValue#265) +- Generate posexplode(hashes#256), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#256], false, [entry#264, hashValue#265] +- *(2) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#256] +- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107] +- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas) +- *(4) Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107] +- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116] +- SortAggregate(key=[name#11], functions=[first(id#10, false)]) +- *(3) Sort [name#11 ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(name#11, 200) +- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)]) +- *(2) Sort [name#11 ASC NULLS FIRST], false, 0 +- Exchange RoundRobinPartitioning(8) +- *(1) Filter AtLeastNNulls(n, id#10,name#11) +- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string> ``` **How `data` looks:** ``` +-------+--------------------+--------------------+--------------------+--------------------+ | id| name| clean| char_lst| vectorized_char_lst| +-------+--------------------+--------------------+--------------------+--------------------+ |3633038|MURATA MACHINERY LTD| MURATA MACHINERY|[M, U, R, A, T, A...|(33,[0,1,2,3,4,5,...| |3632811|SOCIETE ANONYME D...|SOCIETE ANONYME D...|[S, O, C, I, E, T...|(33,[0,1,2,3,4,5,...| |3632655|FUJIFILM CORPORATION| FUJIFILM|[F, U, J, I, F, I...|(33,[3,10,12,13,2...| |3633318|HEINE OPTOTECHNIK...|HEINE OPTOTECHNIK...|[H, E, I, N, E, ...|(33,[0,1,2,3,4,5,...| |3633523|SUNBEAM PRODUCTS INC| SUNBEAM PRODUCTS|[S, U, N, B, E, A...|(33,[0,1,2,4,5,6,...| |3633300| HIVAL LTD| HIVAL| [H, I, V, A, L]|(33,[2,3,10,11,21...| |3632657| NSK LTD| NSK| [N, S, K]|(33,[5,6,16],[1.0...| |3633240|REHABILITATION IN...|REHABILITATION IN...|[R, E, H, A, B, I...|(33,[0,1,2,3,4,5,...| 
|3632732|STUDIENGESELLSCHA...|STUDIENGESELLSCHA...|[S, T, U, D, I, E...|(33,[0,1,2,3,4,5,...| |3632866|ENERGY CONVERSION...|ENERGY CONVERSION...|[E, N, E, R, G, Y...|(33,[0,1,3,5,6,7,...| |3632895|ERGENICS POWER SY...|ERGENICS POWER SY...|[E, R, G, E, N, I...|(33,[0,1,3,4,5,6,...| |3632897| MOLI ENERGY LIMITED| MOLI ENERGY|[M, O, L, I, , E...|(33,[0,1,3,5,7,8,...| |3633275| NORDSON CORPORATION| NORDSON|[N, O, R, D, S, O...|(33,[5,6,7,8,14],...| |3633256| PEROXIDCHEMIE GMBH| PEROXIDCHEMIE|[P, E, R, O, X, I...|(33,[0,3,7,8,9,11...| |3632695| POWER CELL INC| POWER CELL|[P, O, W, E, R, ...|(33,[0,1,7,8,9,10...| |3633037| ERGENICS INC| ERGENICS|[E, R, G, E, N, I...|(33,[0,3,5,6,8,9,...| |3632878| FORD MOTOR COMPANY| FORD MOTOR|[F, O, R, D, , M...|(33,[1,4,7,8,13,1...| |3632573| SAFT AMERICA INC| SAFT AMERICA|[S, A, F, T, , A...|(33,[0,1,2,3,4,6,...| |3632852|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...| |3632698| KRUPPKOPPERS GMBH| KRUPPKOPPERS|[K, R, U, P, P, K...|(33,[0,6,7,8,12,1...| |3633150|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...| |3632761|AMERICAN TELEPHON...|AMERICAN TELEPHON...|[A, M, E, R, I, C...|(33,[0,1,2,3,4,5,...| |3632757|HITACHI KOKI COMP...| HITACHI KOKI|[H, I, T, A, C, H...|(33,[1,2,3,4,7,9,...| |3632836|HUGHES AIRCRAFT C...| HUGHES AIRCRAFT|[H, U, G, H, E, S...|(33,[0,1,2,3,4,6,...| |3633152| SOSY INC| SOSY| [S, O, S, Y]|(33,[6,7,18],[2.0...| |3633052|HAMAMATSU PHOTONI...|HAMAMATSU PHOTONI...|[H, A, M, A, M, A...|(33,[1,2,3,4,5,6,...| |3633450| AKZO NOBEL NV| AKZO NOBEL|[A, K, Z, O, , N...|(33,[0,1,2,5,7,10...| |3632713| ELTRON RESEARCH INC| ELTRON RESEARCH|[E, L, T, R, O, N...|(33,[0,1,2,4,5,6,...| |3632533|NEC ELECTRONICS C...| NEC ELECTRONICS|[N, E, C, , E, L...|(33,[0,1,3,4,5,6,...| |3632562| TARGETTI SANKEY SPA| TARGETTI SANKEY SPA|[T, A, R, G, E, T...|(33,[0,1,2,3,4,5,...| +-------+--------------------+--------------------+--------------------+--------------------+ only showing top 30 rows ``` **Hardware used:** 1. Master node: m5.2xlarge 8 vCore, 32 GiB memory, EBS only storage EBS Storage:128 GiB 2. Slave nodes (10x): m5.4xlarge 16 vCore, 64 GiB memory, EBS only storage EBS Storage:500 GiB **Spark-submit settings used:** ``` spark-submit --master yarn --conf "spark.executor.instances=40" --conf "spark.default.parallelism=640" --conf "spark.shuffle.partitions=2000" --conf "spark.executor.cores=4" --conf "spark.executor.memory=14g" --conf "spark.driver.memory=14g" --conf "spark.driver.maxResultSize=14g" --conf "spark.dynamicAllocation.enabled=false" --packages graphframes:graphframes:0.7.0-spark2.4-s_2.11 run_disambiguation.py ``` **Task errors from Web UI** ``` ExecutorLostFailure (executor 21 exited caused by one of the running tasks) Reason: Slave lost ``` ``` ExecutorLostFailure (executor 31 exited unrelated to the running tasks) Reason: Container marked as failed: container_1590592506722_0001_02_000002 on host: ip-172-31-47-180.eu-central-1.compute.internal. Exit status: -100. Diagnostics: Container released on a *lost* node. 
``` **(Part of) executor logs:** ``` 20/05/27 16:29:09 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (25 times so far) 20/05/27 16:29:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:29:15 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:29:17 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (0 time so far) 20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:29:33 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:29:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (1 time so far) 20/05/27 16:29:42 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:29:46 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:29:53 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:29:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (2 times so far) 20/05/27 16:30:00 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:30:05 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:30:10 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:30:15 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (3 times so far) 20/05/27 16:30:19 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:30:22 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:30:29 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:30:32 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (4 times so far) 20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:30:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:30:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (5 times so far) 20/05/27 16:30:55 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:30:59 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:31:03 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:31:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (6 times so far) 20/05/27 16:31:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:31:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:31:22 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 
1988.0 MB to disk (35 times so far) 20/05/27 16:31:24 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (7 times so far) 20/05/27 16:31:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:31:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (33 times so far) 20/05/27 16:31:41 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:31:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (8 times so far) 20/05/27 16:31:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:31:48 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:32:02 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (9 times so far) 20/05/27 16:32:04 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:32:08 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:32:19 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:32:20 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:21 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (10 times so far) 20/05/27 16:32:26 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (11 times so far) 20/05/27 16:32:38 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:32:45 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:32:51 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:32:56 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (12 times so far) 20/05/27 16:32:58 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:33:03 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:33:08 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:33:13 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (13 times so far) 20/05/27 16:33:15 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:33:20 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:33:26 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:33:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:33:31 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (14 times so far) 20/05/27 16:33:36 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 
1988.0 MB to disk (40 times so far) 20/05/27 16:33:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:33:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:33:51 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (15 times so far) 20/05/27 16:33:54 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:34:03 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:34:04 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:08 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (16 times so far) 20/05/27 16:34:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:34:16 INFO PythonUDFRunner: Times: total = 774701, boot = 3, init = 10, finish = 774688 20/05/27 16:34:21 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (17 times so far) 20/05/27 16:34:30 INFO PythonUDFRunner: Times: total = 773372, boot = 2, init = 9, finish = 773361 20/05/27 16:34:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:34:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (18 times so far) 20/05/27 16:34:46 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:34:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (19 times so far) 20/05/27 16:35:01 INFO PythonUDFRunner: Times: total = 776905, boot = 3, init = 11, finish = 776891 20/05/27 16:35:05 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (20 times so far) 20/05/27 16:35:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (21 times so far) 20/05/27 16:35:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (22 times so far) 20/05/27 16:35:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (23 times so far) 20/05/27 16:36:10 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (24 times so far) 20/05/27 16:36:29 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (25 times so far) 20/05/27 16:36:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (26 times so far) 20/05/27 16:37:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (27 times so far) 20/05/27 16:37:25 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (28 times so far) 20/05/27 16:37:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (29 times so far) 20/05/27 16:38:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (30 times so far) 20/05/27 16:38:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (31 times so far) 20/05/27 16:38:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (32 times so far) 20/05/27 16:38:59 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (33 times so far) 
20/05/27 16:39:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (34 times so far) 20/05/27 16:39:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (35 times so far) 20/05/27 16:39:58 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (36 times so far) 20/05/27 16:40:18 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (37 times so far) 20/05/27 16:40:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (38 times so far) 20/05/27 16:40:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (39 times so far) 20/05/27 16:41:16 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (40 times so far) 20/05/27 16:41:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (41 times so far) 20/05/27 16:41:55 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (42 times so far) 20/05/27 16:42:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (43 times so far) 20/05/27 16:42:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (44 times so far) 20/05/27 16:42:59 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM 20/05/27 16:42:59 INFO DiskBlockManager: Shutdown hook called 20/05/27 16:42:59 INFO ShutdownHookManager: Shutdown hook called 20/05/27 16:42:59 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1590592506722_0001/spark-73af8e3b-f428-47d4-9e13-fed4e19cc2cd ``` ``` 2020-05-27T16:41:16.336+0000: [GC (Allocation Failure) 2020-05-27T16:41:16.336+0000: [ParNew: 272234K->242K(305984K), 0.0094375 secs] 9076907K->8804915K(13188748K), 0.0094895 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:34.686+0000: [GC (Allocation Failure) 2020-05-27T16:41:34.686+0000: [ParNew: 272242K->257K(305984K), 0.0084179 secs] 9076915K->8804947K(13188748K), 0.0084840 secs] [Times: user=0.09 sys=0.01, real=0.01 secs] 2020-05-27T16:41:35.145+0000: [GC (Allocation Failure) 2020-05-27T16:41:35.145+0000: [ParNew: 272257K->1382K(305984K), 0.0095541 secs] 9076947K->8806073K(13188748K), 0.0096080 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:55.077+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.077+0000: [ParNew: 273382K->2683K(305984K), 0.0097177 secs] 9078073K->8807392K(13188748K), 0.0097754 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:41:55.513+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.513+0000: [ParNew: 274683K->3025K(305984K), 0.0093345 secs] 9079392K->8807734K(13188748K), 0.0093892 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:42:05.481+0000: [GC (Allocation Failure) 2020-05-27T16:42:05.481+0000: [ParNew: 275025K->4102K(305984K), 0.0092950 secs] 9079734K->8808830K(13188748K), 0.0093464 secs] [Times: user=0.12 sys=0.00, real=0.01 secs] 2020-05-27T16:42:18.711+0000: [GC (Allocation Failure) 2020-05-27T16:42:18.711+0000: [ParNew: 276102K->2972K(305984K), 0.0098928 secs] 9080830K->8807700K(13188748K), 0.0099510 secs] [Times: user=0.13 sys=0.00, real=0.01 secs] 2020-05-27T16:42:36.493+0000: [GC (Allocation Failure) 2020-05-27T16:42:36.493+0000: [ParNew: 274972K->3852K(305984K), 0.0094324 secs] 9079700K->8808598K(13188748K), 0.0094897 secs] [Times: user=0.11 sys=0.00, real=0.01 secs] 2020-05-27T16:42:40.880+0000: [GC (Allocation Failure) 2020-05-27T16:42:40.880+0000: [ParNew: 
275852K->2568K(305984K), 0.0111794 secs] 9080598K->8807882K(13188748K), 0.0112352 secs] [Times: user=0.13 sys=0.00, real=0.01 secs] Heap par new generation total 305984K, used 261139K [0x0000000440000000, 0x0000000454c00000, 0x0000000483990000) eden space 272000K, 95% used [0x0000000440000000, 0x000000044fc82cf8, 0x00000004509a0000) from space 33984K, 7% used [0x00000004509a0000, 0x0000000450c220a8, 0x0000000452ad0000) to space 33984K, 0% used [0x0000000452ad0000, 0x0000000452ad0000, 0x0000000454c00000) concurrent mark-sweep generation total 12882764K, used 8805314K [0x0000000483990000, 0x0000000795e63000, 0x00000007c0000000) Metaspace used 77726K, capacity 79553K, committed 79604K, reserved 1118208K class space used 10289K, capacity 10704K, committed 10740K, reserved 1048576K ``` [Screenshot of executors](https://i.stack.imgur.com/MsKzB.png) **What I tried:** * Changing `spark.sql.shuffle.partitions` * Changing `spark.default.parallelism` * Repartition the dataframe How can I solve this issue? Thanks in advance! Thijs
2020/05/28
[ "https://Stackoverflow.com/questions/62065607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12383245/" ]
The answer of @lokk3r really helped me in the right direction here. However, there were some other things that I had to do before I was able to run the program without errors. I will share them to help people out that are having similar problems: * First of all, I used `NGrams` as @lokk3r suggested instead of just single characters to avoid extreme data skew inside the MinHashLSH algorithm. When using 4-grams, `data` looks like: ``` +------------------------------+-------+------------------------------+------------------------------+------------------------------+ | name| id| clean| ng_char_lst| vectorized_char_lst| +------------------------------+-------+------------------------------+------------------------------+------------------------------+ | SOCIETE ANONYME DITE SAFT|3632811| SOCIETE ANONYME DITE SAFT|[ S O C, S O C I, O C I E,...|(1332,[64,75,82,84,121,223,...| | MURATA MACHINERY LTD|3633038| MURATA MACHINERY|[ M U R, M U R A, U R A T,...|(1332,[55,315,388,437,526,5...| |HEINE OPTOTECHNIK GMBH AND ...|3633318| HEINE OPTOTECHNIK GMBH AND|[ H E I, H E I N, E I N E,...|(1332,[23,72,216,221,229,34...| | FUJIFILM CORPORATION|3632655| FUJIFILM|[ F U J, F U J I, U J I F,...|(1332,[157,179,882,1028],[1...| | SUNBEAM PRODUCTS INC|3633523| SUNBEAM PRODUCTS|[ S U N, S U N B, U N B E,...|(1332,[99,137,165,175,187,1...| | STUDIENGESELLSCHAFT KOHLE MBH|3632732| STUDIENGESELLSCHAFT KOHLE MBH|[ S T U, S T U D, T U D I,...|(1332,[13,14,23,25,43,52,57...| |REHABILITATION INSTITUTE OF...|3633240|REHABILITATION INSTITUTE OF...|[ R E H, R E H A, E H A B,...|(1332,[20,44,51,118,308,309...| | NORDSON CORPORATION|3633275| NORDSON|[ N O R, N O R D, O R D S,...|(1332,[45,88,582,1282],[1.0...| | ENERGY CONVERSION DEVICES|3632866| ENERGY CONVERSION DEVICES|[ E N E, E N E R, N E R G,...|(1332,[54,76,81,147,202,224...| | MOLI ENERGY LIMITED|3632897| MOLI ENERGY|[ M O L, M O L I, O L I ,...|(1332,[438,495,717,756,1057...| | ERGENICS POWER SYSTEMS INC|3632895| ERGENICS POWER SYSTEMS|[ E R G, E R G E, R G E N,...|(1332,[6,10,18,21,24,35,375...| | POWER CELL INC|3632695| POWER CELL|[ P O W, P O W E, O W E R,...|(1332,[6,10,18,35,126,169,3...| | PEROXIDCHEMIE GMBH|3633256| PEROXIDCHEMIE|[ P E R, P E R O, E R O X,...|(1332,[326,450,532,889,1073...| | FORD MOTOR COMPANY|3632878| FORD MOTOR|[ F O R, F O R D, O R D ,...|(1332,[156,158,186,200,314,...| | ERGENICS INC|3633037| ERGENICS|[ E R G, E R G E, R G E N,...|(1332,[375,642,812,866,1269...| | SAFT AMERICA INC|3632573| SAFT AMERICA|[ S A F, S A F T, A F T ,...|(1332,[498,552,1116],[1.0,1...| | ALCAN INTERNATIONAL LIMITED|3632598| ALCAN INTERNATIONAL|[ A L C, A L C A, L C A N,...|(1332,[20,434,528,549,571,7...| | KRUPPKOPPERS GMBH|3632698| KRUPPKOPPERS|[ K R U, K R U P, R U P P,...|(1332,[664,795,798,1010,114...| | HUGHES AIRCRAFT COMPANY|3632752| HUGHES AIRCRAFT|[ H U G, H U G H, U G H E,...|(1332,[605,632,705,758,807,...| |AMERICAN TELEPHONE AND TELE...|3632761|AMERICAN TELEPHONE AND TELE...|[ A M E, A M E R, M E R I,...|(1332,[19,86,91,126,128,134...| +------------------------------+-------+------------------------------+------------------------------+------------------------------+ ``` Note that I added leading and trailing white spaces on the names, to make sure that the order of words in the name does not matter for the `NGrams`: `'XX YY'` has 3-grams `'XX ', 'X Y', ' YY'`, while `'YY XX'` has 3-grams `'YY ', 'Y X', ' XX'`. This means that both share 0 out of 6 unique `NGrams`. 
If we use leading and trailing white spaces: `' XX YY '` has 3-grams `' XX', 'XX ', 'X Y', ' YY', 'YY '`, while `' YY XX '` has 3-grams `' YY', 'YY ', 'Y X', ' XX', 'XX '`. This means both share 4 out of 6 unique `NGrams`, so there is a much higher probability that both records end up in the same bucket during MinHashLSH. * I experimented with different values of `n` - the input parameter for `NGrams`. I found that both `n=2` and `n=3` still give so much data skew that a few Spark jobs take way too long while others are done within seconds, so you end up waiting forever before the program continues. I now use `n=4`, which still gives substantial skew but is workable. * To reduce the effects of the data skew even more, I used some additional filtering of too (in)frequently occurring `NGrams` in the `CountVectorizer` method of Spark. I set `minDF=2` such that it filters out `NGrams` that occur in only a single name; you cannot match names based on an `NGram` that occurs in just one name anyway. In addition, I set `maxDF=0.001` such that it filters out `NGrams` that occur in more than 0.1% of the names. For approximately 30 million names, this means that `NGrams` occurring in more than 30000 names are filtered out. I figured that an `NGram` that occurs too frequently will not provide useful information on which names can be matched anyway. * I reduced the number of unique names (initially 30 million) to 15 million by filtering out the non-Latin (extended) names. I noticed that non-Latin (e.g. Arabic and Chinese) characters caused a big skew in the data as well. Since I am not primarily interested in disambiguating these company names, I disregarded them from the data set. I filtered using the following regex match: ``` re.fullmatch('[\u0020-\u007F\u00A0-\u00FF\u0100-\u017F\u0180-\u024F]+'.encode(), string_to_filter.encode()) ``` * This is a fairly straightforward piece of advice, but I ran into some problems by not seeing it: make sure you filter the dataset before feeding it to the `MinHashLSH` algorithm, to remove records that have no `NGrams` remaining, either due to the `minDF` and `maxDF` settings or simply because the name is short. Obviously such records will not work in the `MinHashLSH` algorithm. * Finally, regarding the settings of the `spark-submit` command and the hardware settings of the EMR cluster, I found that I didn't need a larger cluster, as some of the answers on the forums suggested. All the above changes made the program run perfectly on a cluster with the settings as provided in my original post. Reducing `spark.shuffle.partitions`, `spark.driver.memory` and `spark.driver.maxResultSize` substantially improved the running time of the program. The `spark-submit` I submitted was: ``` spark-submit --master yarn --conf "spark.executor.instances=40" --conf "spark.default.parallelism=640" --conf "spark.executor.cores=4" --conf "spark.executor.memory=12g" --conf "spark.driver.memory=8g" --conf "spark.driver.maxResultSize=8g" --conf "spark.dynamicAllocation.enabled=false" --packages graphframes:graphframes:0.7.0-spark2.4-s_2.11 run_disambiguation.py ```
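Putting the bullet points together, a minimal PySpark sketch of the resulting pipeline (a sketch under the stated assumptions: column names follow the schema shown above, `char_lst` already contains the padded character list, and the parameter values are the ones from the bullets):

```python
from pyspark.sql import functions as F
from pyspark.sql.types import BooleanType
from pyspark.ml.feature import NGram, CountVectorizer, MinHashLSH

# 4-grams over the padded character list, as described above.
ngrams = NGram(n=4, inputCol='char_lst', outputCol='ng_char_lst').transform(data)

# minDF/maxDF from the bullets: drop NGrams seen in only one name
# or in more than 0.1% of all names.
cv = CountVectorizer(inputCol='ng_char_lst', outputCol='vectorized_char_lst',
                     minDF=2.0, maxDF=0.001)
vectorized = cv.fit(ngrams).transform(ngrams)

# Drop records whose vector is empty after the minDF/maxDF filtering,
# since MinHashLSH cannot hash an all-zero vector.
has_tokens = F.udf(lambda v: v.numNonzeros() > 0, BooleanType())
non_empty = vectorized.filter(has_tokens('vectorized_char_lst'))

mh = MinHashLSH(inputCol='vectorized_char_lst', outputCol='hashes',
                numHashTables=100)
edges = mh.fit(non_empty).approxSimilarityJoin(non_empty, non_empty, 0.3)
```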
Thanks for the detailed explanation. What threshold are you using, and how are you reducing false negatives?
29,970,679
I have this simple code in Python: ``` import sys class Crawler(object): def __init__(self, num_of_runs): self.run_number = 1 self.num_of_runs = num_of_runs def single_run(self): #do stuff pass def run(self): while self.run_number <= self.num_of_runs: self.single_run() print self.run_number self.run_number += 1 if __name__ == "__main__": num_of_runs = sys.argv[1] crawler = Crawler(num_of_runs) crawler.run() ``` Then, I run it this way: `python path/crawler.py 10` From my understanding, it should loop 10 times and stop, right? Why doesn't it?
2015/04/30
[ "https://Stackoverflow.com/questions/29970679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3717289/" ]
``` num_of_runs = sys.argv[1] ``` `num_of_runs` is a string at that stage. ``` while self.run_number <= self.num_of_runs: ``` You are comparing a `string` and an `int` here. A simple way to fix this is to convert it to an int: ``` num_of_runs = int(sys.argv[1]) ``` Another way to deal with this is to use `argparse`. ``` import argparse parser = argparse.ArgumentParser(description='The program does bla and bla') parser.add_argument( 'my_int', type=int, help='an integer for the script' ) args = parser.parse_args() print args.my_int print type(args.my_int) ``` Now if you execute the script like this: ``` ./my_script.py 20 ``` The output is: > 20 Using `argparse` also gives you the -h option by default: ``` python my_script.py -h usage: i.py [-h] my_int The program does bla and bla positional arguments: my_int an integer for the script optional arguments: -h, --help show this help message and exit ``` For more information, have a look at the [argparse](https://docs.python.org/dev/library/argparse.html) documentation. Note: The code I have used is from the argparse documentation, but has been slightly modified.
When accepting input from the command line, data is passed as a string. You need to convert this value to an `int` before you pass it to your `Crawler` class: ``` num_of_runs = int(sys.argv[1]) ``` --- You can also use this to validate the input: if the value doesn't convert to an int, `int()` raises a `ValueError`.
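For illustration, a minimal sketch of that validation, assuming the same command-line usage as in the question:

```python
import sys

# Exit with a usage message if the argument is missing or not an integer.
try:
    num_of_runs = int(sys.argv[1])
except (IndexError, ValueError):
    sys.exit("usage: crawler.py <num_of_runs>")
```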
54,873,222
I have a scenario where the data in a text file looks like below: ``` first_id;"second_id";"name";"performer";"criteria" 12345;"13254";"abc";"def";"criteria_1" 65432;"13254";"abc";"ghi";"criteria_1" 24561;"13254";"abc";"pqr";"criteria_2" 24571;"13254";"abc";"jkl";"criteria_2" first_id;"second_id";"name";"performer";"criteria" 12345;"78452";"mno";"xyz";"criteria_1" 24561;"78452";"mno";"tuv";"criteria_2" so on.. ``` Note: the name column value remains the same within each result fetched, but the performer varies per row and has a criteria set. The second\_id column values are the same within each result fetched. For the above data, I need to capture the name and performer values and move them to an Excel sheet as comma-separated values, like the output below. The author value is based on the name column defined above, approver values are based on criteria\_1, and reviewer values are based on criteria\_2. ``` **author| approver| reviewer** --> columns in excel abc | def, ghi| pqr, jkl --> values corresponding to their columns ``` See the picture below for my expected output. The author comes from the "name" field defined above; the approver field is determined by "criteria" = criteria\_1, and the reviewer field by "criteria" = criteria\_2. [picture for output](https://i.stack.imgur.com/cvzyv.png) How can I write a Python script to produce the above output? Let me know if you need any further information.
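For reference, a minimal sketch of the grouping step the question describes. The field names follow the sample data; `input.txt` is a hypothetical path, and writing the result to Excel is left to a library such as `openpyxl`.

```python
import csv
from collections import defaultdict

# Group performers per (second_id, name): criteria_1 -> approver, criteria_2 -> reviewer.
groups = defaultdict(lambda: {'approver': [], 'reviewer': []})
with open('input.txt', newline='') as f:
    for rec in csv.DictReader(f, delimiter=';'):
        if rec['first_id'] == 'first_id':  # skip the repeated header rows
            continue
        g = groups[(rec['second_id'], rec['name'])]
        role = 'approver' if rec['criteria'] == 'criteria_1' else 'reviewer'
        g[role].append(rec['performer'])

for (second_id, name), g in groups.items():
    print(name, ', '.join(g['approver']), ', '.join(g['reviewer']), sep=' | ')
```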
2019/02/25
[ "https://Stackoverflow.com/questions/54873222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6909182/" ]
Here's a general answer that will scale up to as many data frames as you have: ``` library(dplyr) library(ggplot2) df_list = list(df5 = df5, df6 = df6) big_df = bind_rows(df_list, .id = "source") big_df = big_df %>% group_by(Year) %>% summarize_if(is.numeric, mean) %>% mutate(source = "Mean") %>% bind_rows(big_df) ggplot(big_df, aes(x = Year, y = Total, color = source)) + geom_line() ``` [![enter image description here](https://i.stack.imgur.com/vApcE.png)](https://i.stack.imgur.com/vApcE.png) Naming the `list` more appropriately will help with the plot labels. If you do have more data frames, I'd strongly recommend reading my answer at [How to make a list of data frames](https://stackoverflow.com/a/24376207/903061). Using this data: ``` df5 = read.table(text = "Year VegC LittC SoilfC SoilsC Total 1 2013 1.820858 1.704079 4.544182 1.964507 10.03363 2 2014 1.813573 1.722106 4.548287 1.964658 10.04863 3 2015 1.776853 1.722110 4.553425 1.964817 10.01722 4 2016 1.794462 1.691728 4.556691 1.964973 10.00785 5 2017 1.808207 1.708956 4.557116 1.965063 10.03936 6 2018 1.831758 1.728973 4.559844 1.965192 10.08578", header = T) df6 = read.table(text = " Year VegC LittC SoilfC SoilsC Total 1 2013 1.832084 1.736137 4.542052 1.964454 10.07474 2 2014 1.806351 1.741353 4.548349 1.964633 10.06069 3 2015 1.825316 1.729084 4.552433 1.964792 10.07164 4 2016 1.845673 1.735861 4.553766 1.964900 10.10020 5 2017 1.810343 1.754477 4.556542 1.965033 10.08640 6 2018 1.814503 1.728337 4.561960 1.965191 10.07001", header = T) ```
If I understand you right, you would like to plot the mean of the two series, e.g. 10.054185 for the year 2013. If you have one row per year, you can create a new column and add it to your existing ggplot: ``` df <- data.frame(Year = dataframe5$Year) df$total5 <- dataframe5$Total df$total6 <- dataframe6$Total df$totalmean <- (df$total5 + df$total6) / 2 ``` By plotting `df$totalmean` you should get the mean line. Just add the line with `+ geom_line(...)` to the existing ggplot.
41,951,204
I am new to python. I'm trying to connect my client with the broker. But I am getting an error "global name 'mqttClient' is not defined". Can anyone help me to what is wrong with my code. Here is my code, **Test.py** ``` #!/usr/bin/env python import time, threading import mqttConnector class UtilsThread(object): def __init__(self): thread = threading.Thread(target=self.run, args=()) thread.daemon = True # Daemonize thread thread.start() # Start the execution class SubscribeToMQTTQueue(object): def __init__(self): thread = threading.Thread(target=self.run, args=()) thread.daemon = True # Daemonize thread thread.start() # Start the execution def run(self): mqttConnector.main() def connectAndPushData(): PUSH_DATA = "xxx" mqttConnector.publish(PUSH_DATA) def main(): SubscribeToMQTTQueue() # connects and subscribes to an MQTT Queue that receives MQTT commands from the server LAST_TEMP = 25 try: if LAST_TEMP > 0: connectAndPushData() time.sleep(5000) except (KeyboardInterrupt, Exception) as e: print "Exception in RaspberryAgentThread (either KeyboardInterrupt or Other)" print ("STATS: " + str(e)) pass if __name__ == "__main__": main() ``` **mqttConnector.py** ``` #!/usr/bin/env python import time import paho.mqtt.client as mqtt def on_connect(client, userdata, flags, rc): print("MQTT_LISTENER: Connected with result code " + str(rc)) def on_message(client, userdata, msg): print 'MQTT_LISTENER: Message Received by Device' def on_publish(client, userdata, mid): print 'Temperature Data Published Succesfully' def publish(msg): # global mqttClient mqttClient.publish(TOPIC_TO_PUBLISH, msg) def main(): MQTT_IP = "IP" MQTT_PORT = "port" global TOPIC_TO_PUBLISH TOPIC_TO_PUBLISH = "xxx/laptop-management/001/data" global mqttClient mqttClient = mqtt.Client() mqttClient.on_connect = on_connect mqttClient.on_message = on_message mqttClient.on_publish = on_publish while True: try: mqttClient.connect(MQTT_IP, MQTT_PORT, 180) mqttClient.loop_forever() except (KeyboardInterrupt, Exception) as e: print "MQTT_LISTENER: Exception in MQTTServerThread (either KeyboardInterrupt or Other)" print ("MQTT_LISTENER: " + str(e)) mqttClient.disconnect() print "MQTT_LISTENER: " + time.asctime(), "Connection to Broker closed - %s:%s" % (MQTT_IP, MQTT_PORT) if __name__ == '__main__': main() ``` I'm getting this, ``` Exception in RaspberryAgentThread (either KeyboardInterrupt or Other) STATS: global name 'mqttClient' is not defined ```
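For context, a minimal sketch of one way to avoid the error: create the client at module import time in `mqttConnector.py`, so it exists before any thread can call `publish()`. This is an illustration of the race, not the only possible fix; note also that the broker port must be an `int`, not the placeholder string `"port"`.

```python
import paho.mqtt.client as mqtt

TOPIC_TO_PUBLISH = "xxx/laptop-management/001/data"

# Created at import time, so publish() can never run before the client exists.
mqttClient = mqtt.Client()

def publish(msg):
    mqttClient.publish(TOPIC_TO_PUBLISH, msg)

def main():
    # connect() expects an integer port, not the string "port".
    mqttClient.connect("IP", 1883, 180)
    mqttClient.loop_forever()
```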
2017/01/31
[ "https://Stackoverflow.com/questions/41951204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7134849/" ]
* **For MongoDB -** Use the AWS Quick Start for MongoDB <http://docs.aws.amazon.com/quickstart/latest/mongodb/overview.html> <http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html> * **For the rest of the Docker stack, i.e. NodeJS & Nginx -** Use AWS Elastic Beanstalk Multi-Container Deployment <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html> <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html>
Elastic Beanstalk supports Docker, as [documented here](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html). Elastic Beanstalk would manage the EC2 resources for you, which should make things a bit easier on you.
41,951,204
I am new to python. I'm trying to connect my client with the broker. But I am getting an error "global name 'mqttClient' is not defined". Can anyone help me to what is wrong with my code. Here is my code, **Test.py** ``` #!/usr/bin/env python import time, threading import mqttConnector class UtilsThread(object): def __init__(self): thread = threading.Thread(target=self.run, args=()) thread.daemon = True # Daemonize thread thread.start() # Start the execution class SubscribeToMQTTQueue(object): def __init__(self): thread = threading.Thread(target=self.run, args=()) thread.daemon = True # Daemonize thread thread.start() # Start the execution def run(self): mqttConnector.main() def connectAndPushData(): PUSH_DATA = "xxx" mqttConnector.publish(PUSH_DATA) def main(): SubscribeToMQTTQueue() # connects and subscribes to an MQTT Queue that receives MQTT commands from the server LAST_TEMP = 25 try: if LAST_TEMP > 0: connectAndPushData() time.sleep(5000) except (KeyboardInterrupt, Exception) as e: print "Exception in RaspberryAgentThread (either KeyboardInterrupt or Other)" print ("STATS: " + str(e)) pass if __name__ == "__main__": main() ``` **mqttConnector.py** ``` #!/usr/bin/env python import time import paho.mqtt.client as mqtt def on_connect(client, userdata, flags, rc): print("MQTT_LISTENER: Connected with result code " + str(rc)) def on_message(client, userdata, msg): print 'MQTT_LISTENER: Message Received by Device' def on_publish(client, userdata, mid): print 'Temperature Data Published Succesfully' def publish(msg): # global mqttClient mqttClient.publish(TOPIC_TO_PUBLISH, msg) def main(): MQTT_IP = "IP" MQTT_PORT = "port" global TOPIC_TO_PUBLISH TOPIC_TO_PUBLISH = "xxx/laptop-management/001/data" global mqttClient mqttClient = mqtt.Client() mqttClient.on_connect = on_connect mqttClient.on_message = on_message mqttClient.on_publish = on_publish while True: try: mqttClient.connect(MQTT_IP, MQTT_PORT, 180) mqttClient.loop_forever() except (KeyboardInterrupt, Exception) as e: print "MQTT_LISTENER: Exception in MQTTServerThread (either KeyboardInterrupt or Other)" print ("MQTT_LISTENER: " + str(e)) mqttClient.disconnect() print "MQTT_LISTENER: " + time.asctime(), "Connection to Broker closed - %s:%s" % (MQTT_IP, MQTT_PORT) if __name__ == '__main__': main() ``` I'm getting this, ``` Exception in RaspberryAgentThread (either KeyboardInterrupt or Other) STATS: global name 'mqttClient' is not defined ```
2017/01/31
[ "https://Stackoverflow.com/questions/41951204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7134849/" ]
* **For MongoDB -** Use the AWS Quick Start for MongoDB <http://docs.aws.amazon.com/quickstart/latest/mongodb/overview.html> <http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html> * **For the rest of the Docker stack, i.e. NodeJS & Nginx -** Use AWS Elastic Beanstalk Multi-Container Deployment <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html> <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html>
You can install [Kontena](https://www.kontena.io) on [AWS](https://kontena.io/docs/getting-started/installing/aws-ec2.html) and use it to deploy your application to a production environment (other cloud providers are also supported, of course). The transition from Docker Compose is very smooth, since [kontena.yml](http://kontena.io/docs/references/kontena-yml.html) uses similar syntax and keys to docker-compose.yml. With Kontena you get a private image registry, a load balancer and secret management built in, all of which are very useful when running containers in production.