25,888,828
I'm trying to make a script which takes all rows starting with 'HELIX', 'SHEET' and 'DBREF' from a .txt, takes some specific columns from those rows and then saves the results to a new file. ``` #!/usr/bin/python import sys if len(sys.argv) != 3: print("2 Parameters expected: You must introduce your pdb file and a name for output file.") exit() for line in open(sys.argv[1]): if 'HELIX' in line: helix = line.split() cols_h = helix[0], helix[3:6:2], helix[6:9:2] elif 'SHEET' in line: sheet = line.split() cols_s = sheet[0], sheet[4:7:2], sheet[7:10:2], sheet[12:15:2], sheet[16:19:2] elif 'DBREF' in line: dbref = line.split() cols_id = dbref[0], dbref[3:5], dbref[8:10] modified_data = open(sys.argv[2],'w') modified_data.write(cols_id) modified_data.write(cols_h) modified_data.write(cols_s) ``` My problem is that when I try to write my final results it gives this error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(cols_id) TypeError: expected a character buffer object ``` When I try to convert to a string using ''.join() it returns another error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(' '.join(cols_id)) TypeError: sequence item 1: expected string, list found ``` What am I doing wrong? Also, if there is some easy way to simplify my code, that would be great. PS: I'm no programmer, so I'll probably need some explanation if you do something... Thank you very much.
2014/09/17
[ "https://Stackoverflow.com/questions/25888828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027271/" ]
Here is a solution (untested) that separates data and code a little more. There is a data structure (`keyword_and_slices`) describing the keywords searched in the lines, paired with the slices to be taken for the result. The code then goes through the lines and builds a data structure (`keyword2lines`) mapping each keyword to the result lines for that keyword. At the end the collected lines for each keyword are written to the result file. ``` import sys from collections import defaultdict def main(): if len(sys.argv) != 3: print( '2 Parameters expected: You must introduce your pdb file' ' and a name for output file.' ) sys.exit(1) input_filename, output_filename = sys.argv[1:3] # # Pairs of keywords and slices that should be taken from the line # starting with the respective keyword. # keyword_and_slices = [ ('HELIX', [slice(3, 6, 2), slice(6, 9, 2)]), ( 'SHEET', [slice(a, b, 2) for a, b in [(4, 7), (7, 10), (12, 15), (16, 19)]] ), ('DBREF', [slice(3, 5), slice(8, 10)]), ] keyword2lines = defaultdict(list) with open(input_filename, 'r') as lines: for line in lines: for keyword, slices in keyword_and_slices: if line.startswith(keyword): parts = line.split() result_line = [keyword] for index in slices: result_line.extend(parts[index]) keyword2lines[keyword].append(' '.join(result_line) + '\n') with open(output_filename, 'w') as out_file: for keyword in ['DBREF', 'HELIX', 'SHEET']: out_file.writelines(keyword2lines[keyword]) if __name__ == '__main__': main() ``` The code follows your text in checking if a line *starts* with a keyword, unlike your code, which checks if a keyword appears *anywhere* within the line. It also makes sure all files are closed properly by using the `with` statement.
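If `slice` objects are unfamiliar, here is a tiny sketch of how the slices above behave (the data below is illustrative, not real PDB output):

```python
# A slice object stores start/stop/step and can be used anywhere an
# ordinary subscript can, which lets the slices live in a data structure.
parts = ['HELIX', '1', 'A', 'SER', 'A', '14', 'GLY', 'A', '26']
s = slice(3, 6, 2)
print(parts[s])        # ['SER', '14'] - same as parts[3:6:2]
```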
You need to turn the tuple built on the RHS of your assignment into a string, and because two of its elements are lists, you have to flatten it first: ``` # Replace this cols_id = dbref[0], dbref[3:5], dbref[8:10] # with a flattened, joined string (dbref[3:5] and dbref[8:10] are lists, so joining the tuple directly raises the second error you saw) cols_id = ' '.join([dbref[0]] + dbref[3:5] + dbref[8:10]) ```
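To see why the flattening matters, a quick sketch with made-up `split()` output (the field values are illustrative):

```python
# Hypothetical result of line.split() for a DBREF record.
dbref = ['DBREF', '1ABC', 'A', '1', '148', 'UNP', 'P12345', 'NAME', 'X', 'Y']

# dbref[3:5] and dbref[8:10] are lists, so joining the original tuple
# directly raises "sequence item 1: expected string, list found".
cols_id = ' '.join([dbref[0]] + dbref[3:5] + dbref[8:10])
print(cols_id)  # DBREF 1 148 X Y
```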
25,888,828
I'm trying to make a script which takes all rows starting with 'HELIX', 'SHEET' and 'DBREF' from a .txt, takes some specific columns from those rows and then saves the results to a new file. ``` #!/usr/bin/python import sys if len(sys.argv) != 3: print("2 Parameters expected: You must introduce your pdb file and a name for output file.") exit() for line in open(sys.argv[1]): if 'HELIX' in line: helix = line.split() cols_h = helix[0], helix[3:6:2], helix[6:9:2] elif 'SHEET' in line: sheet = line.split() cols_s = sheet[0], sheet[4:7:2], sheet[7:10:2], sheet[12:15:2], sheet[16:19:2] elif 'DBREF' in line: dbref = line.split() cols_id = dbref[0], dbref[3:5], dbref[8:10] modified_data = open(sys.argv[2],'w') modified_data.write(cols_id) modified_data.write(cols_h) modified_data.write(cols_s) ``` My problem is that when I try to write my final results it gives this error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(cols_id) TypeError: expected a character buffer object ``` When I try to convert to a string using ''.join() it returns another error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(' '.join(cols_id)) TypeError: sequence item 1: expected string, list found ``` What am I doing wrong? Also, if there is some easy way to simplify my code, that would be great. PS: I'm no programmer, so I'll probably need some explanation if you do something... Thank you very much.
2014/09/17
[ "https://Stackoverflow.com/questions/25888828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027271/" ]
cols\_id, cols\_h and cols\_s are tuples (containing lists), not strings. You can only write a string to your file, so you have to convert them to strings, flattening the nested lists along the way. `'!'.join(a_list_of_things)` converts a list of strings into a single string, separating each element with an exclamation mark. EDIT: ``` #!/usr/bin/python import sys if len(sys.argv) != 3: print("2 Parameters expected: You must introduce your pdb file and a name for output file.") exit() cols_h, cols_s, cols_id = [], [], [] for line in open(sys.argv[1]): if 'HELIX' in line: helix = line.split() cols_h.append(' '.join([helix[0]] + helix[3:6:2] + helix[6:9:2]) + '\n') elif 'SHEET' in line: sheet = line.split() cols_s.append(' '.join([sheet[0]] + sheet[4:7:2] + sheet[7:10:2] + sheet[12:15:2] + sheet[16:19:2]) + '\n') elif 'DBREF' in line: dbref = line.split() cols_id.append(' '.join([dbref[0]] + dbref[3:5] + dbref[8:10]) + '\n') modified_data = open(sys.argv[2],'w') for col in (cols_id, cols_h, cols_s): modified_data.writelines(col) modified_data.close() ```
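One pitfall worth calling out when collecting results into several lists: initializing them in one statement requires one value per name, otherwise the unpacking fails. A small sketch:

```python
# Unpacking requires exactly one value per target name.
try:
    cols_h, cols_s, cols_id = []   # raises ValueError: not enough values
except ValueError as err:
    print('unpacking failed:', err)

cols_h, cols_s, cols_id = [], [], []   # correct: three independent lists
cols_h.append('HELIX ...')
print(cols_s)  # [] - the other lists are unaffected
```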
Here is a solution (untested) that separates data and code a little more. There is a data structure (`keyword_and_slices`) describing the keywords searched in the lines paired with the slices to be taken for the result. The code then goes through the lines and builds a data structure (`keyword2lines`) mapping the keyword to the result lines for that keyword. At the end the collected lines for each keyword are written to the result file. ``` import sys from collections import defaultdict def main(): if len(sys.argv) != 3: print( '2 Parameters expected: You must introduce your pdb file' ' and a name for output file.' ) sys.exit(1) input_filename, output_filename = sys.argv[1:3] # # Pairs of keywords and slices that should be taken from the line # starting with the respective keyword. # keyword_and_slices = [ ('HELIX', [slice(3, 6, 2), slice(6, 9, 2)]), ( 'SHEET', [slice(a, b, 2) for a, b in [(4, 7), (7, 10), (12, 15), (16, 19)]] ), ('DBREF', [slice(3, 5), slice(8, 10)]), ] keyword2lines = defaultdict(list) with open(input_filename, 'r') as lines: for line in lines: for keyword, slices in keyword_and_slices: if line.startswith(keyword): parts = line.split() result_line = [keyword] for index in slices: result_line.extend(parts[index]) keyword2lines[keyword].append(' '.join(result_line) + '\n') with open(output_filename, 'w') as out_file: for keyword in ['DBREF', 'HELIX', 'SHEET']: out_file.writelines(keyword2lines[keyword]) if __name__ == '__main__': main() ``` The code follows your text in checking if a line *starts* with a keyword, instead your code which checks if a keyword is *anywhere* within a line. It also makes sure all files are closed properly by using the `with` statement.
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
According to the added code, you mean that words are adjacent? Why not just put them together: ``` print len(re.findall(r'\bmaki sushi\b', sent)) ```
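A runnable sketch of that adjacent-pair count (Python 3 `print`; the word boundaries keep e.g. 'makim sushi' from matching):

```python
import re

sent = 'I like to eat maki sushi and the best sushi is in Japan'
# \b anchors the match to whole words on both sides.
count = len(re.findall(r'\bmaki sushi\b', sent))
print(count)  # 1
```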
``` def ordered(string, words): pos = [string.index(word) for word in words] return pos == sorted(pos) s = "I like to eat maki sushi and the best sushi is in Japan" w = ["maki", "sushi"] ordered(s, w) #Returns True. ``` Not exactly the most efficient way of doing it but simpler to understand.
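One caveat: `string.index` matches substrings, not whole words, so a word embedded in a longer word can fool the check. A word-level variant (a sketch) splits the sentence first:

```python
def ordered(string, words):
    # Substring positions: 'maki' inside 'makim' still counts as a hit.
    pos = [string.index(word) for word in words]
    return pos == sorted(pos)

def ordered_words(string, words):
    # Word positions: raises ValueError if a word is absent as a whole word.
    tokens = string.split()
    pos = [tokens.index(word) for word in words]
    return pos == sorted(pos)

s = 'I like makim more than sushi'
print(ordered(s, ['maki', 'sushi']))   # True - the substring match misleads
```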
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
According to the added code, you mean that words are adjacent? Why not just put them together: ``` print len(re.findall(r'\bmaki sushi\b', sent)) ```
``` s = 'I like to eat maki sushi and the best sushi is in Japan' ``` **check order** ``` indices = [s.split().index(w) for w in ['maki', 'sushi']] sorted(indices) == indices ``` **how to count** ``` s.split().count('maki') ``` --- Note (based on discussion below): suppose the sentence is *'I like makim more than sushi or maki'*. Realizing that *makim* is another word than *maki*, the word *maki* is placed after *sushi* and occurs only once in the sentence. To detect this and count correctly, the sentence **must be split** over the spaces into the actual words.
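Running the note's tricky sentence through the split-based checks (a quick sketch):

```python
s = 'I like makim more than sushi or maki'
words = s.split()
print(words.count('maki'))   # 1 - 'makim' is a different word
# As the note says, the standalone word 'maki' only appears after 'sushi'.
print(words.index('sushi') < words.index('maki'))   # True: sushi comes first
```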
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
``` s = 'I like to eat maki sushi and the best sushi is in Japan' ``` **check order** ``` indices = [s.split().index(w) for w in ['maki', 'sushi']] sorted(indices) == indices ``` **how to count** ``` s.split().count('maki') ``` --- Note (based on discussion below): suppose the sentence is *'I like makim more than sushi or maki'*. Realizing that *makim* is another word than *maki*, the word *maki* is placed after *sushi* and occurs only once in the sentence. To detect this and count correctly, the sentence **must be split** over the spaces into the actual words.
Just an idea, it might need some more work ``` (sentence.index('maki') <= sentence.index('sushi')) == ('maki' <= 'sushi') ```
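Since `'maki' <= 'sushi'` is always `True`, the expression effectively reduces to checking that `maki` occurs before `sushi` in the text. A sketch of both outcomes:

```python
sentence = 'I like to eat maki sushi and the best sushi is in Japan'
print((sentence.index('maki') <= sentence.index('sushi')) == ('maki' <= 'sushi'))  # True

# With the words swapped in the text, the same expression is False.
swapped = 'sushi first, then maki'
print((swapped.index('maki') <= swapped.index('sushi')) == ('maki' <= 'sushi'))   # False
```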
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
According to the added code, you mean that words are adjacent? Why not just put them together: ``` print len(re.findall(r'\bmaki sushi\b', sent)) ```
A regex solution :) ``` import re sent = 'I like to eat maki sushi and the best sushi is in Japan' words = sorted(['maki', 'sushi']) assert re.search(r'\b%s\b' % r'\b.*\b'.join(words), sent) ```
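For the example words the built pattern comes out as `\bmaki\b.*\bsushi\b`; a sketch tracing it:

```python
import re

words = sorted(['maki', 'sushi'])
# Join the words with \b.*\b and wrap the whole thing in word boundaries.
pattern = r'\b%s\b' % r'\b.*\b'.join(words)
print(pattern)  # \bmaki\b.*\bsushi\b

sent = 'I like to eat maki sushi and the best sushi is in Japan'
print(re.search(pattern, sent) is not None)  # True
```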
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
According to the added code, you mean that words are adjacent? Why not just put them together: ``` print len(re.findall(r'\bmaki sushi\b', sent)) ```
if res > 0: words are sorted in the sentence ``` words = ["sushi", "maki", "xxx"] sorted_words = sorted(words) sen = " I like to eat maki sushi and the best sushi is in Japan xxx"; ind = map(lambda x : sen.index(x), sorted_words) res = reduce(lambda a, b: b-a, ind) ```
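The snippet above is Python 2; in Python 3, `reduce` moves to `functools` and `map` is lazy. A sketch of the same idea for two words:

```python
from functools import reduce

words = ['sushi', 'maki']
sorted_words = sorted(words)                  # ['maki', 'sushi']
sen = 'I like to eat maki sushi and the best sushi is in Japan'
ind = list(map(sen.index, sorted_words))      # character positions
res = reduce(lambda a, b: b - a, ind)
print(res > 0)  # True - the words appear in sorted order
```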
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
``` s = 'I like to eat maki sushi and the best sushi is in Japan' ``` **check order** ``` indices = [s.split().index(w) for w in ['maki', 'sushi']] sorted(indices) == indices ``` **how to count** ``` s.split().count('maki') ``` --- Note (based on discussion below): suppose the sentence is *'I like makim more than sushi or maki'*. Realizing that *makim* is another word than *maki*, the word *maki* is placed after *sushi* and occurs only once in the sentence. To detect this and count correctly, the sentence **must be split** over the spaces into the actual words.
if res > 0: words are sorted in the sentence ``` words = ["sushi", "maki", "xxx"] sorted_words = sorted(words) sen = " I like to eat maki sushi and the best sushi is in Japan xxx"; ind = map(lambda x : sen.index(x), sorted_words) res = reduce(lambda a, b: b-a, ind) ```
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
According to the added code, you mean that words are adjacent? Why not just put them together: ``` print len(re.findall(r'\bmaki sushi\b', sent)) ```
Just an idea, it might need some more work ``` (sentence.index('maki') <= sentence.index('sushi')) == ('maki' <= 'sushi') ```
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
``` s = 'I like to eat maki sushi and the best sushi is in Japan' ``` **check order** ``` indices = [s.split().index(w) for w in ['maki', 'sushi']] sorted(indices) == indices ``` **how to count** ``` s.split().count('maki') ``` --- Note (based on discussion below): suppose the sentence is *'I like makim more than sushi or maki'*. Realizing that *makim* is another word than *maki*, the word *maki* is placed after *sushi* and occurs only once in the sentence. To detect this and count correctly, the sentence **must be split** over the spaces into the actual words.
A regex solution :) ``` import re sent = 'I like to eat maki sushi and the best sushi is in Japan' words = sorted(['maki', 'sushi']) assert re.search(r'\b%s\b' % r'\b.*\b'.join(words), sent) ```
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
``` def ordered(string, words): pos = [string.index(word) for word in words] return pos == sorted(pos) s = "I like to eat maki sushi and the best sushi is in Japan" w = ["maki", "sushi"] ordered(s, w) #Returns True. ``` Not exactly the most efficient way of doing it but simpler to understand.
Just an idea, it might need some more work ``` (sentence.index('maki') <= sentence.index('sushi')) == ('maki' <= 'sushi') ```
7,234,518
What is the best way to check if two words appear in order in a sentence, and how many times that occurs, in Python? For example: I like to eat maki sushi and the best sushi is in Japan. The words are: [maki, sushi] Thanks. The code ``` import re x="I like to eat maki sushi and the best sushi is in Japan" x1 = re.split('\W+',x) l1 = [i for i,m in enumerate(x1) if m == "maki"] l2 = [i for i,m in enumerate(x1) if m == "sushi"] ordered = [] for i in l1: for j in l2: if j == i+1: ordered.append((i,j)) print ordered ```
2011/08/29
[ "https://Stackoverflow.com/questions/7234518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461736/" ]
``` def ordered(string, words): pos = [string.index(word) for word in words] return pos == sorted(pos) s = "I like to eat maki sushi and the best sushi is in Japan" w = ["maki", "sushi"] ordered(s, w) #Returns True. ``` Not exactly the most efficient way of doing it but simpler to understand.
if res > 0: words are sorted in the sentence ``` words = ["sushi", "maki", "xxx"] sorted_words = sorted(words) sen = " I like to eat maki sushi and the best sushi is in Japan xxx"; ind = map(lambda x : sen.index(x), sorted_words) res = reduce(lambda a, b: b-a, ind) ```
64,348,889
There's some code I found on the internet that is supposed to give my machine's local network IP address: ``` hostname = socket.gethostname() local_ip = socket.gethostbyname(hostname) ``` but the IP it returns is 192.168.94.2, while my IP address on the WiFi network is actually 192.168.1.107. How can I get only the WiFi network's local IP address with only Python? I want it to work on Windows, Linux and macOS.
2020/10/14
[ "https://Stackoverflow.com/questions/64348889", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3848316/" ]
You can use this code: ``` import socket hostname = socket.getfqdn() print("IP Address:",socket.gethostbyname_ex(hostname)[2][1]) ``` or this to get public ip: ``` import requests import json print(json.loads(requests.get("https://ip.seeip.org/jsonip?").text)["ip"]) ```
Here's code from the `whatismyip` Python module that can grab it from public websites: ``` import urllib.request IP_WEBSITES = ( 'https://ipinfo.io/ip', 'https://ipecho.net/plain', 'https://api.ipify.org', 'https://ipaddr.site', 'https://icanhazip.com', 'https://ident.me', 'https://curlmyip.net', ) def getIp(): for ipWebsite in IP_WEBSITES: try: response = urllib.request.urlopen(ipWebsite) charsets = response.info().get_charsets() if len(charsets) == 0 or charsets[0] is None: charset = 'utf-8' # Use utf-8 by default else: charset = charsets[0] userIp = response.read().decode(charset).strip() return userIp except: pass # Network error, just continue on to next website. # Either all of the websites are down or returned invalid response # (unlikely) or you are disconnected from the internet. return None print(getIp()) ``` Or you can install `pip install whatismyip` and then call `whatismyip.whatismyip()`.
5,484,098
I'm new to Python. I'm writing a simple class but I'm getting an error. My class: ``` import config # Configuration file import twitter import random import sqlite3 import time import bitly_api # https://github.com/bitly/bitly-api-python class TwitterC: def logToDatabase(self, tweet, timestamp): # Will log to the database database = sqlite3.connect('database.db') # Create a database file cursor = database.cursor() # Create a cursor cursor.execute("CREATE TABLE IF NOT EXISTS twitter(id_tweet INTEGER AUTO_INCREMENT PRIMARY KEY, tweet TEXT, timestamp TEXT);") # Make a table # Assign the values for the insert msg_ins = tweet timestamp_ins = timestamp values = [msg_ins, timestamp_ins] # Insert data into the table cursor.execute("INSERT INTO twitter(tweet, timestamp) VALUES(?, ?)", values) database.commit() # Save our changes database.close() # Close the connection to the database def shortUrl(self, url): bit = bitly_api.Connection(config.bitly_username, config.bitly_key) # Instantiate the API return bit.shorten(url) # Shorten the URL def updateTwitterStatus(self, update): short = self.shortUrl(update["url"]) # Shorten the URL update = update["msg"] + short['url'] # Will post to twitter and print the posted text twitter_api = twitter.Api(consumer_key=config.twitter_consumer_key, consumer_secret=config.twitter_consumer_secret, access_token_key=config.twitter_access_token_key, access_token_secret=config.twitter_consumer_secret) status = twitter_api.PostUpdate(update) # Post the update msg = status.text # Save the posted text to 'msg' # Log it to the database self.logToDatabase(msg, time.time()) print msg x = TwitterC() x.updateTwitterStatus([{"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "}]) ``` The error is: ``` Traceback (most recent call last): File "C:\Documents and Settings\anlopes\workspace\redes_soc\src\twitterC.py", line 42, in <module> x.updateTwitterStatus([{"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "}]) File "C:\Documents and Settings\anlopes\workspace\redes_soc\src\twitterC.py", line 28, in updateTwitterStatus short = self.shortUrl(update["url"]) # Shorten the URL TypeError: list indices must be integers, not str ``` Any clues on how to solve it? Best regards,
2011/03/30
[ "https://Stackoverflow.com/questions/5484098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/488735/" ]
It looks like your call to updateTwitterStatus just needs to lose the square brackets: ``` x.updateTwitterStatus({"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "}) ``` You were passing a list with a single dictionary element. It looks as though the method just requires a dictionary with "url" and "msg" keys. In Python, `{...}` creates a dictionary, and `[...]` creates a list.
The error message tells you everything you need to know. It says "list indices must be integers, not str" and points to the code `short = self.shortUrl(update["url"])`. So obviously the python interpreter thinks `update` is a list, and `"url"` is not a valid index into the list. Since `update` is passed in as a parameter we have to see where it came from. It looks like `[{...}]`, which means it's a list with a single dictionary inside. Presumably you intended to pass just the dictionary, so remove the square brackets when calling `x.updateTwitterStatus` The first rule of debugging is to assume that the error message is correct, and that you should take it literally.
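The difference in one runnable sketch (Python 3 syntax):

```python
record = {'url': 'http://xxxx.com/?cat=31', 'msg': 'See some strings..., '}   # a dict
wrapped = [record]              # what was actually passed: a one-element list

print(record['url'])            # dicts take string keys
print(wrapped[0]['url'])        # lists need an integer index first

try:
    wrapped['url']              # exactly the reported error
except TypeError as err:
    print('TypeError:', err)
```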
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
You can use an external program, [`xsel`](http://www.vergenet.net/~conrad/software/xsel/): ``` from subprocess import Popen, PIPE p = Popen(['xsel','-pi'], stdin=PIPE) p.communicate(input='Hello, World') ``` With `xsel`, you can set the clipboard you want to work on. * `-p` works with the `PRIMARY` selection. That's the middle click one. * `-s` works with the `SECONDARY` selection. I don't know if this is used anymore. * `-b` works with the `CLIPBOARD` selection. That's your `Ctrl + V` one. Read more about X's clipboards [here](http://standards.freedesktop.org/clipboards-spec/clipboards-latest.txt) and [here](https://superuser.com/questions/200444/why-do-we-have-3-types-of-x-selections-in-linux). A quick and dirty function I created to handle this: ``` def paste(str, p=True, c=True): from subprocess import Popen, PIPE if p: p = Popen(['xsel', '-pi'], stdin=PIPE) p.communicate(input=str) if c: p = Popen(['xsel', '-bi'], stdin=PIPE) p.communicate(input=str) paste('Hello', False) # pastes to CLIPBOARD only paste('Hello', c=False) # pastes to PRIMARY only paste('Hello') # pastes to both ``` --- You can also try pyGTK's [`clipboard`](http://www.pygtk.org/docs/pygtk/class-gtkclipboard.html) : ``` import pygtk pygtk.require('2.0') import gtk clipboard = gtk.clipboard_get() clipboard.set_text('Hello, World') clipboard.store() ``` This works with the `Ctrl + V` selection for me.
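On Python 3, `communicate` expects bytes unless `text=True` is passed. The same stdin-pipe pattern, demonstrated here with `cat` so it runs even where `xsel` is not installed; swap `['cat']` for `['xsel', '-bi']` for the real thing:

```python
from subprocess import Popen, PIPE

# 'cat' stands in for 'xsel -pi' / 'xsel -bi'; the piping mechanics are identical.
p = Popen(['cat'], stdin=PIPE, stdout=PIPE, text=True)
out, _ = p.communicate(input='Hello, World')
print(out)  # Hello, World
```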
This is not really a Python question but a shell question. You already can send the output of a Python script (or any command) to the clipboard instead of standard out, by piping the output of the Python script into the `xclip` command. ``` myscript.py | xclip ``` If `xclip` is not already installed on your system (it isn't by default), this is how you get it: ``` sudo apt-get install xclip ``` If you wanted to do it directly from your Python script I guess you could shell out and run the xclip command using `os.system()` which is simple but deprecated. There are a number of ways to do this (see the `subprocess` module for the current official way). The command you'd want to execute is something like: ``` echo -n /path/goes/here | xclip ``` Bonus: Under Mac OS X, you can do the same thing by piping into `pbcopy`.
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
You can use an external program, [`xsel`](http://www.vergenet.net/~conrad/software/xsel/): ``` from subprocess import Popen, PIPE p = Popen(['xsel','-pi'], stdin=PIPE) p.communicate(input='Hello, World') ``` With `xsel`, you can set the clipboard you want to work on. * `-p` works with the `PRIMARY` selection. That's the middle click one. * `-s` works with the `SECONDARY` selection. I don't know if this is used anymore. * `-b` works with the `CLIPBOARD` selection. That's your `Ctrl + V` one. Read more about X's clipboards [here](http://standards.freedesktop.org/clipboards-spec/clipboards-latest.txt) and [here](https://superuser.com/questions/200444/why-do-we-have-3-types-of-x-selections-in-linux). A quick and dirty function I created to handle this: ``` def paste(str, p=True, c=True): from subprocess import Popen, PIPE if p: p = Popen(['xsel', '-pi'], stdin=PIPE) p.communicate(input=str) if c: p = Popen(['xsel', '-bi'], stdin=PIPE) p.communicate(input=str) paste('Hello', False) # pastes to CLIPBOARD only paste('Hello', c=False) # pastes to PRIMARY only paste('Hello') # pastes to both ``` --- You can also try pyGTK's [`clipboard`](http://www.pygtk.org/docs/pygtk/class-gtkclipboard.html) : ``` import pygtk pygtk.require('2.0') import gtk clipboard = gtk.clipboard_get() clipboard.set_text('Hello, World') clipboard.store() ``` This works with the `Ctrl + V` selection for me.
As others have pointed out, this is not "Python and batteries" territory, as it involves GUI operations, so it is platform dependent. If you are on Windows you can use the win32 Python module and access the win32 clipboard operations. My suggestion, though, would be to pick one GUI toolkit (PyQt/PySide for Qt, PyGTK for GTK+ or wxPython for wxWidgets) and use its clipboard operations. If you don't need the heavyweight features of the toolkits, write your own wrapper that uses the win32 package on Windows and whatever is available on the other platforms, and switch accordingly! For wxPython here are some helpful links: <http://www.wxpython.org/docs/api/wx.Clipboard-class.html> <http://wiki.wxpython.org/ClipBoard> <http://www.python-forum.org/pythonforum/viewtopic.php?f=1&t=25549>
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
You can use an external program, [`xsel`](http://www.vergenet.net/~conrad/software/xsel/): ``` from subprocess import Popen, PIPE p = Popen(['xsel','-pi'], stdin=PIPE) p.communicate(input='Hello, World') ``` With `xsel`, you can set the clipboard you want to work on. * `-p` works with the `PRIMARY` selection. That's the middle click one. * `-s` works with the `SECONDARY` selection. I don't know if this is used anymore. * `-b` works with the `CLIPBOARD` selection. That's your `Ctrl + V` one. Read more about X's clipboards [here](http://standards.freedesktop.org/clipboards-spec/clipboards-latest.txt) and [here](https://superuser.com/questions/200444/why-do-we-have-3-types-of-x-selections-in-linux). A quick and dirty function I created to handle this: ``` def paste(str, p=True, c=True): from subprocess import Popen, PIPE if p: p = Popen(['xsel', '-pi'], stdin=PIPE) p.communicate(input=str) if c: p = Popen(['xsel', '-bi'], stdin=PIPE) p.communicate(input=str) paste('Hello', False) # pastes to CLIPBOARD only paste('Hello', c=False) # pastes to PRIMARY only paste('Hello') # pastes to both ``` --- You can also try pyGTK's [`clipboard`](http://www.pygtk.org/docs/pygtk/class-gtkclipboard.html) : ``` import pygtk pygtk.require('2.0') import gtk clipboard = gtk.clipboard_get() clipboard.set_text('Hello, World') clipboard.store() ``` This works with the `Ctrl + V` selection for me.
As it was posted in another [answer](https://stackoverflow.com/a/11063483/212112), if you want to solve that within python, you can use [Pyperclip](https://pypi.python.org/pypi/pyperclip) which has the added benefit of being cross-platform. ``` >>> import pyperclip >>> pyperclip.copy('The text to be copied to the clipboard.') >>> pyperclip.paste() 'The text to be copied to the clipboard.' ```
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
This is not really a Python question but a shell question. You already can send the output of a Python script (or any command) to the clipboard instead of standard out, by piping the output of the Python script into the `xclip` command. ``` myscript.py | xclip ``` If `xclip` is not already installed on your system (it isn't by default), this is how you get it: ``` sudo apt-get install xclip ``` If you wanted to do it directly from your Python script I guess you could shell out and run the xclip command using `os.system()` which is simple but deprecated. There are a number of ways to do this (see the `subprocess` module for the current official way). The command you'd want to execute is something like: ``` echo -n /path/goes/here | xclip ``` Bonus: Under Mac OS X, you can do the same thing by piping into `pbcopy`.
As others have pointed out, this is not a "Python and batteries" task, as it involves GUI operations, so it is platform dependent. If you are on Windows you can use the win32 (pywin32) module and access the win32 clipboard operations. My suggestion, though, would be to pick one GUI toolkit (PyQt/PySide for Qt, PyGTK for GTK+, or wxPython for wxWidgets) and then use its clipboard operations. If you don't need the heavyweight parts of a toolkit, write your own wrapper that uses the win32 package on Windows and whatever is available on other platforms, and switch accordingly. For wxPython here are some helpful links: <http://www.wxpython.org/docs/api/wx.Clipboard-class.html> <http://wiki.wxpython.org/ClipBoard> <http://www.python-forum.org/pythonforum/viewtopic.php?f=1&t=25549>
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
As it was posted in another [answer](https://stackoverflow.com/a/11063483/212112), if you want to solve that within python, you can use [Pyperclip](https://pypi.python.org/pypi/pyperclip) which has the added benefit of being cross-platform. ``` >>> import pyperclip >>> pyperclip.copy('The text to be copied to the clipboard.') >>> pyperclip.paste() 'The text to be copied to the clipboard.' ```
This is not really a Python question but a shell question. You already can send the output of a Python script (or any command) to the clipboard instead of standard out, by piping the output of the Python script into the `xclip` command. ``` myscript.py | xclip ``` If `xclip` is not already installed on your system (it isn't by default), this is how you get it: ``` sudo apt-get install xclip ``` If you wanted to do it directly from your Python script I guess you could shell out and run the xclip command using `os.system()` which is simple but deprecated. There are a number of ways to do this (see the `subprocess` module for the current official way). The command you'd want to execute is something like: ``` echo -n /path/goes/here | xclip ``` Bonus: Under Mac OS X, you can do the same thing by piping into `pbcopy`.
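The xclip answer above mentions shelling out with the deprecated `os.system()`; a `subprocess`-based sketch of the same pipe is below. The helper command is an assumption (`xclip` on X11, `pbcopy` on macOS) and must be installed; any command that reads stdin works the same way.

```python
import subprocess

def copy_to_clipboard(text, command=("xclip", "-selection", "clipboard")):
    # Pipe the text into the helper's stdin, like `echo -n ... | xclip`.
    result = subprocess.run(command, input=text.encode("utf-8"))
    return result.returncode  # 0 on success
```

On macOS, pass `command=("pbcopy",)` instead.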
7,606,062
For example, if a python script will spit out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it directly sent to the system clipboard rather than `STDOUT`.
2011/09/30
[ "https://Stackoverflow.com/questions/7606062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/560844/" ]
As it was posted in another [answer](https://stackoverflow.com/a/11063483/212112), if you want to solve that within python, you can use [Pyperclip](https://pypi.python.org/pypi/pyperclip) which has the added benefit of being cross-platform. ``` >>> import pyperclip >>> pyperclip.copy('The text to be copied to the clipboard.') >>> pyperclip.paste() 'The text to be copied to the clipboard.' ```
As others have pointed out, this is not a "Python and batteries" task, as it involves GUI operations, so it is platform dependent. If you are on Windows you can use the win32 (pywin32) module and access the win32 clipboard operations. My suggestion, though, would be to pick one GUI toolkit (PyQt/PySide for Qt, PyGTK for GTK+, or wxPython for wxWidgets) and then use its clipboard operations. If you don't need the heavyweight parts of a toolkit, write your own wrapper that uses the win32 package on Windows and whatever is available on other platforms, and switch accordingly. For wxPython here are some helpful links: <http://www.wxpython.org/docs/api/wx.Clipboard-class.html> <http://wiki.wxpython.org/ClipBoard> <http://www.python-forum.org/pythonforum/viewtopic.php?f=1&t=25549>
46,062,117
Following [the plotly directions](https://plot.ly/python/distplot/), I would like to plot something similar to the following code: ``` import plotly.plotly as py import plotly.figure_factory as ff import numpy as np # Add histogram data x1 = np.random.randn(200) - 2 x2 = np.random.randn(200) x3 = np.random.randn(200) + 2 x4 = np.random.randn(200) + 4 # Group data together hist_data = [x1, x2, x3, x4] group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4'] # Create distplot with custom bin_size fig = ff.create_distplot(hist_data, group_labels, bin_size = [.1, .25, .5, 1]) # Plot! py.iplot(fig, filename = 'Distplot with Multiple Bin Sizes') ``` However, I have a real world dataset that is uneven in sample size (i.e. count of group 1 is different than count in group 2, etc.). Furthermore, it's in name-value pair format. Here is some dummy data to illustrate: ``` # Add histogram data x1 = pd.DataFrame(np.random.randn(100)) x1['name'] = 'x1' x2 = pd.DataFrame(np.random.randn(200) + 1) x2['name'] = 'x2' x3 = pd.DataFrame(np.random.randn(300) - 1) x3['name'] = 'x3' df = pd.concat([x1, x2, x3]) df = df.reset_index(drop = True) df.columns = ['value', 'names'] df ``` As you can see, each name (x1, x2, x3) has a different count, and also the "names" column is what I would like to use as the color. Does anyone know how I can plot this in plotly? FYI in R, it's very simple, I would simply call ggplot, and in `aes(fill = names)`. Any help would be appreciated, thank you!
2017/09/05
[ "https://Stackoverflow.com/questions/46062117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3604836/" ]
You could try slicing your dataframe and then putting it into Plotly. ``` fig = ff.create_distplot([df[df.names == a].value for a in df.names.unique()], df.names.unique(), bin_size=[.1, .25, .5, 1]) ``` --- [![enter image description here](https://i.stack.imgur.com/IsX83.png)](https://i.stack.imgur.com/IsX83.png) ``` import plotly import plotly.figure_factory as ff import numpy as np import pandas as pd plotly.offline.init_notebook_mode() x1 = pd.DataFrame(np.random.randn(100)) x1['name']='x1' x2 = pd.DataFrame(np.random.randn(200)+1) x2['name']='x2' x3 = pd.DataFrame(np.random.randn(300)-1) x3['name']='x3' df=pd.concat([x1,x2,x3]) df=df.reset_index(drop=True) df.columns = ['value','names'] fig = ff.create_distplot([df[df.names == a].value for a in df.names.unique()], df.names.unique(), bin_size=[.1, .25, .5, 1]) plotly.offline.iplot(fig, filename='Distplot with Multiple Bin Sizes') ```
The [example](https://plot.ly/python/distplot/) in [`plotly`](https://plot.ly/python/)'s documentation works out of the box for uneven sample sizes too: ``` #!/usr/bin/env python import plotly import plotly.figure_factory as ff plotly.offline.init_notebook_mode() import numpy as np # data with different sizes x1 = np.random.randn(300)-2 x2 = np.random.randn(200) x3 = np.random.randn(4000)+2 x4 = np.random.randn(50)+4 # Group data together hist_data = [x1, x2, x3, x4] # use custom names group_labels = ['x1', 'x2', 'x3', 'x4'] # Create distplot with custom bin_size fig = ff.create_distplot(hist_data, group_labels, bin_size=.2) # change that if you don't want to plot offline plotly.offline.plot(fig, filename='Distplot with Multiple Datasets') ``` The above script will produce the following result: --- [![enter image description here](https://i.stack.imgur.com/0b3Ar.png)](https://i.stack.imgur.com/0b3Ar.png)
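The dataframe slicing in the first answer just splits the flat name-value rows into per-group value lists. Stripped of pandas, the same reshaping looks like this sketch (the column names and sample values follow the question's dummy data):

```python
from collections import defaultdict

def split_by_name(rows):
    # rows: iterable of (name, value) pairs, like the question's melted dataframe
    groups = defaultdict(list)
    for name, value in rows:
        groups[name].append(value)
    return dict(groups)

rows = [("x1", 0.3), ("x2", 1.2), ("x1", -0.5), ("x3", -1.1), ("x2", 0.9)]
print(split_by_name(rows))
# {'x1': [0.3, -0.5], 'x2': [1.2, 0.9], 'x3': [-1.1]}
```

From there, `hist_data` for `create_distplot` would be `list(groups.values())` and `group_labels` would be `list(groups)`; the per-group lists may have different lengths, which is fine.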
74,513,701
I am an R User that is trying to learn more about Python. I found this Python library that I would like to use for address parsing: <https://github.com/zehengl/ez-address-parser> I was able to try an example over here: ``` from ez_address_parser import AddressParser ap = AddressParser() result = ap.parse("290 Bremner Blvd, Toronto, ON M5V 3L9") print(results) [('290', 'StreetNumber'), ('Bremner', 'StreetName'), ('Blvd', 'StreetType'), ('Toronto', 'Municipality'), ('ON', 'Province'), ('M5V', 'PostalCode'), ('3L9', 'PostalCode')] ``` I have the following file that I imported: ``` df = pd.read_csv(r'C:/Users/me/OneDrive/Documents/my_file.csv', encoding='latin-1') name address 1 name1 290 Bremner Blvd, Toronto, ON M5V 3L9 2 name2 291 Bremner Blvd, Toronto, ON M5V 3L9 3 name3 292 Bremner Blvd, Toronto, ON M5V 3L9 ``` I then applied the above function and export the file and everything works: ``` df['Address_Parse'] = df['ADDRESS'].apply(ap.parse) df = pd.DataFrame(df) df.to_csv(r'C:/Users/me/OneDrive/Documents/python_file.csv', index=False, header=True) ``` **Problem:** I now have another file (similar format) - but this time, I am getting an error: ``` df1 = pd.read_csv(r'C:/Users/me/OneDrive/Documents/my_file1.csv', encoding='latin-1') df1['Address_Parse'] = df1['ADDRESS'].apply(ap.parse) AttributeError: 'float' object has no attribute 'replace' ``` I am confused as to why the same code will not work for this file. As I am still learning Python, I am not sure where to begin to debug this problem. My guesses are that perhaps there are special characters in the second file, formatting issues or incorrect variable types that are preventing this `ap.parse` function from working, but I am still not sure. Can someone please show me what to do? Thank you!
2022/11/21
[ "https://Stackoverflow.com/questions/74513701", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13203841/" ]
It's not possible to completely remove the authorization prompt but you could make it appear only one time for each user by publishing your script as an Editor add-on. 1. Create a Google Cloud standard project (GCSP) and add the OAuth consent screen. 2. Link the GCSP to the Google Apps Script project. 3. Deploy the script as an Editor Add-on. 4. On the GCSP add the Google Workspace Marketplace SDK, configure it and publish to the Google Workspace Marketplace. Related * [Deploy and use Google Sheets add-on with Google Apps Script](https://stackoverflow.com/q/22664144/1595451) * [Is it possible to publish an Add-on for internal use without approval process?](https://stackoverflow.com/q/28990006/1595451) * [Publish an add-on privately](https://stackoverflow.com/q/45888142/1595451) Reference * [Authorization for Google Services](https://developers.google.com/apps-script/guides/services/authorization) * [OAuth Client Verification](https://developers.google.com/apps-script/guides/client-verification) * [Publish an add-on](https://developers.google.com/apps-script/add-ons/how-tos/publish-add-on-overview)
It's just a routine security procedure. If they trust you, there's no issue in them accepting it; it's just a warning in case you don't know the coder.
52,683,832
``` Revenue = [400000000,10000000,10000000000,10000000] s1 = [] for x in Revenue: message = (','.join(['{:,.0f}'.format(x)]).split()) s1.append(message) print(s1) The output I am getting is something like this [['400,000,000'], ['10,000,000'], ['10,000,000,000'], ['10,000,000']] and I want it should be like this -> [400,000,000, 10,000,000, 10,000,000,000, 10,000,000] ``` Can someone please help me on this, I am new to python
2018/10/06
[ "https://Stackoverflow.com/questions/52683832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10431728/" ]
If your goal is just to add in the commas, you will be stuck with the `' '` quotes, because each item is going to be a `str`, but you can eliminate the nesting by using a simpler *list comprehension*: ``` Revenue = [400000000,10000000,10000000000,10000000] l = ['{:,}'.format(i) for i in Revenue] # ['400,000,000', '10,000,000', '10,000,000,000', '10,000,000'] ``` You could also unpack the list into variables and then print each variable without quotes: ``` v, w, x, y = l print(v) # 400,000,000 ``` You can `print` the unpacked list, but that is display output only: ``` print(*l) # 400,000,000 10,000,000 10,000,000,000 10,000,000 ``` Expanded loop: ``` l = [] for i in Revenue: l.append('{:,}'.format(i)) ```
I'm not sure why you want the output you've shown, because it is hard to read, but here is how to make it: ``` >>> Revenue = [400000000,10000000,10000000000,10000000] >>> def revenue_formatted(rev): ... return "[" + ", ".join("{:,d}".format(n) for n in rev) + "]" ... >>> print(revenue_formatted(Revenue)) [400,000,000, 10,000,000, 10,000,000,000, 10,000,000] ```
56,840,250
I am making an adventure game in python 3.7.3, and I am using F strings for some of my print statements. When running it in the terminal and sublime text, F strings give me an error. ``` import time from time import sleep import sys def printfast(str): for letter in str: sys.stdout.write(letter) sys.stdout.flush() time.sleep(0.04) name = input("\nWhat is your name?\n\n") printfast(f("You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n") ```
2019/07/01
[ "https://Stackoverflow.com/questions/56840250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11723707/" ]
You're doing it wrong. `f` isn't a function, it's more of a syntactic identifier. Whereas regular quotes `"` indicate the beginning of a regular string (or the end of any type of string), the token `f"` indicates the beginning of a format string in particular. The same idea goes for raw strings, indicated by `r"`, or binary strings, indicated by `b"`. Instead of ``` f("You are...") ``` do ``` f"You are..." ```
In `f("You ...)`, you're calling a function named `f` with the string as an input parameter, and you don't have such a function, hence the error. You need to drop the enclosing parentheses `()` to make it an f-string: ``` f"You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n" ```
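Both answers above boil down to the same fix; a minimal runnable form (Python 3.6+, with a hypothetical hero name substituted for the `input()` call):

```python
name = "Ada"

# f is a string prefix, not a function: no parentheses after the f
greeting = f"You are the mighty hero {name}."
print(greeting)  # You are the mighty hero Ada.

# pre-3.6 equivalent using str.format
assert greeting == "You are the mighty hero {}.".format(name)
```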
56,840,250
I am making an adventure game in python 3.7.3, and I am using F strings for some of my print statements. When running it in the terminal and sublime text, F strings give me an error. ``` import time from time import sleep import sys def printfast(str): for letter in str: sys.stdout.write(letter) sys.stdout.flush() time.sleep(0.04) name = input("\nWhat is your name?\n\n") printfast(f("You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n") ```
2019/07/01
[ "https://Stackoverflow.com/questions/56840250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11723707/" ]
You're doing it wrong. `f` isn't a function, it's more of a syntactic identifier. Whereas regular quotes `"` indicate the beginning of a regular string (or the end of any type of string), the token `f"` indicates the beginning of a format string in particular. The same idea goes for raw strings, indicated by `r"`, or binary strings, indicated by `b"`. Instead of ``` f("You are...") ``` do ``` f"You are..." ```
f-strings are not created using a function, it's a type of string, which you denote using `f''` (similar to `b''` or `r''`) So try using: ```py printfast(f"You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n") ```
36,985,391
I am creating a sql query in python of the sort: ``` select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < '02-05-16 03:46:51:527000000 PM' ``` When it is executed, there are escape sequences that are being added which doesn't return me answers. Although when we print it in stdout, it looks perfect, but for python's understanding it has escape sequences which I don't want in the execution command ``` 'select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < \\'02-05-16 03:50:14:388000000 PM\\'' ```
2016/05/02
[ "https://Stackoverflow.com/questions/36985391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2576170/" ]
The escape sequences won't cause any problem in `cursor.execute(query)`. The real issue is that the date is being sent as a string but is compared against values in the db that are in date format, so something like this should work: ``` query = "SELECT LASTUPDATEDDATETIME FROM AUTH_PRINCIPAL_ENTITY WHERE LASTUPDATEDDATETIME < to_date('03-May-16', 'dd-mon-yy')" ``` Or ``` import datetime date_ = datetime.datetime.now().strftime('%d-%b-%y') query = "SELECT LASTUPDATEDDATETIME FROM AUTH_PRINCIPAL_ENTITY WHERE LASTUPDATEDDATETIME < to_date('{}', 'dd-mon-yy')".format(date_) ``` Try that. Should work for you :-)
For me, I wrap my SQL statements in triple quotes so it doesn't run into these issues when I execute: ``` query = """ select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < '02-05-16 03:46:51:527000000 PM' """ ```
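Both answers above still build the SQL by string interpolation, which is where the quoting trouble starts. Most DB-API drivers accept bind parameters instead, so the driver does the quoting for you; a sketch with the stdlib `sqlite3` driver (Oracle's cx_Oracle uses `:name` placeholders similarly; the table contents and dates here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_principal_entity (lastupdateddatetime TEXT)")
conn.executemany(
    "INSERT INTO auth_principal_entity VALUES (?)",
    [("2016-05-01",), ("2016-05-03",)],
)

# The driver quotes the bound value itself; no escaping in the query string.
rows = conn.execute(
    "SELECT lastupdateddatetime FROM auth_principal_entity"
    " WHERE lastupdateddatetime < ?",
    ("2016-05-02",),
).fetchall()
print(rows)  # [('2016-05-01',)]
```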
46,247,340
I am currently running tests between XGBoost/lightGBM for their ability to rank items. I am reproducing the benchmarks presented here: <https://github.com/guolinke/boosting_tree_benchmarks>. I have been able to successfully reproduce the benchmarks mentioned in their work. I want to make sure that I am correctly implementing my own version of the ndcg metric and also understanding the ranking problem correctly. My questions are: 1. When creating the validation for the test set using ndcg - there is a test.group file that says the first X rows are group 0, etc. To get the recommendations for the group, I get the predicted values and known relevance scores and sort that list by descending predicted values for each group? 2. In order to get the final ndcg scores from the lists created above - do I get the ndcg scores and take the mean over all the scores? Is this the same evaluation methodology that XGBoost/lightGBM use in the evaluation phase? Here is my methodology for evaluating the test set after the model has finished training. For the final tree when I run `lightGBM` I obtain these values on the validation set: ``` [500] valid_0's ndcg@1: 0.513221 valid_0's ndcg@3: 0.499337 valid_0's ndcg@5: 0.505188 valid_0's ndcg@10: 0.523407 ``` My final step is to take the predicted output for the test set and calculate the ndcg values for the predictions. Here is my python code for calculating ndcg: ``` import numpy as np def dcg_at_k(r, k): r = np.asfarray(r)[:k] if r.size: return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2))) return 0. def ndcg_at_k(r, k): idcg = dcg_at_k(sorted(r, reverse=True), k) if not idcg: return 0. return dcg_at_k(r, k) / idcg ``` After I get the predictions for the test set for a particular group (**GROUP-0**) I have these predictions: ``` query_id predict 0 0 (2.0, -0.221681199441) 1 0 (1.0, 0.109895548348) 2 0 (1.0, 0.0262799346312) 3 0 (0.0, -0.595343431322) 4 0 (0.0, -0.52689043426) 5 0 (0.0, -0.542221350664) 6 0 (1.0, -0.448015576024) 7 0 (1.0, -0.357090949646) 8 0 (0.0, -0.279677741045) 9 0 (0.0, 0.2182200869) ``` **NOTE** **Group-0** actually has about 112 rows. I then sort the list of tuples in descending order which provides a list of relevance scores: ``` def get_recommendations(x): sorted_list = sorted(list(x), key=lambda i: i[1], reverse=True) return [k for k, _ in sorted_list] relavance = evaluation.groupby('query_id').predict.apply(get_recommendations) query_id 0 [4.0, 2.0, 2.0, 3.0, 2.0, 2.0, 2.0, 2.0, 2.0, ... 1 [4.0, 2.0, 2.0, 2.0, 1.0, 1.0, 3.0, 2.0, 1.0, ... 2 [2.0, 3.0, 2.0, 2.0, 1.0, 0.0, 2.0, 2.0, 1.0, ... 3 [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, ... 4 [1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, ... ``` Finally, for each query id I calculated the ndcg scores on the relevance list and then take the mean of all the ndcg scores calculated for each query id: ``` relavance.apply(lambda x: ndcg_at_k(x, 10)).mean() ``` The value I obtain is `~0.497193`.
2017/09/15
[ "https://Stackoverflow.com/questions/46247340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2800840/" ]
Cross-posting my Cross Validated answer to this cross-posted question: <https://stats.stackexchange.com/questions/303385/how-does-xgboost-lightgbm-evaluate-ndcg-metric-for-ranking/487487#487487> --- I happened across this myself, and finally dug into the code to figure it out. The difference is the handling of a missing IDCG. Your code returns 0, while [LightGBM is treating that case as a 1](https://github.com/microsoft/LightGBM/blob/ac5f5e56d012b1f435f1a52cbd6600f100ffa187/src/metric/rank_metric.hpp#L97). The following code produced matching results for me: ```py import numpy as np def dcg_at_k(r, k): r = np.asfarray(r)[:k] if r.size: return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2))) return 0. def ndcg_at_k(r, k): idcg = dcg_at_k(sorted(r, reverse=True), k) if not idcg: return 1. # CHANGE THIS return dcg_at_k(r, k) / idcg ```
I think the problem is caused by data in the same query that share the same labels. In that case, both XGBoost and LightGBM will produce an NDCG of 1 for that query.
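To see that the two conventions differ only on the degenerate query, here is a pure-Python restatement (no NumPy) of the patched metric from the first answer; a query whose labels are all zero has no ideal DCG, and the LightGBM-style convention scores it 1:

```python
import math

def dcg_at_k(rels, k):
    # DCG with the 2^rel - 1 gain used in the question's code
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    idcg = dcg_at_k(sorted(rels, reverse=True), k)
    if idcg == 0:
        return 1.0  # LightGBM/XGBoost convention; the question's code returned 0 here
    return dcg_at_k(rels, k) / idcg

print(ndcg_at_k([0, 0, 0], 10))            # 1.0 (the original code gave 0.0)
print(round(ndcg_at_k([1, 2, 0], 10), 4))  # 0.7967
```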
15,854,257
I am new to Python. As part of writing a module to scrape URLs I noticed that what I get using the Python requests module can be different from what I get if I load the URL in a browser. This is because the page can contain JS code which is executed, and the result is what I see in the browser. My questions: 1. How do I deal with such sites? Is Python or any other module limited to just getting static pages or pages completely rendered on the server side? 2. How do I deal with pages that do Ajax-style queries to load content? I am assuming that there probably isn't a library for this and I have to do something on my own. I hope I don't have to build something like WebKit into my code :) Thanks for any help.
2013/04/06
[ "https://Stackoverflow.com/questions/15854257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1645536/" ]
The best way to go about this might be to use [css-gradients](https://developer.mozilla.org/en-US/docs/CSS/gradient) instead of shadows. I have done a little demo on [jsfiddle](http://jsfiddle.net/Qjgps/1/). I am not sure this is what you are looking for though. Here is the css I used: ``` background: rgb(254,255,255); /* Old browsers */ background: -moz-linear-gradient(top, rgba(254,255,255,1) 69%, rgba(226,226,226,1) 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left top, left bottom, color-stop(69%,rgba(254,255,255,1)), color-stop(100%,rgba(226,226,226,1))); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* IE10+ */ background: linear-gradient(to bottom, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#feffff', endColorstr='#e2e2e2',GradientType=0 ); /* IE6-9 */ ``` As generated by [this](http://www.colorzilla.com/gradient-editor/) tool
I came up with this using [this CSS3 Generator](http://css3generator.com/) ``` -webkit-box-shadow: inset 0px -350px 200px -250px rgba(5, 5, 5, 1); box-shadow: inset 0px -350px 200px -250px rgba(5, 5, 5, 1); ``` This is a very cross-browser friendly method and if you apply a background color it will achieve what I believe is your desired result. Check out this [jsFiddle](http://jsfiddle.net/Qjgps/4/) **Source(s)** [CSS3 Generator](http://css3generator.com/)
15,671,875
I seem to have some difficulty getting what I want to work. Basically, I have a series of variables that are assigned strings with some quotes and \ characters. I want to remove the quotes to embed them inside a json doc, since json hates quotes using python dump methods. I figured it would be easy. Just determine how to remove the characters easy and then write a simple for loop for the variable substitution, well it didn't work that way. Here is what I want to do. There is a variable called "MESSAGE23", it contains the following "com.centrify.tokend.cac", I want to strip out the quotes, which to me is easy, a simple `echo $opt | sed "s/\"//g"`. When I do this from the command line: ``` $> MESSAGE23="com."apple".cacng.tokend is present" $> MESSAGE23=`echo $MESSAGE23 | sed "s/\"//g"` $> com.apple.cacng.tokend is present ``` This works. I get the properly formatted string. When I then try to throw this into a loop, all hell breaks loose. ``` for i to {1..25}; do MESSAGE$i=`echo $MESSAGE$i | sed "s/\"//g"` done ``` This doesn't work (either it throws a bunch of indexes out or nothing), and I'm pretty sure I just don't know enough about `arg` or `eval` or other bash substitution variables. But basically I want to do this for another set of variables with the same problems, where I strip out the quotes and incidentally the "\" too. Any help would be greatly appreciated.
2013/03/28
[ "https://Stackoverflow.com/questions/15671875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/933693/" ]
You can't do that. You could make it work using `eval`, but that introduces another level of quoting you have to worry about. Is there some reason you can't use an array? ``` MESSAGE=("this is MESSAGE[0]" "this is MESSAGE[1]") MESSAGE[2]="I can add more, too!" for (( i=0; i<${#MESSAGE[@]}; ++i )); do echo "${MESSAGE[i]}" done ``` Otherwise you need something like this: ``` eval 'echo "$MESSAGE'"$i"'"' ``` and it just gets worse from there.
First, a couple of preliminary problems: `MESSAGE23="com."apple".cacng.tokend is present"` will not embed double-quotes in the variable value, use `MESSAGE23="com.\"apple\".cacng.tokend is present"` or `MESSAGE23='com."apple".cacng.tokend is present'` instead. Second, you should almost always put double-quotes around variable expansions (e.g. `echo "$MESSAGE23"`) to prevent parsing oddities. Now, the real problems: the shell doesn't allow variable substitution on the left side of an assignment (i.e. `MESSAGE$i=something` won't work). Fortunately, it does allow this in a `declare` statement, so you can use that instead. Also, when the shell sees `$MESSAGE$i` it replaces it with the value of `$MESSAGE` followed by the value of `$i`; to get what you want, you need indirect expansion (`${!metavariable}`). ``` for i in {1..25}; do varname="MESSAGE$i" declare $varname="$(echo "${!varname}" | tr -d '"')" done ``` (Note that I also used `tr` instead of `sed`, but that's just my personal preference.) (Also, note that @Mark Reed's suggestion of an array is really the better way to do this sort of thing.)
46,721,993
I have a shell script that I use to export Env variables. This script calls a python script to get certain values from a web service that I need to store before running my primary python script. I've tried using a `RUN . /bot/env/setenv.sh`, but this doesn't seem to make the env variables available in the final container. I've tried putting the contents in an `entrypoint.sh` file that ends in calling `python jbot.py`, but the container never completes its setup (I assume because the script inside the entrypoint is a continuous loop?) My `entrypoint.sh` looks like this: ``` #!/bin/bash . /jirabot/env/setenv.sh python jbot.py ``` And the `setenv.sh` is just: ``` #!/bin/bash export SLACK_BOT_TOKEN="xoxb-token" export BOT_ID=`python env/print_bot_id.py ${SLACK_BOT_TOKEN}` ``` My full Dockerfile is: ``` FROM python:2 COPY jirabot/ /jirabot/ RUN pip install slackclient schedule jira WORKDIR /jirabot #CMD [ "python", "jbot.py" ] ENTRYPOINT [ "/jirabot/entrypoint.sh" ] ``` When I do `docker run bot`, I can verify that the application is running (the bot responds to my requests appropriately). However, all of the `print()` statements within `jbot.py` are absent from the output -- so I have two primary questions: 1. Why does my `entrypoint.sh` just hang the container from returning? I do `docker run bot`, and I'm never returned control of the terminal. However, the bot seems to startup fine. 2. Why do I not get any of my print statements from `jbot.py` when I open a second terminal and do `docker logs <container>`? fwiw, my `jbot.py` is a `while True:` loop, monitoring for input.
2017/10/13
[ "https://Stackoverflow.com/questions/46721993", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1143724/" ]
1. You are not running the docker container [as a daemon](https://docs.docker.com/engine/reference/run/#detached--d). > > docker run -d bot > > > 2. In my experience, print messages don't make it to the logs without output buffering disabled in Python. > > python -u jbot.py > > >
For your first question, you should check the documentation of `docker run`. In short, you are attached to the container, so you will never return to your terminal. To detach, you need to add the `-d` option. A commonly used command to launch a container is `docker run -idt <container>`. For your second question, there is not enough information to identify the problem, sorry. Maybe you can try again after launching the container properly.
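Besides `docker run -d` and `python -u`, another way to get the `while True:` loop's messages into `docker logs` is to flush stdout explicitly in the script; a minimal sketch (the message text is made up):

```python
def log(message):
    # flush=True defeats stdout block buffering, so `docker logs`
    # shows the line immediately instead of when the buffer fills
    print(message, flush=True)

log("bot started, monitoring for input")
```

Setting `ENV PYTHONUNBUFFERED=1` in the Dockerfile has the same effect as running the interpreter with `-u`.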
52,911,986
I am working with ros on ubuntu 16.04. Because of this I am working with a virtual environment for python 2.7 and the ros python modules (rospy for example). The "python.pythonPath" is set to the virtual environment and the ros modules are linked through "python.autoComplete.extraPaths". This leads to the issue where the python linter raises an error for import rospy claiming that it can not import it. However, the python intellisense is still able detect and help with the rospy module (which makes sense due to the python.autoComplete.extraPaths setting). Is there a way to include the extra paths for autoComplete for the linter as well? At this point, no longer including the virtual environment for the python path is not a desirable option so I am looking for a way to have the linter include the extra paths for ros python modules and the modules in the virtual environment.
2018/10/21
[ "https://Stackoverflow.com/questions/52911986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10534965/" ]
I've never used ProBuilder, but in other 3D modeling apps such as 3ds Max you will need to delete the inner polygons to be able to cap that face. Select all the polygons in the "well" and delete them; you will now have a hole to fill. [GIF] [![enter image description here](https://i.stack.imgur.com/W3My9.gif)][1](https://i.stack.imgur.com/W3My9.gif)
**Update** I found a complex but accurate way to do this: Unity editor > Tools > ProBuilder > Editor > Open Vertex Position Editor. When I select vertices, I record their positions from the position editor into a txt file. Then: Unity editor > Tools > ProBuilder > Editor > Open New Shape Editor > Shape Selector > Custom, and build the face from the positions in the txt file. **Attention**: the "Custom" point order is very strange. For example, to build a cube with "Custom" the point order is top-left, top-right, bottom-left, bottom-right. In my understanding, the correct order for a face in the x-z plane is: for the edge that runs roughly parallel to the x axis, place its points from the x < 0 (left) side to the x > 0 (right) side, then build the other edge parallel to the x axis the same way. Finally, select the newly created face and the old object, and use ProBuilder toolbar > Merge Objects to merge them into one. All of the above could be done with the ProBuilder API, since it is open source (though undocumented), but filling a face is a rare case, so I think doing it in the ProBuilder GUI is enough. **Old answer** My temporary solution was to use ProBuilder > New Poly Shape, mouse-click the 4 vertices, and make the face manually, but it's not perfect; it leaves a seam along the edge.
52,911,986
I am working with ros on ubuntu 16.04. Because of this I am working with a virtual environment for python 2.7 and the ros python modules (rospy, for example). The "python.pythonPath" is set to the virtual environment and the ros modules are linked through "python.autoComplete.extraPaths". This leads to the issue where the python linter raises an error for `import rospy`, claiming that it cannot import it. However, the python intellisense is still able to detect and help with the rospy module (which makes sense due to the python.autoComplete.extraPaths setting). Is there a way to include the extra paths for autoComplete for the linter as well? At this point, no longer including the virtual environment in the python path is not a desirable option, so I am looking for a way to have the linter include the extra paths for the ros python modules and the modules in the virtual environment.
2018/10/21
[ "https://Stackoverflow.com/questions/52911986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10534965/" ]
I've never used ProBuilder, but in other 3D apps (3ds Max, for example) you need to delete the inner polygons to be able to cap that face. Select all the polygons in the "well" and delete them; you will then have a hole to fill. [GIF] [![enter image description here](https://i.stack.imgur.com/W3My9.gif)](https://i.stack.imgur.com/W3My9.gif)
As of probuilder 4, you can achieve this very quickly without using 'fill hole'. Fill Hole is useful, but not for this. 1. Enter edge select mode and select an edge containing two of the vertices you wish to make a face out of. 2. Extrude the edge (ctrl + e or 'extrude edges') to the general area of the third vertex. 3. Enter vertex select mode and select the third vertex, then the two vertices from step two and any others you want to add. 4. Click the collapse vertices button and ensure "Collapse at first" is enabled (it will stay enabled until you disable it). The shortcut is alt + C. You can also use this technique & the alt+c ctrl+e keys to quickly weld coincident vertices and quickly draft complex level geometry. You can also use this to quickly bridge edges even if they have a different vertex count. With the Probuilder API, you can easily make an editor script to automate this process & assign a special shortcut to it. If you can operate the Unity API, you can operate the probuilder API.
56,443,552
I want to serve my django project with uwsgi on Ubuntu Server, but it doesn't run. I am using python 3.6 but uwsgi shows me it's 2.7. I changed the default python to python 3.6 but uwsgi still doesn't work. This is my command:

```
uwsgi --http :8001 --home /home/ubuntu/repository/env --chdir /home/ubuntu/repository/project -w project.wsgi
```

This is the error message:

```
*** Starting uWSGI 2.0.18 (64bit) on [Tue Jun 4 21:03:58 2019] ***
compiled with version: 5.4.0 20160609 on 04 June 2019 11:39:14
os: Linux-4.4.0-1079-aws #89-Ubuntu SMP Tue Mar 26 15:25:52 UTC 2019
nodename: ip-172-31-18-239
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /home/ubuntu/repository/charteredbus
*** running under screen session 1636.sbus ***
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /home/ubuntu/repository/charteredbus
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 15738
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :8001 fd 4
spawned uWSGI http 1 (pid: 8402)
uwsgi socket 0 bound to TCP address 127.0.0.1:39614 (port auto-assigned) fd 3
Python version: 2.7.12 (default, Nov 12 2018, 14:36:49)  [GCC 5.4.0 20160609]
Set PythonHome to /home/ubuntu/repository/env
ImportError: No module named site
```
2019/06/04
[ "https://Stackoverflow.com/questions/56443552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11598754/" ]
Unfortunately, uWSGI has to be compiled with a python version matching your virtualenv. That means: if uWSGI was compiled with python 2.7, you cannot use python 3.6 in your virtualenv (and in your Django app). Fortunately, there are some methods to fix that:

* Installing uWSGI inside your virtualenv and using that uWSGI binary to run Django.
* Using Python as a plugin to uWSGI.

The first one is pretty straightforward. All you need to do is change the path to the uWSGI binary in your startup script to point to the uWSGI installed in your virtualenv. (If you're starting uWSGI using systemd, I recommend systemd user units. Just don't forget to run `loginctl enable-linger`.)

The second one is not that complicated. First you have to install uWSGI without the python plugin, then install separate plugins for all the python versions you will need. More on that can be found [here](https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#bonus-multiple-python-versions-for-the-same-uwsgi-binary). There are probably ready-made plugins in your system package repository if you're using uWSGI from it.
The log says there is no module named site:

> ImportError: No module named site

I assume site is a django app. Did you register this in your INSTALLED\_APPS (settings.py)? Otherwise you may need to register your app (apps.py in the site app). Please let me know if I helped you. Jasper
56,443,552
I want to serve my django project with uwsgi on Ubuntu Server, but it doesn't run. I am using python 3.6 but uwsgi shows me it's 2.7. I changed the default python to python 3.6 but uwsgi still doesn't work. This is my command:

```
uwsgi --http :8001 --home /home/ubuntu/repository/env --chdir /home/ubuntu/repository/project -w project.wsgi
```

This is the error message:

```
*** Starting uWSGI 2.0.18 (64bit) on [Tue Jun 4 21:03:58 2019] ***
compiled with version: 5.4.0 20160609 on 04 June 2019 11:39:14
os: Linux-4.4.0-1079-aws #89-Ubuntu SMP Tue Mar 26 15:25:52 UTC 2019
nodename: ip-172-31-18-239
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /home/ubuntu/repository/charteredbus
*** running under screen session 1636.sbus ***
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /home/ubuntu/repository/charteredbus
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 15738
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :8001 fd 4
spawned uWSGI http 1 (pid: 8402)
uwsgi socket 0 bound to TCP address 127.0.0.1:39614 (port auto-assigned) fd 3
Python version: 2.7.12 (default, Nov 12 2018, 14:36:49)  [GCC 5.4.0 20160609]
Set PythonHome to /home/ubuntu/repository/env
ImportError: No module named site
```
2019/06/04
[ "https://Stackoverflow.com/questions/56443552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11598754/" ]
Unfortunately, uWSGI has to be compiled with a python version matching your virtualenv. That means: if uWSGI was compiled with python 2.7, you cannot use python 3.6 in your virtualenv (and in your Django app). Fortunately, there are some methods to fix that:

* Installing uWSGI inside your virtualenv and using that uWSGI binary to run Django.
* Using Python as a plugin to uWSGI.

The first one is pretty straightforward. All you need to do is change the path to the uWSGI binary in your startup script to point to the uWSGI installed in your virtualenv. (If you're starting uWSGI using systemd, I recommend systemd user units. Just don't forget to run `loginctl enable-linger`.)

The second one is not that complicated. First you have to install uWSGI without the python plugin, then install separate plugins for all the python versions you will need. More on that can be found [here](https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#bonus-multiple-python-versions-for-the-same-uwsgi-binary). There are probably ready-made plugins in your system package repository if you're using uWSGI from it.
For those who cannot (or failed to) build a language-independent uwsgi binary by following the second option mentioned by @GwynBleidD, you can also build a separate standalone uwsgi binary tied to a different python plugin, by:

* preserving the previously-built uwsgi binary
* cleaning up the previous build by running `make clean` in `/PATH/TO/UWSGI_SOURCE_FOLDER`
* running the command `YOUR_PYTHON_VERSION uwsgiconfig.py --build` in `/PATH/TO/UWSGI_SOURCE_FOLDER`, for example

```
python3.9 uwsgiconfig.py --build
python3.6 uwsgiconfig.py --build
python3.4 uwsgiconfig.py --build
```
20,200,307
I have been using GAE for a long time but cannot find the maximum length of ListProperty. I have read the [documentation](https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty) but found no solution. I want to create a ListProperty(long) to keep about 30 long values or more. I want to use this field as a filter; can I use it similarly to StringListProperty? What are the size limits of ListProperty(long)?
2013/11/25
[ "https://Stackoverflow.com/questions/20200307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665926/" ]
I have a list of 20K strings (not indexed, though). I don't think there is a limitation on the length, but there is a limit on each entity's size. Be careful about indexing multi-value properties; it could be expensive.
30 will be fine. Guido's answer to a related question: <https://stackoverflow.com/a/15418435/1279005> So up to 100 repeated values will be fine. Repeated properties are much easier to understand using NDB, I think. You should try it. It does not matter whether you use it with Long or String properties; if the property is indexed you'll be able to filter by it.
20,200,307
I have been using GAE for a long time but cannot find the maximum length of ListProperty. I have read the [documentation](https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty) but found no solution. I want to create a ListProperty(long) to keep about 30 long values or more. I want to use this field as a filter; can I use it similarly to StringListProperty? What are the size limits of ListProperty(long)?
2013/11/25
[ "https://Stackoverflow.com/questions/20200307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665926/" ]
@marcadian has a pretty good answer. There's no limit specifically on a ListProperty. You do need to look at datastore limits on entities though: <https://developers.google.com/appengine/docs/python/datastore/#Python_Quotas_and_limits> The two most obvious limits are the 1MB maximum entity size and 20000 index entries. Depending on what's inside your list, it may vary. You can fit 130k 8-byte long's within that 1MB limit, but if they're indexed, you'll hit a barrier at 20k entries because of the index limit. The worst bit is that these limits are on the total entity size, so if you have two lists in an entity, the size of one list could be limited by the what's in the other list.
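The "130k 8-byte longs vs. the 20000-index-entry cap" claim above can be checked with a quick back-of-envelope sketch. This ignores property-name and datastore metadata overhead, which lowers the real count in practice:

```python
ENTITY_LIMIT_BYTES = 1_048_576   # 1 MiB maximum entity size
INDEX_ENTRY_LIMIT = 20_000       # maximum index entries per entity

# Unindexed: only the entity-size limit applies.
max_unindexed_longs = ENTITY_LIMIT_BYTES // 8   # 8-byte long values

# Indexed: the 20k index-entry cap kicks in long before the size limit.
max_indexed_longs = min(max_unindexed_longs, INDEX_ENTRY_LIMIT)

print(max_unindexed_longs)  # 131072, i.e. ~130k
print(max_indexed_longs)    # 20000
```

So for the 30-element list in the question, both limits are comfortably far away.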
30 will be fine. Guido's answer to a related question: <https://stackoverflow.com/a/15418435/1279005> So up to 100 repeated values will be fine. Repeated properties are much easier to understand using NDB, I think. You should try it. It does not matter whether you use it with Long or String properties; if the property is indexed you'll be able to filter by it.
20,200,307
I have been using GAE for a long time but cannot find the maximum length of ListProperty. I have read the [documentation](https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty) but found no solution. I want to create a ListProperty(long) to keep about 30 long values or more. I want to use this field as a filter; can I use it similarly to StringListProperty? What are the size limits of ListProperty(long)?
2013/11/25
[ "https://Stackoverflow.com/questions/20200307", "https://Stackoverflow.com", "https://Stackoverflow.com/users/665926/" ]
@marcadian has a pretty good answer. There's no limit specifically on a ListProperty. You do need to look at datastore limits on entities though: <https://developers.google.com/appengine/docs/python/datastore/#Python_Quotas_and_limits> The two most obvious limits are the 1MB maximum entity size and 20000 index entries. Depending on what's inside your list, it may vary. You can fit 130k 8-byte long's within that 1MB limit, but if they're indexed, you'll hit a barrier at 20k entries because of the index limit. The worst bit is that these limits are on the total entity size, so if you have two lists in an entity, the size of one list could be limited by the what's in the other list.
I have a list of 20K strings (not indexed, though). I don't think there is a limitation on the length, but there is a limit on each entity's size. Be careful about indexing multi-value properties; it could be expensive.
58,260,903
I tried various programs to get the required pattern (Given below). The program which got closest to the required result is given below: **Input:** ``` for i in range(1,6): for j in range(i,i*2): print(j, end=' ') print( ) ``` **Output:** ``` 1 2 3 3 4 5 4 5 6 7 5 6 7 8 9 ``` **Required Output:** ``` 1 2 3 4 5 6 7 8 9 10 ``` Can I get some hint to get the required output? Note- A newbie to python.
2019/10/06
[ "https://Stackoverflow.com/questions/58260903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9957954/" ]
Store the printed value outside of the loop, then increment it after it's printed. Note that the outer loop has to run from 1 up to and including the number of rows, since `range(lines)` would start at 0 and print an empty first row:

```
v = 1
lines = 4
for i in range(1, lines + 1):
    for j in range(i):
        print(v, end=' ')
        v += 1
    print()
```
If you don't want to keep track of the count and solve this mathematically and be able to directly calculate any n-th line, the formula you are looking for is the one for, well, [triangle numbers](https://en.wikipedia.org/wiki/Triangular_number): ``` triangle = lambda n: n * (n + 1) // 2 for line in range(1, 5): t = triangle(line) print(' '.join(str(x+1) for x in range(t-line, t))) # 1 # 2 3 # 4 5 6 # 7 8 9 10 ```
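Besides the triangle-number formula, the required pattern can also be produced from a single running counter; a minimal sketch using `itertools.count`:

```python
from itertools import count

counter = count(1)              # yields 1, 2, 3, ...
rows = []
for i in range(1, 5):           # row i carries i consecutive numbers
    rows.append(' '.join(str(next(counter)) for _ in range(i)))

print('\n'.join(rows))
# 1
# 2 3
# 4 5 6
# 7 8 9 10
```

This avoids manual bookkeeping of the current value entirely, since the iterator remembers where it left off between rows.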
43,043,437
I'm trying to create a wordcloud from a csv file. The csv file, as an example, has the following structure:

```
a,1
b,2
c,4
j,20
```

It has more rows, more or less 1800. The first column has string values (names) and the second column has their respective frequency (int). Then, the file is read and each key,value row is stored in a dictionary (d) because later on we will use this to plot the wordcloud:

```py
reader = csv.reader(open('namesDFtoCSV', 'r',newline='\n'))
d = {}
for k,v in reader:
    d[k] = v
```

Once we have the dictionary full of values, I try to plot the wordcloud:

```py
#Generating wordcloud. Relative scaling value is to adjust the importance of a frequency word.
#See documentation: https://github.com/amueller/word_cloud/blob/master/wordcloud/wordcloud.py
wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
```

But an error is thrown:

```
Traceback (most recent call last):
  File ".........../script.py", line 19, in <module>
    wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d)
  File "/usr/local/lib/python3.5/dist-packages/wordcloud/wordcloud.py", line 360, in generate_from_frequencies
    for word, freq in frequencies]
  File "/usr/local/lib/python3.5/dist-packages/wordcloud/wordcloud.py", line 360, in <listcomp>
    for word, freq in frequencies]
TypeError: unsupported operand type(s) for /: 'str' and 'float'
```

Finally, the documentation says:

```py
def generate_from_frequencies(self, frequencies, max_font_size=None):
    """Create a word_cloud from words and frequencies.

    Parameters
    ----------
    frequencies : dict from string to float
        A contains words and associated frequency.

    max_font_size : int
        Use this font-size instead of self.max_font_size

    Returns
    -------
    self
    """
```

So, I don't understand why it is throwing this error if I met the requirements of the function. I hope someone can help me, thanks.

**Note** I work with wordcloud 1.3.1
2017/03/27
[ "https://Stackoverflow.com/questions/43043437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7658581/" ]
This is because the values in your dictionary are strings but wordcloud expects integer or floats. After I run your code then inspect your dictionary `d` I get the following. ``` In [12]: d Out[12]: {'a': '1', 'b': '2', 'c': '4', 'j': '20'} ``` Note the `' '` around the numbers means these are really strings. A hacky way to resolve this is to cast `v` to an `int` in your `FOR` loop like: ``` d[k] = int(v) ``` I say this is hacky since it'll work on integers but if you have floats in your input then it may cause problems. Also, Python errors can be difficult to read. Your error above can be interpreted as ``` script.py", line 19 TypeError: unsupported operand type(s) for /: 'str' and 'float ``` > > "There's a type error on or before line 19 of my file. Let me look at > my data types to see if there is any mismatch between string and > float..." > > > The code below works for me: ``` import csv from wordcloud import WordCloud import matplotlib.pyplot as plt reader = csv.reader(open('namesDFtoCSV', 'r',newline='\n')) d = {} for k,v in reader: d[k] = int(v) #Generating wordcloud. Relative scaling value is to adjust the importance of a frequency word. #See documentation: https://github.com/amueller/word_cloud/blob/master/wordcloud/wordcloud.py wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show() ```
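Since the `int(v)` cast mentioned above breaks on non-integer frequencies, a less brittle variant is `float(v)`. Below is a self-contained sketch of just the parsing step, using an in-memory stand-in for the `namesDFtoCSV` file so it runs without the wordcloud package:

```python
import csv
import io

# Inline stand-in for the namesDFtoCSV file from the question
raw = "a,1\nb,2\nc,4\nj,20\n"

d = {}
for k, v in csv.reader(io.StringIO(raw)):
    d[k] = float(v)   # float() accepts both "20" and "20.5"

print(d)  # {'a': 1.0, 'b': 2.0, 'c': 4.0, 'j': 20.0}
```

The resulting dict of string-to-float is exactly the shape `generate_from_frequencies` documents, so the TypeError no longer applies.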
``` # LEARNER CODE START HERE file_c="" for index, char in enumerate(file_contents): if(char.isalpha()==True or char.isspace()): file_c+=char file_c=file_c.split() file_w=[] for word in file_c: if word.lower() not in uninteresting_words and word.isalpha()==True: file_w.append(word) frequency={} for word in file_w: if word.lower() not in frequency: frequency[word.lower()]=1 else: frequency[word.lower()]+=1 #wordcloud cloud = wordcloud.WordCloud() cloud.generate_from_frequencies(frequency) return cloud.to_array() ```
5,719,545
I am working on a web-crawler [using python]. The situation is, for example, that I am behind server-1 and I use a proxy setting to connect to the outside world. So in Python, using a proxy-handler, I can fetch the urls. Now the thing is, I am building a crawler, so I cannot use only one IP [otherwise I will be blocked]. To solve this, I have a bunch of proxies I want to shuffle through. My question is: this is a two-level proxy setup. To connect to the main server-1 I use one proxy, and after that, to shuffle through the outside proxies, I want to use another. How can I achieve this?
2011/04/19
[ "https://Stackoverflow.com/questions/5719545", "https://Stackoverflow.com", "https://Stackoverflow.com/users/715600/" ]
**Update** Sounds like you're looking to connect to proxy A and from there initiate HTTP connections via proxies B, C, D which are outside of A. You might look into the [proxychains project](http://proxychains.sourceforge.net/) which says it can "tunnel any protocol via a user-defined chain of TOR, SOCKS 4/5, and HTTP proxies". Version 3.1 is available as a package in Ubuntu Lucid. If it doesn't work directly for you, the [proxychains source code](http://prdownloads.sourceforge.net/proxychains/proxychains-3.1.tar.gz?download) may provide some insight into how this capability could be implemented for your app. **Orig answer**: Check out the [urllib2.ProxyHandler](http://docs.python.org/library/urllib2.html#urllib2.ProxyHandler). Here is an example of how you can use several different proxies to open urls: ``` import random import urllib2 # put the urls for all of your proxies in a list proxies = ['http://localhost:8080/'] # construct your list of url openers which each use a different proxy openers = [] for proxy in proxies: opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy})) openers.append(opener) # select a url opener randomly, round-robin, or with some other scheme opener = random.choice(openers) req = urllib2.Request(url) res = opener.open(req) ```
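The snippet above targets Python 2's `urllib2`; on Python 3 the same round-robin idea carries over to `urllib.request` (into which urllib2 was merged). A sketch with placeholder proxy URLs and the actual network call left commented out:

```python
import random
import urllib.request

# Placeholder proxy URLs -- substitute your real proxy pool here.
proxies = ['http://localhost:8080/', 'http://localhost:8081/']

# Build one opener per proxy.
openers = []
for proxy in proxies:
    handler = urllib.request.ProxyHandler({'http': proxy})
    openers.append(urllib.request.build_opener(handler))

# Select an opener randomly (or round-robin) for each request.
opener = random.choice(openers)
# res = opener.open('http://example.com/')  # real request, needs a live proxy
```

Each opener routes its requests through its own proxy, so rotating IPs is just a matter of which opener you pick per request.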
I recommend you take a look at CherryProxy. It lets you send a proxy request to an intermediate server (where CherryProxy is running), which then forwards your HTTP request to a proxy on a second-level machine (e.g. a squid proxy on another server) for processing. Voila! A two-level proxy chain. <http://www.decalage.info/python/cherryproxy>
66,775,948
Some python packages won't work in python 3.7, so I wanted to downgrade the default python version in google colab. Is it possible to do? If so, how do I proceed? Please guide me.
2021/03/24
[ "https://Stackoverflow.com/questions/66775948", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15069182/" ]
You could install python 3.6 with `miniconda`: ``` %%bash MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh MINICONDA_PREFIX=/usr/local wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT chmod +x $MINICONDA_INSTALLER_SCRIPT ./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX ``` And add to path: ``` import sys _ = (sys.path.append("/usr/local/lib/python3.6/site-packages")) ```
The following code snippet below will download Python 3.6 without any Colab pre-installed libraries (such as Tensorflow). You can install them later with pip, like `!pip install tensorflow`. Please note that this won't downgrade your default python in colab, rather it would provide a workaround to work with other python versions in colab. To run any python scripts with 3.6 version, use `!python3.6` instead of `!python` ``` !add-apt-repository ppa:deadsnakes/ppa !apt-get update !apt-get install python3.6 !apt-get install python3.6-dev !wget https://bootstrap.pypa.io/get-pip.py && python3.6 get-pip.py import sys sys.path[2] = '/usr/lib/python36.zip' sys.path[3] = '/usr/lib/python3.6' sys.path[4] = '/usr/lib/python3.6/lib-dynload' sys.path[5] = '/usr/local/lib/python3.6/dist-packages' sys.path[7] ='/usr/local/lib/python3.6/dist-packages/IPython/extensions' ```
53,972,642
I'm a lawyer and python beginner, so I'm both (a) dumb and (b) completely out of my lane. I'm trying to apply a regex pattern to a text file. The pattern can sometimes stretch across multiple lines. I'm specifically interested in these lines from the text file: ``` Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n \n Dickinson, Emily, Judge. ``` I'd like to individually hunt for, extract, and then print the judges' names. My code so far looks like this: ``` import re def judges(): presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL) judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL) judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL) with open("text.txt", "r") as case: for lines in case: presiding_match = re.search(presiding, lines) judge2_match = re.search(judge2, lines) judge3_match = re.search(judge3, lines) if presiding_match or judge2_match or judge3_match: print(presiding_match.group(1)) print(judge2_match.group(1)) print(judge3_match.group(1)) break ``` When I run it, I can get Hemingway and Bell, but then I get an "AttributeError: 'NoneType' object has no attribute 'group'" for the third judge after the two line breaks. After trial-and-error, I've found that my code is only reading the first line (until the "Bell, Judge; and") then quits. I thought the re.DOTALL would solve it, but I can't seem to make it work. I've tried a million ways to capture the line breaks and get the whole thing, including re.match, re.DOTALL, re.MULTILINE, "".join, "".join(lines.strip()), and anything else I can throw against the wall to make stick. After a couple days, I've bowed to asking for help. Thanks for anything you can do. (As an aside, I've had no luck getting the regex to work with the ^ and $ characters. It also seems to hate the . escape in the judge3 regex.)
2018/12/29
[ "https://Stackoverflow.com/questions/53972642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10823652/" ]
You are passing in **single lines**, because you are iterating over the open file referenced by `case`. The regex is never passed anything other than a single line of text. Your regexes can each match *some* of the lines, but they don't all together match the same single line. You'd have to read in more than one line. If the file is small enough, just read it as one string:

```
with open("text.txt", "r") as case:
    case_text = case.read()
```

then apply your regular expressions to that one string. Or, you could test each of the match objects individually, not as a group, and only print those that matched:

```
if presiding_match:
    print(presiding_match.group(1))
elif judge2_match:
    print(judge2_match.group(1))
elif judge3_match:
    print(judge3_match.group(1))
```

but then you'll have to create additional logic to determine when you are done reading from the file and break out of the loop. Note that the patterns you are matching are not broken across lines, so the `DOTALL` flag is not actually needed here. You do match `.*` text, so you are running the risk of matching *too much* if you use `DOTALL`:

```
>>> import re
>>> case_text = """Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and
...
... Dickinson, Emily, Judge.
... """
>>> presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL)
>>> judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL)
>>> judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL)
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n\nDickinson, Emily',)
```

I'd at least replace `[A-Z].*` with `[A-Z][^;\n]+`, to *at least* exclude matching `;` semicolons and newlines, and only match names at least 2 characters long.

Just drop the `DOTALL` flags altogether:

```
>>> presiding = re.compile(r'by\s*?([A-Z][^;]+),\s+?Presiding\s+?Judge;')
>>> judge2 = re.compile(r'Presiding\s+?Judge;\s+?([A-Z][^;]+),\s+?Judge;')
>>> judge3 = re.compile(r'([A-Z][^;]+), Judge\.')
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Dickinson, Emily',)
```

You can combine the three patterns into one:

```
judges = re.compile(
    r'(?:Considered\s+?and\s+?decided\s+?by\s+?)?'
    r'([A-Z][^;]+),\s+?(?:Presiding\s+?)?Judge[.;]'
)
```

which can find all the judges in your input in one go with `.findall()`:

```
>>> judges.findall(case_text)
['Hemingway', 'Bell', 'Dickinson, Emily']
```
Instead of multiple `re.search`, you could use [`re.findall`](https://docs.python.org/3.7/library/re.html#re.findall) with a really short and simple pattern to find all judges at once: ``` import re text = """Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n \n Dickinson, Emily, Judge.""" matches = re.findall(r"(\w+,)?\s(\w+),(\s+Presiding)?\s+Judge", text) print(matches) ``` Which prints: ``` [('', 'Hemingway', ' Presiding'), ('', 'Bell', ''), ('Dickinson,', 'Emily', '')] ``` All the raw information is there: first name, last name and "presiding attribute" (if Presiding Judge or not) of each judge. Afterwards, you can feed this raw information into a data structure which satisfies your needs, for example: ``` judges = [] for match in matches: if match[0]: first_name = match[1] last_name = match[0] else: first_name = "" last_name = match[1] presiding = "Presiding" in match[2] judges.append((first_name, last_name, presiding)) print(judges) ``` Which prints: ``` [('', 'Hemingway', True), ('', 'Bell', False), ('Emily', 'Dickinson,', False)] ``` As you can see, now you have a list of tuples, where the first element is the first name (if specified in the text), the second element is the last name and the third element is a `bool` whether the judge is the presiding judge or not. Obviously, the pattern works for your provided example. However, since `(\w+,)?\s(\w+),(\s+Presiding)?\s+Judge` is such a simple pattern, there are some edge cases to be aware of, where the pattern might return the wrong result: * Only one first name will be matched. A name like `Dickinson, Emily Mary` will result in `Mary` detected as the last name. * Last names like `de Broglie` will result in only `Broglie` matched, so `de` gets lost. * ... You will have to see if this fits your needs or provide more information to your question about your data.
53,972,642
I'm a lawyer and python beginner, so I'm both (a) dumb and (b) completely out of my lane. I'm trying to apply a regex pattern to a text file. The pattern can sometimes stretch across multiple lines. I'm specifically interested in these lines from the text file: ``` Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n \n Dickinson, Emily, Judge. ``` I'd like to individually hunt for, extract, and then print the judges' names. My code so far looks like this: ``` import re def judges(): presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL) judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL) judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL) with open("text.txt", "r") as case: for lines in case: presiding_match = re.search(presiding, lines) judge2_match = re.search(judge2, lines) judge3_match = re.search(judge3, lines) if presiding_match or judge2_match or judge3_match: print(presiding_match.group(1)) print(judge2_match.group(1)) print(judge3_match.group(1)) break ``` When I run it, I can get Hemingway and Bell, but then I get an "AttributeError: 'NoneType' object has no attribute 'group'" for the third judge after the two line breaks. After trial-and-error, I've found that my code is only reading the first line (until the "Bell, Judge; and") then quits. I thought the re.DOTALL would solve it, but I can't seem to make it work. I've tried a million ways to capture the line breaks and get the whole thing, including re.match, re.DOTALL, re.MULTILINE, "".join, "".join(lines.strip()), and anything else I can throw against the wall to make stick. After a couple days, I've bowed to asking for help. Thanks for anything you can do. (As an aside, I've had no luck getting the regex to work with the ^ and $ characters. It also seems to hate the . escape in the judge3 regex.)
2018/12/29
[ "https://Stackoverflow.com/questions/53972642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10823652/" ]
You are passing in **single lines**, because you are iterating over the open file referenced by `case`. The regex is never passed anything other than a single line of text. Your regexes can each match *some* of the lines, but they don't all together match the same single line. You'd have to read in more than one line. If the file is small enough, just read it as one string:

```
with open("text.txt", "r") as case:
    case_text = case.read()
```

then apply your regular expressions to that one string. Or, you could test each of the match objects individually, not as a group, and only print those that matched:

```
if presiding_match:
    print(presiding_match.group(1))
elif judge2_match:
    print(judge2_match.group(1))
elif judge3_match:
    print(judge3_match.group(1))
```

but then you'll have to create additional logic to determine when you are done reading from the file and break out of the loop. Note that the patterns you are matching are not broken across lines, so the `DOTALL` flag is not actually needed here. You do match `.*` text, so you are running the risk of matching *too much* if you use `DOTALL`:

```
>>> import re
>>> case_text = """Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and
...
... Dickinson, Emily, Judge.
... """
>>> presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL)
>>> judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL)
>>> judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL)
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n\nDickinson, Emily',)
```

I'd at least replace `[A-Z].*` with `[A-Z][^;\n]+`, to *at least* exclude matching `;` semicolons and newlines, and only match names at least 2 characters long.

Just drop the `DOTALL` flags altogether:

```
>>> presiding = re.compile(r'by\s*?([A-Z][^;]+),\s+?Presiding\s+?Judge;')
>>> judge2 = re.compile(r'Presiding\s+?Judge;\s+?([A-Z][^;]+),\s+?Judge;')
>>> judge3 = re.compile(r'([A-Z][^;]+), Judge\.')
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Dickinson, Emily',)
```

You can combine the three patterns into one:

```
judges = re.compile(
    r'(?:Considered\s+?and\s+?decided\s+?by\s+?)?'
    r'([A-Z][^;]+),\s+?(?:Presiding\s+?)?Judge[.;]'
)
```

which can find all the judges in your input in one go with `.findall()`:

```
>>> judges.findall(case_text)
['Hemingway', 'Bell', 'Dickinson, Emily']
```
Assuming you can read the file all at once (i.e. the file is not too big), you can extract judge information as follows:

```
import re

regex = re.compile(
    r'decided\s+by\s+(?P<presiding_judge>[A-Za-z]+)\s*,\s+Presiding\s+Judge;'
    r'\s+(?P<judge>[A-Za-z]+)\s*,\s+Judge;'
    r'\s+and\s+(?P<extra_judges>[A-Za-z,\s]+)\s*,\s+Judge\.?',
    re.DOTALL | re.MULTILINE
)

filename = 'text.txt'
with open(filename) as fd:
    data = fd.read()

for match in regex.finditer(data):
    print(match.groupdict())
```

with sample input text file (`text.txt`) looking [like this](https://paste.ubuntu.com/p/dcsKkQbcSn/), the output becomes:

```
{'judge': 'Bell', 'extra_judges': 'Dickinson, Emily', 'presiding_judge': 'Hemingway'}
{'judge': 'Abel', 'extra_judges': 'Lagrange, Gauss', 'presiding_judge': 'Einstein'}
{'judge': 'Dirichlet', 'extra_judges': 'Fourier, Cauchy', 'presiding_judge': 'Newton'}
```

You can also play with this at [regex101 site](https://regex101.com/r/P7IjiM/2/)
10,512,026
I'm new to python and am trying to read "blocks" of data from a file. The file is written something like:

```
# Some comment
# 4 cols of data --x,vx,vy,vz
# nsp, nskip = 2 10
# 0 0.0000000
# 1 4
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
# 2 4
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
# 500 0.99999422
# 1 4
0.5057E+03 0.7392E-03 -0.6891E-03 0.4700E-02
0.3777E+03 0.9129E-03 0.2653E-04 0.9641E-03
0.2497E+03 0.9131E-03 0.7970E-04 0.8173E-03
0.1217E+03 0.9131E-03 0.1378E-03 0.6586E-03
and so on
```

Now I want to be able to specify and read only one block of data out of these many blocks. I'm using `numpy.loadtxt('filename',comments='#')` to read the data but it loads the whole file in one go. I searched online and someone has created a patch for the numpy io routine to specify reading blocks but it's not in mainstream numpy.

It's much easier to choose blocks of data in gnuplot but I'd have to write the routine to plot the distribution functions. If I can figure out reading specific blocks, it would be much easier in python. Also, I'm moving all my visualization codes to python from IDL and gnuplot, so it'll be nice to have everything in python instead of having things scattered around in multiple packages.

I thought about calling gnuplot from within python, plotting a block to a table and assigning the output to some array in python. But I'm still starting and I could not figure out the syntax to do it. Any ideas, pointers to solve this problem would be of great help.
2012/05/09
[ "https://Stackoverflow.com/questions/10512026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325437/" ]
A quick basic read (note the `line.strip()` — a line read from a file always contains at least its trailing `'\n'`, so a bare `not line` would never detect an empty line):

```
>>> def read_blocks(input_file, i, j):
        empty_lines = 0
        blocks = []
        for line in open(input_file):
            # Check for empty/commented lines
            if not line.strip() or line.startswith('#'):
                # If 1st one: new block
                if empty_lines == 0:
                    blocks.append([])
                empty_lines += 1
            # Non empty line: add line in current(last) block
            else:
                empty_lines = 0
                blocks[-1].append(line)
        return blocks[i:j + 1]

>>> for block in read_blocks(s, 1, 2):
        print '-> block'
        for line in block:
            print line

-> block
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
-> block
0.5057E+03 0.7392E-03 -0.6891E-03 0.4700E-02
0.3777E+03 0.9129E-03 0.2653E-04 0.9641E-03
0.2497E+03 0.9131E-03 0.7970E-04 0.8173E-03
0.1217E+03 0.9131E-03 0.1378E-03 0.6586E-03
>>>
```

Now I guess you can use numpy to read the lines...
The following code should probably get you started. You will probably need the re module.

You can open the file for reading using:

```
f = open("file_name_here")
```

You can read the file one line at a time by using

```
line = f.readline()
```

To jump to the next line that starts with a "#", you can use:

```
while not line.startswith("#"):
    line = f.readline()
```

To parse a line that looks like "# i j", you could use the following regular expression:

```
is_match = re.match("#\s+(\d+)\s+(\d+)", line)
if is_match:
    i = is_match.group(1)
    j = is_match.group(2)
```

See the documentation for the "re" module for more information on this.

To parse a block, you could use the following bit of code:

```
block = []  # block[i][j] will contain element i,j in your block
while line and not line.isspace():  # read until next blank line (or end of file)
    block.append(map(float, line.split(" ")))  # splits each line at each space and turns all elements to float
    line = f.readline()
```

You can then turn your block into a numpy array if you want:

```
block = np.array(block)
```

Provided you have imported numpy as np.

If you want to read multiple blocks between i and j, just put the above code to read one block into a function and use it multiple times. Hope this helps!
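The blocks produced by either approach above are still lists of text lines. A stdlib-only sketch for turning one block into rows of floats (`block_to_floats` is a hypothetical helper name, not from the answers):

```python
def block_to_floats(block_lines):
    # Split each whitespace-separated line and parse every token as a float.
    return [[float(tok) for tok in line.split()] for line in block_lines]


rows = block_to_floats([
    "0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02",
    "0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03",
])
print(rows[0][0])  # 505.6
```

From here `np.array(rows)` gives the same result as the `map(float, ...)` line in the second answer, but works unchanged on Python 3, where `map` returns a lazy iterator.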
53,949,017
I'm trying to develop a lambda that has to work with S3 and DynamoDB. The thing is that, because I am not familiar with the AWS SDK for Go, I will have lots of tests and tries. Each time I change the code, I have to compile the project and upload it to AWS again. Is there any way to do it locally? Pass some kind of configuration that lets me call the services of AWS locally, from my computer?

Thanks!

*This has to do mostly with golang; other languages like python can run directly on the aws lambda function page, and node has `cloud9` support.*
2018/12/27
[ "https://Stackoverflow.com/questions/53949017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8885009/" ]
You can use the lambci docker image(s) to execute your code locally using the same Lambda runtimes that are used on AWS.

<https://github.com/lambci/docker-lambda>

You can also run DynamoDB locally in another container as well:

<https://hub.docker.com/r/amazon/dynamodb-local/>

To simulate credentials/roles that would be available on Lambda, just pass in your API creds via environment variables (for S3 access).

Cheers -JH
You could use this [aws-lambda-go-test](https://github.com/yogeshlonkar/aws-lambda-go-test) module, which can run a lambda locally and can be used to test the actual response from the lambda. Full disclosure: I forked and upgraded this module.
59,001,784
I am building a webscrape that will run over and over that will insert new data or update data based on ID. `if 'id' == 'id':`

My goal is to avoid duplicates. MySQL table is ready and built. What is the best Pythonic way to check your python list before inserting/updating it in MySQL DB using SQLAlchemy?

Below are my dependencies:

```
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
import requests
from bs4 import BeautifulSoup
from time import sleep
from datetime import datetime
import time

engine = create_engine("mysql+pymysql:///blah")
```

I use a function to assign each `<td>` from scraped data:

```
def functionscrape(**kwargs):
    scrape = {
        'id': '',
        'owner': '',
        'street': '',
        'city': '',
        'state': '',
    }
    scrape.update(kwargs)
    return scrape
```

The list below is an example, but would be changing constantly with each webscrape.

```
myList = [{
    'id': '111',
    'owner': 'Bob',
    'street': '1212 North',
    'city': 'Anywhere',
    'state': 'TX',
}, {
    'id': '222',
    'owner': 'Mary',
    'street': '333 South',
    'city': 'Overthere',
    'state': 'AZ',
}]
```
2019/11/22
[ "https://Stackoverflow.com/questions/59001784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11396478/" ]
I am using a helper function to create the dynamic sql update queries:

```
def construct_update(table_name, where_vals, update_vals):
    query = table_name.update()
    for k, v in where_vals.items():
        query = query.where(getattr(table_name.c, k) == v)
    return query.values(**update_vals)
```

basically you pass the function the table and 2 dictionaries. The first would just be `{'id': id}` in your case, and the second is all the values you want to update, like

```
{
    'owner': 'Bob',
    'street': '1212 North',
    'city': 'Anywhere',
    etc...
}
```

the helper function then returns the query which can be executed with

```
my_session = Session(engine)
my_session.execute(query)
```

Unfortunately, using this method, you'll have to update every single row individually (no bulk update) - but if you can live with that, this works fine. Otherwise, here's a similar post about bulk updates: [Bulk update in SQLAlchemy Core using WHERE](https://stackoverflow.com/questions/25694234/bulk-update-in-sqlalchemy-core-using-where)
You can try using the <https://marshmallow.readthedocs.io/en/stable/> library for validation. Build a `Schema` and define fields with the types you need. You can also use the `@pre_load` and `@post_load` decorators to manipulate your data.
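Independently of validation, duplicates inside a single scraped batch can be collapsed in plain Python before any SQL is issued — a minimal stdlib sketch (`dedupe_by_id` is a hypothetical helper, not part of SQLAlchemy):

```python
def dedupe_by_id(rows):
    # Keep one row per 'id'; a later duplicate overwrites an earlier one,
    # while the first-seen insertion order of ids is preserved (dicts keep
    # insertion order on Python 3.7+).
    seen = {}
    for row in rows:
        seen[row['id']] = row
    return list(seen.values())


batch = [
    {'id': '111', 'owner': 'Bob'},
    {'id': '222', 'owner': 'Mary'},
    {'id': '111', 'owner': 'Robert'},  # fresher scrape of the same id
]
print(dedupe_by_id(batch))  # [{'id': '111', 'owner': 'Robert'}, {'id': '222', 'owner': 'Mary'}]
```

The surviving rows can then be routed to an INSERT or to a per-id UPDATE depending on whether the id already exists in the table.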
2,105,508
I'm new to Cython and I'm trying to use Cython to wrap a C/C++ static library. I made a simple example as follows.

**Test.h:**

```
#ifndef TEST_H
#define TEST_H

int add(int a, int b);
int multipy(int a, int b);

#endif
```

**Test.cpp**

```
#include "test.h"

int add(int a, int b)
{
    return a+b;
}

int multipy(int a, int b)
{
    return a*b;
}
```

Then I used g++ to compile and build it.

```
g++ -c test.cpp -o libtest.o
ar rcs libtest.a libtest.o
```

So now I got a static library called `libtest.a`.

**Test.pyx:**

```
cdef extern from "test.h":
    int add(int a,int b)
    int multipy(int a,int b)

print add(2,3)
```

**Setup.py:**

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx"],
                         language='c++',
                         include_dirs=[r'.'],
                         library_dirs=[r'.'],
                         libraries=['libtest']
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```

Then I called:

```
python setup.py build_ext --compiler=mingw32 --inplace
```

The output was:

```
running build_ext
cythoning test.pyx to test.cpp
building 'test' extension
creating build
creating build\temp.win32-2.6
creating build\temp.win32-2.6\Release
C:\Program Files\pythonxy\mingw\bin\gcc.exe -mno-cygwin -mdll -O -Wall -I. -IC:\Python26\include -IC:\Python26\PC -c test.cpp -o build\temp.win32-2.6\Release\test.o
writing build\temp.win32-2.6\Release\test.def
C:\Program Files\pythonxy\mingw\bin\g++.exe -mno-cygwin -mdll -static --entry _DllMain@12 --output-lib build\temp.win32-2.6\Release\libtest.a --def build\temp.win32-2.6\Release\test.def -s build\temp.win32-2.6\Release\test.o -L. -LC:\Python26\libs -LC:\Python26\PCbuild -ltest -lpython26 -lmsvcr90 -o test.pyd
g++: build\temp.win32-2.6\Release\libtest.a: No such file or directory
error: command 'g++' failed with exit status 1
```

I also tried to use `libraries=['test']` instead of `libraries=['libtest']`. It gave me the same errors.

Any clue about this?
2010/01/20
[ "https://Stackoverflow.com/questions/2105508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150324/" ]
If your C++ code is only used by the wrapper, another option is to let the setup compile your .cpp file, like this:

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx", "test.cpp"],
                         language='c++',
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```

For linking to a static library you have to use the [extra\_objects](http://docs.python.org/distutils/apiref.html) argument in your `Extension`:

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx"],
                         language='c++',
                         extra_objects=["libtest.a"],
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```
I think you can fix this specific problem by specifying the right `library_dirs` (where you actually **put** libtest.a -- apparently it's not getting found), but I think then you'll have another problem -- your entry points are not properly declared as `extern "C"`, so the function's names will have been "mangled" by the C++ compiler (look at the names exported from your libtest.a and you'll see!), so any other language except C++ (including C, Cython, etc) will have problems getting at them. The fix is to declare them as `extern "C"`.
2,105,508
I'm new to Cython and I'm trying to use Cython to wrap a C/C++ static library. I made a simple example as follows.

**Test.h:**

```
#ifndef TEST_H
#define TEST_H

int add(int a, int b);
int multipy(int a, int b);

#endif
```

**Test.cpp**

```
#include "test.h"

int add(int a, int b)
{
    return a+b;
}

int multipy(int a, int b)
{
    return a*b;
}
```

Then I used g++ to compile and build it.

```
g++ -c test.cpp -o libtest.o
ar rcs libtest.a libtest.o
```

So now I got a static library called `libtest.a`.

**Test.pyx:**

```
cdef extern from "test.h":
    int add(int a,int b)
    int multipy(int a,int b)

print add(2,3)
```

**Setup.py:**

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx"],
                         language='c++',
                         include_dirs=[r'.'],
                         library_dirs=[r'.'],
                         libraries=['libtest']
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```

Then I called:

```
python setup.py build_ext --compiler=mingw32 --inplace
```

The output was:

```
running build_ext
cythoning test.pyx to test.cpp
building 'test' extension
creating build
creating build\temp.win32-2.6
creating build\temp.win32-2.6\Release
C:\Program Files\pythonxy\mingw\bin\gcc.exe -mno-cygwin -mdll -O -Wall -I. -IC:\Python26\include -IC:\Python26\PC -c test.cpp -o build\temp.win32-2.6\Release\test.o
writing build\temp.win32-2.6\Release\test.def
C:\Program Files\pythonxy\mingw\bin\g++.exe -mno-cygwin -mdll -static --entry _DllMain@12 --output-lib build\temp.win32-2.6\Release\libtest.a --def build\temp.win32-2.6\Release\test.def -s build\temp.win32-2.6\Release\test.o -L. -LC:\Python26\libs -LC:\Python26\PCbuild -ltest -lpython26 -lmsvcr90 -o test.pyd
g++: build\temp.win32-2.6\Release\libtest.a: No such file or directory
error: command 'g++' failed with exit status 1
```

I also tried to use `libraries=['test']` instead of `libraries=['libtest']`. It gave me the same errors.

Any clue about this?
2010/01/20
[ "https://Stackoverflow.com/questions/2105508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150324/" ]
Your `Test.pyx` file isn't doing what you expect. The `print add(2,3)` line *will not* call the `add()` C++ function; you have to explicitly create a wrapper function to do that. Cython doesn't create wrappers for you automatically.

Something like this is probably what you want:

```
cdef extern from "test.h":
    int _add "add"(int a,int b)
    int _multiply "multiply"(int a,int b)

def add(a, b):
    return _add(a, b)

def multiply(a, b):
    return _multiply(a, b)

print add(2, 3)
```

You can look at Cython's [documentation](http://docs.cython.org/src/tutorial/external.html) for more details.
I think you can fix this specific problem by specifying the right `library_dirs` (where you actually **put** libtest.a -- apparently it's not getting found), but I think then you'll have another problem -- your entry points are not properly declared as `extern "C"`, so the function's names will have been "mangled" by the C++ compiler (look at the names exported from your libtest.a and you'll see!), so any other language except C++ (including C, Cython, etc) will have problems getting at them. The fix is to declare them as `extern "C"`.
2,105,508
I'm new to Cython and I'm trying to use Cython to wrap a C/C++ static library. I made a simple example as follows.

**Test.h:**

```
#ifndef TEST_H
#define TEST_H

int add(int a, int b);
int multipy(int a, int b);

#endif
```

**Test.cpp**

```
#include "test.h"

int add(int a, int b)
{
    return a+b;
}

int multipy(int a, int b)
{
    return a*b;
}
```

Then I used g++ to compile and build it.

```
g++ -c test.cpp -o libtest.o
ar rcs libtest.a libtest.o
```

So now I got a static library called `libtest.a`.

**Test.pyx:**

```
cdef extern from "test.h":
    int add(int a,int b)
    int multipy(int a,int b)

print add(2,3)
```

**Setup.py:**

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx"],
                         language='c++',
                         include_dirs=[r'.'],
                         library_dirs=[r'.'],
                         libraries=['libtest']
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```

Then I called:

```
python setup.py build_ext --compiler=mingw32 --inplace
```

The output was:

```
running build_ext
cythoning test.pyx to test.cpp
building 'test' extension
creating build
creating build\temp.win32-2.6
creating build\temp.win32-2.6\Release
C:\Program Files\pythonxy\mingw\bin\gcc.exe -mno-cygwin -mdll -O -Wall -I. -IC:\Python26\include -IC:\Python26\PC -c test.cpp -o build\temp.win32-2.6\Release\test.o
writing build\temp.win32-2.6\Release\test.def
C:\Program Files\pythonxy\mingw\bin\g++.exe -mno-cygwin -mdll -static --entry _DllMain@12 --output-lib build\temp.win32-2.6\Release\libtest.a --def build\temp.win32-2.6\Release\test.def -s build\temp.win32-2.6\Release\test.o -L. -LC:\Python26\libs -LC:\Python26\PCbuild -ltest -lpython26 -lmsvcr90 -o test.pyd
g++: build\temp.win32-2.6\Release\libtest.a: No such file or directory
error: command 'g++' failed with exit status 1
```

I also tried to use `libraries=['test']` instead of `libraries=['libtest']`. It gave me the same errors.

Any clue about this?
2010/01/20
[ "https://Stackoverflow.com/questions/2105508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150324/" ]
If your C++ code is only used by the wrapper, another option is to let the setup compile your .cpp file, like this:

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx", "test.cpp"],
                         language='c++',
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```

For linking to a static library you have to use the [extra\_objects](http://docs.python.org/distutils/apiref.html) argument in your `Extension`:

```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("test", ["test.pyx"],
                         language='c++',
                         extra_objects=["libtest.a"],
                         )]

setup(
    name = 'test',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
```
Your `Test.pyx` file isn't doing what you expect. The `print add(2,3)` line *will not* call the `add()` C++ function; you have to explicitly create a wrapper function to do that. Cython doesn't create wrappers for you automatically.

Something like this is probably what you want:

```
cdef extern from "test.h":
    int _add "add"(int a,int b)
    int _multiply "multiply"(int a,int b)

def add(a, b):
    return _add(a, b)

def multiply(a, b):
    return _multiply(a, b)

print add(2, 3)
```

You can look at Cython's [documentation](http://docs.cython.org/src/tutorial/external.html) for more details.
74,179,391
I am responsible for a series of exercises for nonlinear optimization. I thought it would be cool to start with some examples of optimization problems and solve them with `pyomo` + some black box solvers. However, as the students learn more about optimization algorithms I wanted them to also implement some simple methods and test their implementation on the same examples.

I hoped there would be an "easy" way to add a custom solver to `pyomo`, however I cannot find any information about this. Basically that would allow the students to check their implementation by just changing a single line in their code and compare to a well tested solver. I would also try to implement a **simple** wrapper myself but I do not know anything about the `pyomo` internals.

**Q: Can I add my own solvers written in python to `pyomo`? Solver could have an interface like the ones of `scipy.optimize`.**

Ty for reading, Franz

---

Related:

* [Pyomo-Solver Communication](https://stackoverflow.com/questions/51631899/pyomo-solver-communication)
* [Call scipy.optimize inside pyomo](https://stackoverflow.com/questions/47821346/call-scipy-optimize-inside-pyomo)
2022/10/24
[ "https://Stackoverflow.com/questions/74179391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11785620/" ]
You said you're using jQuery, so there's no need for loops and `addEventListener`. All you need is to specify the displayed data on each link with a data attribute (like `data-text` in the snippet below), use the [hover](https://api.jquery.com/hover/) listener, then access the currently hovered element with the **`$(this)`** keyword and display its data, that's all.

See below snippet:

```js
const $firstul = $('#span_Lan');

$("#ul_box li").hover(function() {
  $firstul.html( $(this).find('a').data("text") )
})
```

```css
#ul_box li {
  border:1px solid black;
}
#ul_box li:hover {
  border-color:red;
}
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div>
  <ul>
    <li id="li_box"> <span id="span_Lan"></span></li>
  </ul>
  <ul id="ul_box">
    <li><a id="lnk1" data-text="111" class="">aaa</a></li>
    <li><a id="lnk2" data-text="222" class="">bbb</a></li>
    <li><a id="lnk3" data-text="333" class="">ccc</a></li>
    <li><a id="lnk4" data-text="444" class="">ddd</a></li>
  </ul>
</div>
```
There are a few issues with your code:

* you need to use `innerText` or `innerHTML` instead of `value`
* next, you need to pass the event into your mouseover handler and use the current target instead of `boxLi[i]`
* finally, move your ids to the li as that is what the mouseover is on

Also, this isn't jQuery.

```js
const firstul = document.getElementById('span_Lan');
const boxLi = document.getElementById('ul_box').children;

for (let i = 0; i < boxLi.length; i++) {
  boxLi[i].addEventListener('mouseover', e => {
    firstul.innerText = e.currentTarget.textContent;
    // not sure if you want this line - it's in your code but your question says nothing about having the letters in the first ul
    if (e.currentTarget.id == "lnk1") firstul.innerText += "111";
    else if (e.currentTarget.id == "lnk2") firstul.innerText += "222";
    else if (e.currentTarget.id == "lnk3") firstul.innerText += "333";
  })
}
```

```html
<div>
  <ul>
    <li id="li_box"> <span id="span_Lan">111</span></li>
  </ul>
  <ul id="ul_box">
    <li id="lnk1"><a class="">aaa</a></li>
    <li id="lnk2"><a class="">bbb</a></li>
    <li id="lnk3"><a class="">ccc</a></li>
  </ul>
</div>
```
74,179,391
I am responsible for a series of exercises for nonlinear optimization. I thought it would be cool to start with some examples of optimization problems and solve them with `pyomo` + some black box solvers. However, as the students learn more about optimization algorithms I wanted them to also implement some simple methods and test their implementation on the same examples.

I hoped there would be an "easy" way to add a custom solver to `pyomo`, however I cannot find any information about this. Basically that would allow the students to check their implementation by just changing a single line in their code and compare to a well tested solver. I would also try to implement a **simple** wrapper myself but I do not know anything about the `pyomo` internals.

**Q: Can I add my own solvers written in python to `pyomo`? Solver could have an interface like the ones of `scipy.optimize`.**

Ty for reading, Franz

---

Related:

* [Pyomo-Solver Communication](https://stackoverflow.com/questions/51631899/pyomo-solver-communication)
* [Call scipy.optimize inside pyomo](https://stackoverflow.com/questions/47821346/call-scipy-optimize-inside-pyomo)
2022/10/24
[ "https://Stackoverflow.com/questions/74179391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11785620/" ]
You said you're using jQuery, so there's no need for loops and `addEventListener`. All you need is to specify the displayed data on each link with a data attribute (like `data-text` in the snippet below), use the [hover](https://api.jquery.com/hover/) listener, then access the currently hovered element with the **`$(this)`** keyword and display its data, that's all.

See below snippet:

```js
const $firstul = $('#span_Lan');

$("#ul_box li").hover(function() {
  $firstul.html( $(this).find('a').data("text") )
})
```

```css
#ul_box li {
  border:1px solid black;
}
#ul_box li:hover {
  border-color:red;
}
```

```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div>
  <ul>
    <li id="li_box"> <span id="span_Lan"></span></li>
  </ul>
  <ul id="ul_box">
    <li><a id="lnk1" data-text="111" class="">aaa</a></li>
    <li><a id="lnk2" data-text="222" class="">bbb</a></li>
    <li><a id="lnk3" data-text="333" class="">ccc</a></li>
    <li><a id="lnk4" data-text="444" class="">ddd</a></li>
  </ul>
</div>
```
To make your code a bit more readable, you should use `querySelectorAll` to select the links. Then run over the elements with `forEach` to add an `eventListener` per element.

In the example below I have created a function called *handleMouseOver*. This function expects an id as a parameter, which is the id of the list item. The function then fires a switch statement to determine which text belongs to this ID. This text is then applied to your *span_Lan* element.

I also call the function once when initiating the script, to fill in the default value (namely 111).

```js
const firstul = document.getElementById('span_Lan');

document.querySelectorAll('#ul_box li').forEach(e => e.addEventListener('mouseover', (e) => handleMouseOver(e.target.id)));

function handleMouseOver(id) {
  let text;
  switch (id) {
    case "lnk1":
      text = "111"
      break;
    case "lnk2":
      text = "222"
      break;
    case "lnk3":
      text = "333"
      break;
    default:
      text = "111"
  }
  firstul.innerText = text;
}

handleMouseOver();
```

```html
<div>
  <ul>
    <li id="li_box"> <span id="span_Lan"></span></li>
  </ul>
  <ul id="ul_box">
    <li><a id="lnk1" class="">aaa</a></li>
    <li><a id="lnk2" class="">bbb</a></li>
    <li><a id="lnk3" class="">ccc</a></li>
  </ul>
</div>
```
18,874,387
Just a little question: is it possible to force a build in Buildbot via a python script or the command line (and not via the web interface)? Thank you!
2013/09/18
[ "https://Stackoverflow.com/questions/18874387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1330954/" ]
If you have a `PBChangeSource` configured in your master.cfg, you can send a change from the command line:

```
buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS} --who {USER} {FILENAMES..}
```
You can make a python script using the urllib2 or requests library to simulate a POST to the web UI:

```
import urllib2
import urllib
import cookielib
import uuid
import unittest
import sys
from StringIO import StringIO


class ForceBuildApi():
    MAX_RETRY = 3

    def __init__(self, server):
        self.server = server
        cookiejar = cookielib.CookieJar()
        self.urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))

    def login(self, user, passwd):
        data = urllib.urlencode(dict(username=user, passwd=passwd))
        url = self.server + "login"
        request = urllib2.Request(url, data)
        res = self.urlOpener.open(request).read()
        if res.find("The username or password you entered were not correct") > 0:
            raise Exception("invalid password")

    def force_build(self, builder, reason, **kw):
        """Create a buildbot build request

        several attempts are created in case of errors
        """
        reason = reason + " ID=" + str(uuid.uuid1())
        kw['reason'] = reason
        data_str = urllib.urlencode(kw)
        url = "%s/builders/%s/force" % (self.server, builder)
        print url
        request = urllib2.Request(url, data_str)
        file_desc = None
        for i in xrange(self.MAX_RETRY):
            try:
                file_desc = self.urlOpener.open(request)
                break
            except Exception as e:
                print >>sys.stderr, "error when doing force build", e
        if file_desc is None:
            print >>sys.stderr, "too many errors, giving up"
            return None
        for line in file_desc:
            if 'alert' in line:
                print >>sys.stderr, "invalid arguments", url, data_str
                return None
            if 'Authorization Failed' in line:
                print >>sys.stderr, "Authorization Failed"
                return
        return reason


class ForceBuildApiTest(unittest.TestCase):
    def setUp(self):
        from mock import Mock  # pip install mock for test
        self.api = ForceBuildApi("server/")
        self.api.urlOpener = Mock()
        urllib2.Request = Mock()
        uuid.uuid1 = Mock()
        uuid.uuid1.return_value = "myuuid"
        sys.stderr = StringIO()

    def test_login(self):
        from mock import call
        self.api.login("log", "pass")
        self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
        req = urllib2.Request.call_args_list
        self.assertEquals([call('server/login', 'passwd=pass&username=log')], req)

    def test_force(self):
        from mock import call
        self.api.urlOpener.open.return_value = ["blabla"]
        r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
        self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
        req = urllib2.Request.call_args_list
        self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid&param2=bar&param1=foo')], req)
        self.assertEquals(r, "reason ID=myuuid")

    def test_force_fail1(self):
        from mock import call
        self.api.urlOpener.open.return_value = ["alert bla"]
        r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
        self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
        req = urllib2.Request.call_args_list
        self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid&param2=bar&param1=foo')], req)
        self.assertEquals(sys.stderr.getvalue(), "invalid arguments server//builders/builder1/force reason=reason+ID%3Dmyuuid&param2=bar&param1=foo\n")
        self.assertEquals(r, None)

    def test_force_fail2(self):
        from mock import call
        def raise_exception(*a, **kw):
            raise Exception("oups")
        self.api.urlOpener.open = raise_exception
        r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
        req = urllib2.Request.call_args_list
        self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid&param2=bar&param1=foo')], req)
        self.assertEquals(sys.stderr.getvalue(), "error when doing force build oups\n"*3 + "too many errors, giving up\n")
        self.assertEquals(r, None)

    def test_force_fail3(self):
        from mock import call
        self.api.urlOpener.open.return_value = ["bla", "blu", "Authorization Failed"]
        r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
        req = urllib2.Request.call_args_list
        self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid&param2=bar&param1=foo')], req)
        self.assertEquals(sys.stderr.getvalue(), "Authorization Failed\n")
        self.assertEquals(r, None)


if __name__ == '__main__':
    unittest.main()
```
60,913,598
I am trying to train EfficientNetB1 on Google Colab and constantly running into different issues with correct import statements from Keras or Tensorflow.Keras. Currently this is how my imports look like:

```
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers.pooling import AveragePooling2D
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import cv2
import os
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import efficientnet.keras as enet
from tensorflow.keras.layers import Dense, Dropout, Activation, BatchNormalization, Flatten, Input
```

and this is how my model looks like:

```
# load the ResNet-50 network, ensuring the head FC layer sets are left
# off
baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')

# Adding 2 fully-connected layers to B0.
x = baseModel.output
x = BatchNormalization()(x)
x = Dropout(0.7)(x)

x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(0.5)(x)

x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)

# Output layer
predictions = Dense(len(lb.classes_), activation="softmax")(x)

model = Model(inputs = baseModel.input, outputs = predictions)

# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
    layer.trainable = False
```

But for the life of me I can't figure out why I am getting the below error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-19-269fe6fc6f99> in <module>()
----> 1 baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')
      2
      3 # Adding 2 fully-connected layers to B0.
      4 x = baseModel.output
      5 x = BatchNormalization()(x)

5 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in _collect_previous_mask(input_tensors)
   1439             inbound_layer, node_index, tensor_index = x._keras_history
   1440             node = inbound_layer._inbound_nodes[node_index]
-> 1441             mask = node.output_masks[tensor_index]
   1442             masks.append(mask)
   1443         else:

AttributeError: 'Node' object has no attribute 'output_masks'
```
2020/03/29
[ "https://Stackoverflow.com/questions/60913598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9201770/" ]
The problem is the way you import the efficientnet. You import it from the `Keras` package and not from the `TensorFlow.Keras` package. Change your efficientnet import to ``` import efficientnet.tfkeras as enet ```
Not sure, but this error may be caused by a wrong TF version. Google Colab for now comes with TF 1.x by default. Try this to change the TF version and see if it resolves the issue. ``` try: %tensorflow_version 2.x except: print("Failed to load") ```
43,109,167
Below is my data set ``` Date Time 2015-05-13 23:53:00 ``` I want to convert date and time into floats as separate columns in a python script. The output should be like date as `20150513` and time as `235300`
2017/03/30
[ "https://Stackoverflow.com/questions/43109167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7521618/" ]
If all you need is to strip the hyphens and colons, [*str.replace()*](https://docs.python.org/2.7/library/stdtypes.html#str.replace) should do the job: ``` >>> s = '2015-05-13 23:53:00' >>> s.replace('-', '').replace(':', '') '20150513 235300' ``` For more sophisticated reformatting, parse the input with [*time.strptime()*](https://docs.python.org/2.7/library/time.html#time.strptime) and then reformat with [*time.strftime()*](https://docs.python.org/2.7/library/time.html#time.strftime): ``` >>> import time >>> t = time.strptime('2015-05-13 23:53:00', '%Y-%m-%d %H:%M:%S') >>> time.strftime('%Y%m%d %H%M%S', t) '20150513 235300' ```
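For the asked-for output (two separate float columns), the strip-and-split approach above can be wrapped into a small helper. This is just a sketch; the function name is for illustration only:

```python
def to_float_columns(dt):
    """Split 'YYYY-MM-DD HH:MM:SS' into two floats, e.g. 20150513.0 and 235300.0."""
    date_part, time_part = dt.split()
    # Strip the separators, then convert each half to a float:
    return float(date_part.replace('-', '')), float(time_part.replace(':', ''))

date_f, time_f = to_float_columns('2015-05-13 23:53:00')
print(date_f, time_f)  # 20150513.0 235300.0
```

If the data lives in a pandas DataFrame instead, the same replace-then-cast idea applies column-wise.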
If you have a datetime, you can use [strftime()](http://strftime.org/) ``` your_time.strftime('%Y%m%d.%H%M%S') ``` And if your variables are strings, you can use replace() ``` dt = '2015-05-13 23:53:00' date = dt.split()[0].replace('-','') time = dt.split()[1].replace(':','') fl = float(date + '.' + time) ```
43,109,167
Below is my data set ``` Date Time 2015-05-13 23:53:00 ``` I want to convert date and time into floats as separate columns in a python script. The output should be like date as `20150513` and time as `235300`
2017/03/30
[ "https://Stackoverflow.com/questions/43109167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7521618/" ]
If all you need is to strip the hyphens and colons, [*str.replace()*](https://docs.python.org/2.7/library/stdtypes.html#str.replace) should do the job: ``` >>> s = '2015-05-13 23:53:00' >>> s.replace('-', '').replace(':', '') '20150513 235300' ``` For more sophisticated reformatting, parse the input with [*time.strptime()*](https://docs.python.org/2.7/library/time.html#time.strptime) and then reformat with [*time.strftime()*](https://docs.python.org/2.7/library/time.html#time.strftime): ``` >>> import time >>> t = time.strptime('2015-05-13 23:53:00', '%Y-%m-%d %H:%M:%S') >>> time.strftime('%Y%m%d %H%M%S', t) '20150513 235300' ```
``` date = "2015-05-13".replace("-", "") time = "10:58:56".replace(":", "") ```
43,109,167
Below is my data set ``` Date Time 2015-05-13 23:53:00 ``` I want to convert date and time into floats as separate columns in a python script. The output should be like date as `20150513` and time as `235300`
2017/03/30
[ "https://Stackoverflow.com/questions/43109167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7521618/" ]
If you have a datetime, you can use [strftime()](http://strftime.org/) ``` your_time.strftime('%Y%m%d.%H%M%S') ``` And if your variables are strings, you can use replace() ``` dt = '2015-05-13 23:53:00' date = dt.split()[0].replace('-','') time = dt.split()[1].replace(':','') fl = float(date + '.' + time) ```
``` date = "2015-05-13".replace("-", "") time = "10:58:56".replace(":", "") ```
31,244,525
In python, one can assign values to some of the keywords that are already predefined in python, unlike other languages. Why? For example: ``` > range = 5 > range > 5 ``` But for ``` > def = 5 File "<stdin>", line 1 def = 5 ^ SyntaxError: invalid syntax ``` One possible hypothesis is - Lazy coders with unique parsing rules. For those new to python, yeah, this actually works, for keywords like True, False, range, len, and so on. I wrote a compiler for python in college and, if I remember correctly, the keywords list did not have them.
2015/07/06
[ "https://Stackoverflow.com/questions/31244525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3302133/" ]
While `range` is nothing but a built-in function, `def` is a keyword. (Most IDEs should indicate the difference with appropriate colors.) Functions - whether built-in or not - can be redefined. And they don't have to remain functions, but can become integers like `range` in your example. But you can never redefine keywords. If you wish, you can print the list of all Python keywords with the following lines of code (borrowed from [here](https://stackoverflow.com/a/14595949/3419103)): ``` import keyword for keyword in keyword.kwlist: print keyword ``` Output: ``` and as assert break class continue def del elif else except exec finally for from global if import in is lambda not or pass print raise return try while with yield ``` And for Python 3 (notice the absence of `print`): ``` False None True and as assert break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield ``` In contrast, the built-in functions can be found here: <https://docs.python.org/2/library/functions.html>
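The distinction above can be checked directly with the standard `keyword` module. A small sketch (Python 3 syntax):

```python
import keyword

# "range" is a built-in function, not a keyword, so the name can be rebound:
assert not keyword.iskeyword("range")
range = 5            # shadows the built-in, as in the question
assert range == 5
del range            # remove the shadow; the built-in becomes visible again
assert list(range(3)) == [0, 1, 2]

# "def" IS a keyword, so using it as a name is rejected by the parser:
assert keyword.iskeyword("def")
# def = 5  -> SyntaxError, exactly as in the question
```

`keyword.iskeyword()` reflects the parser's keyword table: keywords are rejected at parse time, before any assignment could even be attempted.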
'range' is not a keyword but a built-in function, so you can rebind it, just like other names such as sum, max... On the other hand, the keyword 'def' expects a defined structure in order to create a function. ``` def <functionName>(args): ```
31,244,525
In python, one can assign values to some of the keywords that are already predefined in python, unlike other languages. Why? For example: ``` > range = 5 > range > 5 ``` But for ``` > def = 5 File "<stdin>", line 1 def = 5 ^ SyntaxError: invalid syntax ``` One possible hypothesis is - Lazy coders with unique parsing rules. For those new to python, yeah, this actually works, for keywords like True, False, range, len, and so on. I wrote a compiler for python in college and, if I remember correctly, the keywords list did not have them.
2015/07/06
[ "https://Stackoverflow.com/questions/31244525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3302133/" ]
While `range` is nothing but a built-in function, `def` is a keyword. (Most IDEs should indicate the difference with appropriate colors.) Functions - whether built-in or not - can be redefined. And they don't have to remain functions, but can become integers like `range` in your example. But you can never redefine keywords. If you wish, you can print the list of all Python keywords with the following lines of code (borrowed from [here](https://stackoverflow.com/a/14595949/3419103)): ``` import keyword for keyword in keyword.kwlist: print keyword ``` Output: ``` and as assert break class continue def del elif else except exec finally for from global if import in is lambda not or pass print raise return try while with yield ``` And for Python 3 (notice the absence of `print`): ``` False None True and as assert break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield ``` In contrast, the built-in functions can be found here: <https://docs.python.org/2/library/functions.html>
You are confused between keywords and built-in functions. `def` is a keyword, but `range` and `len` are simply built-in functions. Any function can always be overridden, but a keyword cannot. The full list of keywords can be found in `keyword.kwlist`.
69,023,789
I'm working on automating changing image colors using python. The image I'm using is below, i'd love to move it from red to another range of colors, say green, keeping the detail and shading if possible. I've been able to convert *some* of the image to a solid color, losing all detail. [![Blue instead of red.](https://i.stack.imgur.com/09uYC.jpg)](https://i.stack.imgur.com/09uYC.jpg) The code I'm currently using is below, I can't quite figure out the correct range of red to make it work correctly, and also it only converts to a single color, again losing all detail and shade. Any help is appreciated, thank you. ```python import cv2 import numpy as np import skimage.exposure # load image and get dimensions img = cv2.imread("test5.jpg") # convert to hsv hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV) ## mask of upper red (170,50,50) ~ (180,255,255) ## mask of lower red (0,50,50) ~ (10,255,255) # threshold using inRange range1 = (0,50,50) range2 = (1,255,255) mask = cv2.inRange(hsv,range1,range2) mask = 255 - mask # apply morphology opening to mask kernel = np.ones((3,3), np.uint8) mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # antialias mask mask = cv2.GaussianBlur(mask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT) mask = skimage.exposure.rescale_intensity(mask, in_range=(127.5,255), out_range=(0,255)) result = img.copy() result[mask==0] = (255,255,255) # write result to disk cv2.imwrite("test6.jpg", result) ```
2021/09/02
[ "https://Stackoverflow.com/questions/69023789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13919173/" ]
This is one way to approach the problem in Python/OpenCV. But for red, it is very hard to do because red spans 0 hue, which also is the hue for gray and white and black, which you have in your image. The other issue you have is that skin tones have red shades, so you cannot pick too large of ranges for your colors. Also when dealing with red ranges, you need two sets, one for hues up to 180 and another for hues above 0. Input: [![enter image description here](https://i.stack.imgur.com/fgW2w.jpg)](https://i.stack.imgur.com/fgW2w.jpg) ``` import cv2 import numpy as np # load image img = cv2.imread('red_clothes.jpg') # convert to HSV hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv) blue_hue = 120 red_hue = 0 # diff hue (blue_hue - red_hue) diff_hue = blue_hue - red_hue # create mask for red color in hsv lower1 = (150,150,150) upper1 = (180,255,255) mask1 = cv2.inRange(hsv, lower1, upper1) lower2 = (0,150,150) upper2 = (30,255,255) mask2 = cv2.inRange(hsv, lower2, upper2) mask = cv2.add(mask1,mask2) mask = cv2.merge([mask,mask,mask]) # apply morphology to clean mask kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9)) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # modify hue channel by adding difference and modulo 180 hnew = np.mod(h + diff_hue, 180).astype(np.uint8) # recombine channels hsv_new = cv2.merge([hnew,s,v]) # convert back to bgr bgr_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR) # blend with original using mask result = np.where(mask==(255, 255, 255), bgr_new, img) # save output cv2.imwrite('red_clothes_mask.png', mask) cv2.imwrite('red_clothes_hue_shift.png', bgr_new) cv2.imwrite('red_clothes_red2blue.png', result) # Display various images to see the steps cv2.imshow('mask1',mask1) cv2.imshow('mask2',mask2) cv2.imshow('mask',mask) cv2.imshow('bgr_new',bgr_new) cv2.imshow('result',result) cv2.waitKey(0) cv2.destroyAllWindows() ``` Mask: [![enter image description
here](https://i.stack.imgur.com/sXkS4.png)](https://i.stack.imgur.com/sXkS4.png) Hue Shifted Image: [![enter image description here](https://i.stack.imgur.com/oQUBP.png)](https://i.stack.imgur.com/oQUBP.png) Blend between Input and Hue Shifted Image using Mask to blend: [![enter image description here](https://i.stack.imgur.com/zliQd.png)](https://i.stack.imgur.com/zliQd.png) So the result is speckled because of the black mixed with the red and from limited ranges due to skin color.
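The heart of the recolouring above is the hue arithmetic: add the hue difference and wrap modulo 180, since OpenCV stores hue as 0-179 for 8-bit images. A dependency-free sketch of just that step, with toy hue values (aside: on a real `uint8` hue channel the addition should be done in a wider integer type before the modulo, since sums above 255 would otherwise wrap at 256 first):

```python
blue_hue, red_hue = 120, 0
diff_hue = blue_hue - red_hue  # 120

# Toy hue samples: two "red" pixels (175 and 5) and one cyan-ish pixel (90).
h = [175, 5, 90]
# Shift every hue by diff_hue and wrap into the 0-179 range:
hnew = [(v + diff_hue) % 180 for v in h]
print(hnew)  # [115, 125, 30]
```

Both red hues (near 180 and near 0) land close together after the shift, which is why the two-mask trick above works at all.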
You can start with red, but the trick is to invert the image so red is now at hue 90 in OpenCV range and for example blue is at hue 30. So in Python/OpenCV, you can do the following: Input: [![enter image description here](https://i.stack.imgur.com/sFW5v.jpg)](https://i.stack.imgur.com/sFW5v.jpg) ``` import cv2 import numpy as np # load image img = cv2.imread('red_clothes.jpg') # invert image imginv = 255 - img # convert to HSV hsv = cv2.cvtColor(imginv, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv) blueinv_hue = 30 #(=120+180/2=210-180=30) redinv_hue = 90 #(=0+180/2=90) # diff hue (blue_hue - red_hue) diff_hue = blueinv_hue - redinv_hue # create mask for redinv color in hsv lower = (80,150,150) upper = (100,255,255) mask = cv2.inRange(hsv, lower, upper) mask = cv2.merge([mask,mask,mask]) # apply morphology to clean mask kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9)) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # modify hue channel by adding difference and modulo 180 hnew = np.mod(h + diff_hue, 180).astype(np.uint8) # recombine channels hsv_new = cv2.merge([hnew,s,v]) # convert back to bgr bgrinv_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR) # invert bgr_new = 255 -bgrinv_new # blend with original using mask result = np.where(mask==(255, 255, 255), bgr_new, img) # save output cv2.imwrite('red_clothes_mask.png', mask) cv2.imwrite('red_clothes_hue_shift.png', bgr_new) cv2.imwrite('red_clothes_red2blue.png', result) # Display various images to see the steps cv2.imshow('mask',mask) cv2.imshow('bgr_new',bgr_new) cv2.imshow('result',result) cv2.waitKey(0) cv2.destroyAllWindows() ``` Mask: [![enter image description here](https://i.stack.imgur.com/oeZGH.png)](https://i.stack.imgur.com/oeZGH.png) Red to Blue before masking: [![enter image description here](https://i.stack.imgur.com/IQOZM.png)](https://i.stack.imgur.com/IQOZM.png) Red to Blue after masking: [![enter image description 
here](https://i.stack.imgur.com/9GNi3.png)](https://i.stack.imgur.com/9GNi3.png) However, one is still limited by the fact that red is close to skin tones, so the range for red is limited.
69,023,789
I'm working on automating changing image colors using python. The image I'm using is below, i'd love to move it from red to another range of colors, say green, keeping the detail and shading if possible. I've been able to convert *some* of the image to a solid color, losing all detail. [![Blue instead of red.](https://i.stack.imgur.com/09uYC.jpg)](https://i.stack.imgur.com/09uYC.jpg) The code I'm currently using is below, I can't quite figure out the correct range of red to make it work correctly, and also it only converts to a single color, again losing all detail and shade. Any help is appreciated, thank you. ```python import cv2 import numpy as np import skimage.exposure # load image and get dimensions img = cv2.imread("test5.jpg") # convert to hsv hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV) ## mask of upper red (170,50,50) ~ (180,255,255) ## mask of lower red (0,50,50) ~ (10,255,255) # threshold using inRange range1 = (0,50,50) range2 = (1,255,255) mask = cv2.inRange(hsv,range1,range2) mask = 255 - mask # apply morphology opening to mask kernel = np.ones((3,3), np.uint8) mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # antialias mask mask = cv2.GaussianBlur(mask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT) mask = skimage.exposure.rescale_intensity(mask, in_range=(127.5,255), out_range=(0,255)) result = img.copy() result[mask==0] = (255,255,255) # write result to disk cv2.imwrite("test6.jpg", result) ```
2021/09/02
[ "https://Stackoverflow.com/questions/69023789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13919173/" ]
This is one way to approach the problem in Python/OpenCV. But for red, it is very hard to do because red spans 0 hue, which also is the hue for gray and white and black, which you have in your image. The other issue you have is that skin tones have red shades, so you cannot pick too large of ranges for your colors. Also when dealing with red ranges, you need two sets, one for hues up to 180 and another for hues above 0. Input: [![enter image description here](https://i.stack.imgur.com/fgW2w.jpg)](https://i.stack.imgur.com/fgW2w.jpg) ``` import cv2 import numpy as np # load image img = cv2.imread('red_clothes.jpg') # convert to HSV hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv) blue_hue = 120 red_hue = 0 # diff hue (blue_hue - red_hue) diff_hue = blue_hue - red_hue # create mask for red color in hsv lower1 = (150,150,150) upper1 = (180,255,255) mask1 = cv2.inRange(hsv, lower1, upper1) lower2 = (0,150,150) upper2 = (30,255,255) mask2 = cv2.inRange(hsv, lower2, upper2) mask = cv2.add(mask1,mask2) mask = cv2.merge([mask,mask,mask]) # apply morphology to clean mask kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9)) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # modify hue channel by adding difference and modulo 180 hnew = np.mod(h + diff_hue, 180).astype(np.uint8) # recombine channels hsv_new = cv2.merge([hnew,s,v]) # convert back to bgr bgr_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR) # blend with original using mask result = np.where(mask==(255, 255, 255), bgr_new, img) # save output cv2.imwrite('red_clothes_mask.png', mask) cv2.imwrite('red_clothes_hue_shift.png', bgr_new) cv2.imwrite('red_clothes_red2blue.png', result) # Display various images to see the steps cv2.imshow('mask1',mask1) cv2.imshow('mask2',mask2) cv2.imshow('mask',mask) cv2.imshow('bgr_new',bgr_new) cv2.imshow('result',result) cv2.waitKey(0) cv2.destroyAllWindows() ``` Mask: [![enter image description
here](https://i.stack.imgur.com/sXkS4.png)](https://i.stack.imgur.com/sXkS4.png) Hue Shifted Image: [![enter image description here](https://i.stack.imgur.com/oQUBP.png)](https://i.stack.imgur.com/oQUBP.png) Blend between Input and Hue Shifted Image using Mask to blend: [![enter image description here](https://i.stack.imgur.com/zliQd.png)](https://i.stack.imgur.com/zliQd.png) So the result is speckled because of the black mixed with the red and from limited ranges due to skin color.
Starting with a blue image rather than red allows one to use an expanded range for inRange() and do a better job in Python/OpenCV. Here is a change from blue to red. Input: [![enter image description here](https://i.stack.imgur.com/f4iX0.jpg)](https://i.stack.imgur.com/f4iX0.jpg) ``` import cv2 import numpy as np # load image img = cv2.imread('blue_clothes.jpg') # convert to HSV hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h,s,v = cv2.split(hsv) red_hue = 0 blue_hue = 120 # diff hue (red_hue - blue_hue) diff_hue = red_hue - blue_hue # create mask for blue color in hsv lower = (100,90,90) upper = (140,255,255) mask = cv2.inRange(hsv, lower, upper) mask = cv2.merge([mask,mask,mask]) # apply morphology to clean mask kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9)) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # modify hue channel by adding difference and modulo 180 hnew = np.mod(h + diff_hue, 180).astype(np.uint8) # recombine channels hsv_new = cv2.merge([hnew,s,v]) # convert back to bgr bgr_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR) # blend with original using mask result = np.where(mask==(255, 255, 255), bgr_new, img) # save output cv2.imwrite('blue_clothes_mask.png', mask) cv2.imwrite('blue_clothes_hue_shift.png', bgr_new) cv2.imwrite('blue_clothes_blue2red.png', result) # Display various images to see the steps cv2.imshow('mask',mask) cv2.imshow('bgr_new',bgr_new) cv2.imshow('result',result) cv2.waitKey(0) cv2.destroyAllWindows() ``` Mask: [![enter image description here](https://i.stack.imgur.com/sYKeI.png)](https://i.stack.imgur.com/sYKeI.png) Blue to Red before masking: [![enter image description here](https://i.stack.imgur.com/CXbtt.png)](https://i.stack.imgur.com/CXbtt.png) Blue to Red after masking: [![enter image description here](https://i.stack.imgur.com/L7s1m.png)](https://i.stack.imgur.com/L7s1m.png)
41,480,055
Using Python 2.7 and Django 1.10.4, I was trying to deploy my app to pythonanywhere, but I keep getting this error. [![enter image description here](https://i.stack.imgur.com/1FGB1.png)](https://i.stack.imgur.com/1FGB1.png) **Error Log** [![enter image description here](https://i.stack.imgur.com/bEiKL.png)](https://i.stack.imgur.com/bEiKL.png) **wsgi.py** ``` import os import sys path = '/home/hellcracker/First-Blog' if path not in sys.path: sys.path.append(path) os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings' from django.core.wsgi import get_wsgi_application from django.contrib.staticfiles.handlers import StaticFilesHandler application = StaticFilesHandler(get_wsgi_application()) ``` I can't tell where the error is coming from. Any help would be appreciated!
2017/01/05
[ "https://Stackoverflow.com/questions/41480055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7248057/" ]
First of all, check the link give in the error log - <https://help.pythonanywhere.com/pages/DebuggingImportError/> You could also search for 'from django core wsgi no module named wsgi'. There are many answers already, and I think you should be able to find the answer to your problem there.
Make sure that your project name is "mysite"; if not, update this line ``` os.environ['DJANGO_SETTINGS_MODULE'] = '<your-project-name>.settings' ``` The project name is the name of the parent directory of your app; check it on your local machine.
32,814,489
I've read numerous tutorials and stackex questions/answers, but apparently my questions is too specific and my knowledge too limited to piece together a solution. **[Edit]** *My confusion was mostly due to the fact that my project required both a shell script and a makefile to run a simple python program. I was not sure why that was necessary, as it seemed like such a roundabout way to do things. It looks like the makefile and script are likely just there to make the autograder happy, as the kind respondents below have mentioned. So, I guess this is something best clarified with the prof, then. I really appreciate the answers--thanks very much for the help!* Basically, what I want to do is to run `program.py` (my source code) via `program.sh` (shell script), so that when I type the following into the command line ``` ./program.sh arg1 ``` it runs `program.py` and passes `arg1` into the program as though I had manually typed the following into the command line ``` $ python program.py arg1 ``` I also need to automatically set `program.sh` to executable, so that `$ chmod +x program.sh` doesn't have to be typed beforehand. I've read the solution presented [here](https://stackoverflow.com/questions/8073561/how-to-make-an-executable-to-use-in-a-shell-python), which was very helpful, but this seems to require that the file be executed with the `.py` extension, whereas my particular application requires a `.sh` extension, so as to be run as desired above. Another reason as to why I'd like to run a `.sh` file is because I also need to somehow include a makefile to run the program, along with the script (so I'm assuming I'd have to include a `make` command in the script?). 
Honestly, I am not sure why the makefile is necessary for python programs, but we were instructed by the prof that the makefile would simply be more convenient for the grading scripts, and that we should simply write the following for the makefile's contents: ``` all: /bin/true ``` Thanks so much in advance for any help in this matter!
2015/09/28
[ "https://Stackoverflow.com/questions/32814489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4876803/" ]
To pass data between two otherwise unconnected View Controllers you'll need to use: ``` presentingViewController!.dismissViewControllerAnimated(true, completion: nil) ``` and transmit your data via viewWillDisappear like this: ``` override func viewWillDisappear(animated: Bool) { super.viewWillDisappear(animated) if self.isBeingDismissed() { self.delegate?.acceptData(textFieldOutlet.text) } } ``` I've posted [a tutorial](https://www.codebeaulieu.com/36/Passing-data-with-the-protocol-delegate-pattern), that includes a working project file that you can download and inspect. --- Heres an example of the pattern in context. ViewController 2: ----------------- ``` // place the protocol in the view controller that is being presented protocol PresentedViewControllerDelegate { func acceptData(data: AnyObject!) } class PresentedViewController: UIViewController { // create a variable that will recieve / send messages // between the view controllers. var delegate : PresentedViewControllerDelegate? // another data outlet var data : AnyObject? @IBOutlet weak var textFieldOutlet: UITextField! @IBAction func doDismiss(sender: AnyObject) { if textFieldOutlet.text != "" { self.presentingViewController!.dismissViewControllerAnimated(true, completion: nil) } } override func viewDidLoad() { super.viewDidLoad() print("\(data!)") // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } override func viewWillDisappear(animated: Bool) { super.viewWillDisappear(animated) if self.isBeingDismissed() { self.delegate?.acceptData(textFieldOutlet.text) } } } ``` ViewController 1: ----------------- ``` class ViewController: UIViewController, PresentedViewControllerDelegate { @IBOutlet weak var textOutlet: UILabel! @IBAction func doPresent(sender: AnyObject) { let pvc = storyboard?.instantiateViewControllerWithIdentifier("PresentedViewController") as! 
PresentedViewController pvc.data = "important data sent via delegate!" pvc.delegate = self self.presentViewController(pvc, animated: true, completion: nil) } override func viewDidLoad() { super.viewDidLoad() } func acceptData(data: AnyObject!) { self.textOutlet.text = "\(data!)" } } ```
I have found an answer: using global variables. Here is what I did: In my first view controller (the view controller that is sending the string), I made a global variable above the class definition, like this: ``` import UIKit var chosenClass = String() class EntryViewController: UIViewController, UITableViewDelegate, UITableViewDataSource { // other code here that isn't relevant to the topic at hand } ``` then, in the same view controller, when a table cell was selected, I did this: ``` override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) { var row = self.tableView.indexPathForSelectedRow()?.row chosenClass = array2[row!] } ``` where ``` array2[row!] ``` is the string that I am wanting to pass. in the third view controller, I made another string local variable, string to receive : ``` import UIKit import Parse import ParseUI import Foundation class FirstTableViewController: PFQueryTableViewController { @IBOutlet weak var navigationBarTop: UINavigationItem! var stringToRecieve = String() } ``` In the viewDidLoad of the third view controller, I simply put: ``` stringToRecieve = chosenClass ``` and that is it. No additional code was needed for the second view controller, where the container view is.
59,302,243
I'm using this code with python and opencv for displaying about 100 images. But the `imshow` function throws an error. Here is my code: ```py nn=[] for j in range (187) : nn.append(j+63) images =[] for i in nn: path = "02291G0AR\\" n1=cv2.imread(path +"Bi000{}".format(i)) images.append(n1) cv2.imshow(images) ``` And here is the error: ``` imshow() missing required argument 'mat' (pos 2) ```
2019/12/12
[ "https://Stackoverflow.com/questions/59302243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7415394/" ]
1. You have to visualize one image at a time, but you are passing `images`, which is a list 2. `cv2.imshow()` takes the name of the window as its first argument So you should iterate over your loaded images like: ```py for image in images: cv2.imshow('Image', image) cv2.waitKey(0) # Wait for user interaction ``` --- You may want to take a look at the python opencv documentation about displaying images [here](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-an-image).
You can use the following snippet to montage more than one image: ``` from imutils import build_montages im_shape = (129,196) montage_shape = (7,3) montages = build_montages(images, im_shape, montage_shape) ``` `im_shape :` A tuple containing the width and height of each image in the montage. Here we indicate that all images in the montage will be resized to 129 x 196. Resizing every image in the montage to a fixed size is a requirement so we can properly allocate memory in the resulting NumPy array. Note: Empty space in the montage will be filled with black pixels. `montage_shape :` A second tuple, this one specifying the number of columns and rows in the montage. Here we indicate that our montage will have 7 columns (7 images wide) and 3 rows (3 images tall).
53,647,426
How can we fetch details of particular row from mysql database using variable in python? I want to print the details of particular row using variable from my database and I think I should use something like this: ``` data = cur.execute("SELECT * FROM loginproject.Pro WHERE Username = '%s';"% rob) ``` But this is showing only the index value, not the data. Please help me out.
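A likely culprit here: with Python DB-API drivers, `cursor.execute()` does not return the rows (MySQLdb returns the affected-row count, which would explain seeing only a number); the rows must be fetched explicitly. A self-contained sketch using the stdlib `sqlite3` module as a stand-in for the MySQL driver (table and values are made up for illustration; MySQL drivers use `%s` placeholders where sqlite uses `?`):

```python
import sqlite3

# In-memory stand-in for the MySQL database; the DB-API pattern is the same.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Pro (Username TEXT, Email TEXT)")
cur.execute("INSERT INTO Pro VALUES ('rob', 'rob@example.com')")

rob = "rob"
# Parameterized query (never string-format user input into SQL), then fetch:
cur.execute("SELECT * FROM Pro WHERE Username = ?", (rob,))
row = cur.fetchone()          # or cur.fetchall() for every matching row
print(row)                    # ('rob', 'rob@example.com')
```

With MySQLdb/PyMySQL the last two statements would be `cur.execute("SELECT * FROM loginproject.Pro WHERE Username = %s", (rob,))` followed by the same `fetchone()`/`fetchall()` call.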
2018/12/06
[ "https://Stackoverflow.com/questions/53647426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10333908/" ]
Don't put HTML in the Laravel controller; return `$employees` as data and then build the HTML in your AJAX success callback.
``` public function search(Request $request){ if($request->ajax()) { $employees = DB::table('employeefms')->where('last_name','LIKE','%'.$request->search.'%') ->orWhere('first_name','LIKE','%'.$request->search.'%')->get(); if(!empty($employees)) { return json_encode(array("msg"=>"success", "data"=>$employees)); } return json_encode(array("msg"=>"error")); } } ``` Ajax: ``` $.ajax({ type : 'get', url : '{{ URL::to('admin/employeemaintenance/search') }}', data : {'search':$value}, success:function(data){ var data1 = jQuery.parseJSON(data); if(data1.msg == "success"){ $.each(eval(data1.data), function(){ //html here }) }, //no data found } }); ```
18,289,377
I'm using Code::Blocks and want to have gdb python-enabled. So I followed the C::B wiki <http://wiki.codeblocks.org/index.php?title=Pretty_Printers> to configure it. My pp.gdb is the same as that in the wiki except that I replace the path with my path to printers.py. ``` python import sys sys.path.insert(0, 'C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/share/gcc-4.8.1/python/libstdcxx/v6') from printers import register_libstdcxx_printers register_libstdcxx_printers (None) end ``` Then I tested it: ``` (gdb) source C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gdb ``` And the error message showed: ``` Traceback (most recent call last): File "<string>", line 4, in <module> File "C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/ share/gcc-4.8.1/python/libstdcxx/v6/printers.py", line 911, in register_libstdcxx_printers gdb.printing.register_pretty_printer(obj, libstdcxx_printer) File "c:\program files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\ share\gdb/python/gdb/printing.py", line 146, in register_pretty_printer printer.name) RuntimeError: pretty-printer already registered: libstdc++-v6 C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gd b:6: Error in sourced command file: Error while executing Python code. ``` How can I fix it?
2013/08/17
[ "https://Stackoverflow.com/questions/18289377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549348/" ]
Today I ran into this same issue, after updating my libstdcxx pretty printers from an old gcc 4.7.x version to GCC's trunk HEAD to fix some other problems. I'm also using Codeblocks, and I have these two lines in my customized gdb script. ``` from libstdcxx.v6.printers import register_libstdcxx_printers register_libstdcxx_printers (None) ``` Note that I already have the `-nx` option passed to gdb when it starts. After tweaking for a while, I found that the libstdcxx pretty printer is automatically loaded and registered after the `from...import...` line. So, as a solution, you can just comment out the second line, and everything works just fine here. ``` from libstdcxx.v6.printers import register_libstdcxx_printers #register_libstdcxx_printers (None) ``` Also, I think GDB's official wiki [STLSupport - GDB Wiki](https://sourceware.org/gdb/wiki/STLSupport) and Codeblocks' official wiki [Pretty Printers - CodeBlocks](http://wiki.codeblocks.org/index.php?title=Pretty_Printers) should be updated to mention this issue. EDIT: I just looked at the file libstdcxx\v6\_\_init\_\_.py from GCC svn trunk (maybe it was added recently), and it has this code: ``` # Load the pretty-printers. from printers import register_libstdcxx_printers register_libstdcxx_printers(gdb.current_objfile()) ``` So this code registers the printers automatically, and you don't need to explicitly call `register_libstdcxx_printers (None)`.
You probably don't need to have this code. It seems like the libstdc++ printers are preloaded -- which is normal in many setups... we designed printers to "just work", and the approach of using python code to explicitly load printers was a transitional thing. One way to check is to run gdb -nx, start your C++ program, and then use "info pretty-printer".
18,289,377
I'm using Code::Blocks and want to have gdb python-enabled. So I followed the C::B wiki <http://wiki.codeblocks.org/index.php?title=Pretty_Printers> to configure it. My pp.gdb is the same as that in the wiki except that I replace the path with my path to printers.py. ``` python import sys sys.path.insert(0, 'C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/share/gcc-4.8.1/python/libstdcxx/v6') from printers import register_libstdcxx_printers register_libstdcxx_printers (None) end ``` Then I tested it: ``` (gdb) source C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gdb ``` And the error message showed: ``` Traceback (most recent call last): File "<string>", line 4, in <module> File "C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/ share/gcc-4.8.1/python/libstdcxx/v6/printers.py", line 911, in register_libstdcxx_printers gdb.printing.register_pretty_printer(obj, libstdcxx_printer) File "c:\program files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\ share\gdb/python/gdb/printing.py", line 146, in register_pretty_printer printer.name) RuntimeError: pretty-printer already registered: libstdc++-v6 C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gd b:6: Error in sourced command file: Error while executing Python code. ``` How can I fix it?
2013/08/17
[ "https://Stackoverflow.com/questions/18289377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549348/" ]
You probably don't need to have this code. It seems like the libstdc++ printers are preloaded -- which is normal in many setups... we designed printers to "just work", and the approach of using python code to explicitly load printers was a transitional thing. One way to check is to run gdb -nx, start your C++ program, and then use "info pretty-printer".
If you got this error `RuntimeError: pretty-printer already registered: libstdc++-v6`, that means you don't need to do anything mentioned in the [C::B wiki](http://wiki.codeblocks.org/index.php?title=Pretty_Printers). You can just uncheck the `Disable startup scripts (-nx)` option under `Codeblocks->Settings->Debugger->Default`, and that's it.
18,289,377
I'm using Code::Blocks and want to have gdb python-enabled. So I followed the C::B wiki <http://wiki.codeblocks.org/index.php?title=Pretty_Printers> to configure it. My pp.gdb is the same as that in the wiki except that I replace the path with my path to printers.py. ``` python import sys sys.path.insert(0, 'C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/share/gcc-4.8.1/python/libstdcxx/v6') from printers import register_libstdcxx_printers register_libstdcxx_printers (None) end ``` Then I tested it: ``` (gdb) source C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gdb ``` And the error message showed: ``` Traceback (most recent call last): File "<string>", line 4, in <module> File "C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/ share/gcc-4.8.1/python/libstdcxx/v6/printers.py", line 911, in register_libstdcxx_printers gdb.printing.register_pretty_printer(obj, libstdcxx_printer) File "c:\program files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\ share\gdb/python/gdb/printing.py", line 146, in register_pretty_printer printer.name) RuntimeError: pretty-printer already registered: libstdc++-v6 C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gd b:6: Error in sourced command file: Error while executing Python code. ``` How can I fix it?
2013/08/17
[ "https://Stackoverflow.com/questions/18289377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549348/" ]
Today I ran into this same issue, after updating my libstdcxx pretty printers from an old gcc 4.7.x version to GCC's trunk HEAD to fix some other problems. I'm also using Codeblocks, and I have these two lines in my customized gdb script. ``` from libstdcxx.v6.printers import register_libstdcxx_printers register_libstdcxx_printers (None) ``` Note that I already have the `-nx` option passed to gdb when it starts. After tweaking for a while, I found that the libstdcxx pretty printer is automatically loaded and registered after the `from...import...` line. So, as a solution, you can just comment out the second line, and everything works just fine here. ``` from libstdcxx.v6.printers import register_libstdcxx_printers #register_libstdcxx_printers (None) ``` Also, I think GDB's official wiki [STLSupport - GDB Wiki](https://sourceware.org/gdb/wiki/STLSupport) and Codeblocks' official wiki [Pretty Printers - CodeBlocks](http://wiki.codeblocks.org/index.php?title=Pretty_Printers) should be updated to mention this issue. EDIT: I just looked at the file libstdcxx\v6\_\_init\_\_.py from GCC svn trunk (maybe it was added recently), and it has this code: ``` # Load the pretty-printers. from printers import register_libstdcxx_printers register_libstdcxx_printers(gdb.current_objfile()) ``` So this code registers the printers automatically, and you don't need to explicitly call `register_libstdcxx_printers (None)`.
If you got this error `RuntimeError: pretty-printer already registered: libstdc++-v6`, that means you don't need to do anything mentioned in the [C::B wiki](http://wiki.codeblocks.org/index.php?title=Pretty_Printers). You can just uncheck the `Disable startup scripts (-nx)` option under `Codeblocks->Settings->Debugger->Default`, and that's it.
23,480,431
I want to format a .py file (that generates a random face every time the page is refreshed) using HTML, in the Terminal, that can run in a browser. I have **chmod'ed** it so it should work, but whenever I run it in a browser, I get an **internal server error**. Can someone help me figure out what is wrong? ``` #!/usr/bin/python print "Content-Type: text/html\n" print "" <!DOCTYPE html> <html> <pre> from random import choice def facegenerator(): T = "" hair = ["I I I I I","^ ^ ^ ^ ^"] eyes = ["O O"," O O ",] nose = [" O "," v "] mouth = ["~~~~~","_____","-----"] T += choice(hair) T += "\n" T += choice(eyes) T += "\n" T += choice(nose) T += "\n" T += choice(mouth) T += "\n" return T print facegenerator() </pre> </html> ``` The code works in IDLE, but I can't get it to work on a webpage. Thanks in advance for any help!
2014/05/05
[ "https://Stackoverflow.com/questions/23480431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2888249/" ]
This is neither valid HTML nor valid Python. You can't simply mix in HTML tags into the middle of a Python script like that: you need at the very least to put them inside quotes so that they are a valid string. ``` #!/usr/bin/python print "Content-Type: text/html\n" print """ <!DOCTYPE html> <html> <pre> """ def ... print facegenerator() print """</pre> </html>""" ```
You need a templating engine like Jinja for this: <http://jinja.pocoo.org/>
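The fix described in the first answer (keeping the HTML inside Python strings so the file stays valid Python) can be sketched as a complete, runnable version; the shebang line and CGI permissions from the question would still be needed on a real server:

```python
from random import choice

def facegenerator():
    # Same face parts as in the question's code
    hair = ["I I I I I", "^ ^ ^ ^ ^"]
    eyes = ["O O", " O O "]
    nose = ["  O  ", "  v  "]
    mouth = ["~~~~~", "_____", "-----"]
    # Join one random choice from each part into a four-line face
    return "\n".join([choice(hair), choice(eyes), choice(nose), choice(mouth)]) + "\n"

# The HTML markup lives inside Python strings, so nothing in the file is bare HTML
page = "<!DOCTYPE html>\n<html>\n<pre>\n" + facegenerator() + "</pre>\n</html>"

print("Content-Type: text/html\n")
print(page)
```

The face parts are copied from the question; only the string concatenation around them is new.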
28,979,898
I downloaded `Microsoft Visual C++ Compiler for Python 2.7` and it installed in `C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat` However, I am getting the `error: Unable to find vcvarsall.bat` error when attempting to install "MySQL-python". I added `C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0;` to my Path. I am using python 2.7.8
2015/03/11
[ "https://Stackoverflow.com/questions/28979898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3412518/" ]
Use the command prompt shortcut provided by the MSI installer. This will launch the prompt with VCVarsall.bat activated for the targeted environment. Depending on your installation, you can find this in the Start Menu under All Programs -> Microsoft Visual C++ For Python -> then pick the command prompt based on x64 or x86. Otherwise, press the Windows key and search for "Microsoft Visual C++ For Python".
This worked: <https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows> I uninstalled Visual Studio 2013 and .NET Framework 4 first. I didn't need Visual Studio; I only installed it because I was playing with C++. This worked on a virtual environment: add `C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin;` to the system Path. ``` Start SDK Command Prompt "C:\Program Files\Microsoft SDKs\Windows\v7.0\SetEnv.Cmd" Setting SDK environment relative to C:\Program Files\Microsoft SDKs\Windows\v7.0. Targeting Windows Server 2008 x64 DEBUG C:\Program Files\Microsoft SDKs\Windows\v7.0>setlocal enabledelayedexpansion C:\Program Files\Microsoft SDKs\Windows\v7.0>set DISTUTILS_USE_SDK=1 C:\Program Files\Microsoft SDKs\Windows\v7.0>SetEnv.Cmd /x86 /release Setting SDK environment relative to C:\Program Files\Microsoft SDKs\Windows\v7.0. Targeting Windows Server 2008 x86 RELEASE C:\Program Files\Microsoft SDKs\Windows\v7.0>cd "C:\Users\USR01\virtualenvs\env1" C:\Program Files\Microsoft SDKs\Windows\v7.0>.\Scripts\activate.bat (env1) C:\Users\USR01\virtualenvs\env1> (env1) C:\Users\USR01\virtualenvs\env1>pip install <module> (env1) C:\Users\USR01\virtualenvs\env1>deactivate ```
28,979,898
I downloaded `Microsoft Visual C++ Compiler for Python 2.7` and it installed in `C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat` However, I am getting the `error: Unable to find vcvarsall.bat` error when attempting to install "MySQL-python". I added `C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0;` to my Path. I am using python 2.7.8
2015/03/11
[ "https://Stackoverflow.com/questions/28979898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3412518/" ]
Use the command prompt shortcut provided by the MSI installer. This will launch the prompt with VCVarsall.bat activated for the targeted environment. Depending on your installation, you can find this in the Start Menu under All Programs -> Microsoft Visual C++ For Python -> then pick the command prompt based on x64 or x86. Otherwise, press the Windows key and search for "Microsoft Visual C++ For Python".
Setting ``` SET DISTUTILS_USE_SDK=1 SET MSSdk=1 ``` in Visual C++ 2008 Command Prompt worked for me.
60,063,620
I am working with code that my client insists cannot be changed. It needs to be called using a python command like subprocess.call() ... The code includes a use of the exit() function. When exiting, the exit() function contains data as a parameter: ``` exit(data) ``` How can I capture the data parameter that the script is using when calling exit() without modifying the code to use a return or anything like that?
2020/02/04
[ "https://Stackoverflow.com/questions/60063620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7188090/" ]
I found this in 3 Swift files at the end of my code: ``` class UIImage { private func newBorderMask(_ borderSize: Int, size: CGSize) -> CGImageRef? { } } ``` So I saw that the code was redeclaring `class UIImage` after the `extension UIImage`. In each case, I moved the `private func` into the `extension UIImage` and removed the `class UIImage` from the code. This removed all of the `'UIImage' is ambiguous for type lookup in this context` errors throughout my project.
You need ``` import UIKit ``` at the top of the file
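On the question itself: when the integer form of `exit(data)` is used, the value surfaces as the child process's exit status, which the caller can read without modifying the client's code. A sketch, where the child script below is a hypothetical stand-in for the client's script:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the client's script: it calls exit() with data.
# An integer argument becomes the process's exit status; a string would be
# printed to stderr and the status would become 1.
child_src = "import sys\nsys.exit(7)\n"

fd, path = tempfile.mkstemp(suffix=".py")
try:
    with os.fdopen(fd, "w") as f:
        f.write(child_src)
    # Run the script unchanged and capture its exit status
    result = subprocess.run([sys.executable, path], capture_output=True)
    code = result.returncode  # the integer passed to exit()
finally:
    os.remove(path)

print(code)
```

`subprocess.call()` from the question would work the same way, since it also returns the child's exit status.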
10,188,165
I'm using mongodb for my python(2.7) project with django framework..when i give python manage.py runserver it will work but if i sync the db (python manage.py syncdb) the following error displayed in terminal ``` Creating tables ... Traceback (most recent call last): File "manage.py", line 14, in <module> execute_manager(settings) File "/usr/lib/pymodules/python2.7/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/usr/lib/pymodules/python2.7/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 351, in handle return self.handle_noargs(**options) File "/usr/lib/pymodules/python2.7/django/core/management/commands/syncdb.py", line 109, in handle_noargs emit_post_sync_signal(created_models, verbosity, interactive, db) File "/usr/lib/pymodules/python2.7/django/core/management/sql.py", line 190, in emit_post_sync_signal interactive=interactive, db=db) File "/usr/lib/pymodules/python2.7/django/dispatch/dispatcher.py", line 172, in send response = receiver(signal=self, sender=sender, **named) File "/usr/lib/pymodules/python2.7/django/contrib/auth/management/__init__.py", line 41, in create_permissions "content_type", "codename" File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 107, in _result_iter self._fill_cache() File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 772, in _fill_cache self._result_cache.append(self._iter.next()) File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 959, in iterator for row in self.query.get_compiler(self.db).results_iter(): File 
"/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 229, in results_iter for entity in self.build_query(fields).fetch(low_mark, high_mark): File "/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 290, in build_query query.order_by(self._get_ordering()) File "/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 339, in _get_ordering raise DatabaseError("Ordering can't span tables on non-relational backends (%s)" % order) ``` and ``` django.db.utils.DatabaseError: Ordering can't span tables on non-relational backends (content_type__app_label) ``` How to solve this problem?
2012/04/17
[ "https://Stackoverflow.com/questions/10188165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1095290/" ]
You need to use Django-nonrel instead of Django.
I've used mongoengine with django but you need to create a file like mongo\_models.py for example. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates what's stored in Mongo. Django forms are designed to plug into any data back end (with a bit of craft). BEWARE: If you have very well defined and structured data that can be described in documents or models, then don't use Mongo. It's not designed for that, and something like PostgreSQL will work much better. * I use PostgreSQL for relational or well-structured data because it's good for that. Small memory footprint and good response. * I use Redis to cache or operate in-memory queues/lists because it's very good for that. Great performance, providing you have the memory to cope with it. * I use Mongo to store large JSON documents and to perform map and reduce on them (if needed) because it's very good for that. Be sure to use indexing on certain columns if you can to speed up lookups. Don't use a circle to fill a square hole. It won't fill it. I've seen too many posts where someone wanted to swap a relational DB for Mongo because Mongo is a buzzword. Don't get me wrong, Mongo is really great... when you use it appropriately. I love using Mongo appropriately.
56,693,939
My dict (`cpc_docs`) has a structure like ```py { sym1:[app1, app2, app3], sym2:[app1, app6, app56, app89], sym3:[app3, app887] } ``` My dict has 15K keys and they are unique strings. Values for each key are a list of app numbers and they can appear as values for more than one key. I've looked here [Python: Best Way to Exchange Keys with Values in a Dictionary?](https://stackoverflow.com/questions/1031851/python-best-way-to-exchange-keys-with-values-in-a-dictionary), but since my value is a list, I get an error `unhashable type: list` I've tried the following methods: ```py res = dict((v,k) for k,v in cpc_docs.items()) ``` ```py for x,y in cpc_docs.items(): res.setdefault(y,[]).append(x) ``` ```py new_dict = dict (zip(cpc_docs.values(),cpc_docs.keys())) ``` None of these work of course since my values are lists. I want each unique element from the value lists and all of its keys as a list. Something like this: ```py { app1:[sym1, sym2] app2:[sym1] app3:[sym1, sym3] app6:[sym2] app56:[sym2] app89:[sym2] app887:[sym3] } ``` A bonus would be to order the new dict based on the len of each value list. So like: ```py { app1:[sym1, sym2] app3:[sym1, sym3] app2:[sym1] app6:[sym2] app56:[sym2] app89:[sym2] app887:[sym3] } ```
2019/06/20
[ "https://Stackoverflow.com/questions/56693939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5170800/" ]
Your `setdefault` code is almost there, you just need an extra loop over the lists of values: ``` res = {} for k, lst in cpc_docs.items(): for v in lst: res.setdefault(v, []).append(k) ```
### First create a list of key, value tuples ```py new_list = [] for k, v in cpc_docs.items(): for i in range(len(v)): new_list.append((k, v[i])) ``` ### Then, for each tuple in the list, use the value as a key and add the original key to its set (creating the entry if it isn't in the dict yet) ```py from collections import defaultdict doc_cpc = defaultdict(set) for tup in new_list: doc_cpc[tup[1]].add(tup[0]) ``` Probably many better ways, but this works.
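Combining the `setdefault` inversion from the first answer with the sorting bonus from the question, a runnable sketch (dict insertion order is relied on, so Python 3.7+ is assumed):

```python
# Sample data from the question
cpc_docs = {
    "sym1": ["app1", "app2", "app3"],
    "sym2": ["app1", "app6", "app56", "app89"],
    "sym3": ["app3", "app887"],
}

# Invert: each app maps to the list of syms whose value lists contain it
res = {}
for k, lst in cpc_docs.items():
    for v in lst:
        res.setdefault(v, []).append(k)

# Bonus: order entries by descending length of each value list
# (sorted is stable, so ties keep their original relative order)
res = dict(sorted(res.items(), key=lambda kv: len(kv[1]), reverse=True))

print(res)
```

This prints `app1` and `app3` first, since they are the only apps appearing under two keys.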
6,143,087
For one of my projects I have a python program built around the python [cmd](http://docs.python.org/library/cmd.html) class. This allowed me to craft a mini language around sql statements that I was sending to a database. Besides making it far easier to connect with python, I could do things that sql can't do. This was very important for several projects. However, I now need to add in if blocks for greater control flow. My current thinking is that I will just add two new commands to the language, IF and END. These set a variable which determines whether or not to skip a line. I would like to know if anyone else has done this with the cmd module, and if so, is there a standard method I'm missing? Google doesn't seem to reveal anything, and the cmd docs don't reveal anything either. For an idea that's similar to what I'm doing, go [here](http://blog.fogcreek.com/cheeky-python-a-redis-cli/). Questions and comments welcome. :) Hmm, a little more complicated than what I was thinking, though having python syntax would be nice. I debated building a mini language for quite some time before I finally did it. The problem primarily comes in from the external limitations. I have a bunch of "data", which is being generous, to turn into sql. This is based on other "data" that won't pass through. It's also unique to each specific "version" of the problem. Doing straight data to sql would have been my first inclination, but was not practical. For the curious, I spent a great deal of time going over the mini languages chapter in the art of unix programming, found [here](http://www.catb.org/~esr/writings/taoup/html/minilanguageschapter.html). If I had built the thing in pure python, I wouldn't have had the flexibility I absolutely needed for the problem set.
2011/05/26
[ "https://Stackoverflow.com/questions/6143087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/724357/" ]
The limitations of making a "mini language" have become apparent. Proper languages have a tree-like structure and more complex syntax than `cmd` can handle easily. Sometimes it's actually easier to use Python directly than it is to invent your own DSL. Currently, your DSL probably reads a script-like file of commands. Because of the way `cmd` works, your little commands get a string argument, which must be parsed. Then the command gets executed. And, further, each command is a method of the `cmd.Cmd` subclass. Here's what you can do. Each `do_foo( self, args )` method becomes a stand-alone callable object. It will follow the **Command** design pattern. It will do **exactly** what the method function does now. No less. **Exactly** the same. ``` class Foo( object ): def __init__( self, global_context ): self.context= global_context def __call__( self, args ): ... The rest of do_foo ... ``` Additionally, your existing `cmd.Cmd` subclass probably maintains some internal state. All of the `self.this` and `self.that` instance variables must be changed to reference an explicit context object. ``` class Context( object ): pass ``` Change `self.this` or `self.that` to `self.context.this` or `self.context.that` Now, you can create your context and your various commands. ``` ctx = Context() foo= Foo(ctx) ``` Your script changes syntax slightly. From: ``` foo argstring bar argstring ``` to: ``` from mylanguage import foo, bar foo( "argstring" ) bar( "argstring" ) ``` This does **Exactly** what the CLI does now. No more. No less. **Exactly** the same. Slightly different syntax. Now your script is no longer in a DSL that's hard to expand. It's in Python. Having done that, you can now use Python syntax `if` statements. You have the **Exact** functionality currently implemented in `cmd` with better syntax.
After examining the problem set some more, I've come to the conclusion that I can leave the minilanguage alone. It has all the features I need, and I don't have the time to rebuild the project from the ground up. This has been an interesting problem and I'm no longer sure I would build another minilanguage if I encountered the same situation. OTOH, it works very well here, and I am loathe to give up the advantages it has conferred.
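The Command-pattern refactoring described in the accepted answer can be sketched end to end; the body of `__call__` here is illustrative, standing in for whatever a real `do_foo` method did:

```python
class Context(object):
    """Holds what used to be self.this / self.that on the cmd.Cmd subclass."""
    pass

class Foo(object):
    """One former do_foo method, repackaged as a Command-pattern callable."""
    def __init__(self, global_context):
        self.context = global_context

    def __call__(self, args):
        # Illustrative body: record the last argument string on the shared context
        self.context.last_args = args
        return "foo ran with %r" % args

ctx = Context()
foo = Foo(ctx)

# The old script line "foo argstring" becomes a plain Python call,
# so ordinary Python `if` statements now provide the control flow
use_foo = True
if use_foo:
    result = foo("argstring")

print(result)
print(ctx.last_args)
```

The point of the shared `Context` object is that every command sees the same state, just as the methods on the `cmd.Cmd` subclass did.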
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
Use a module from the [Data Persistence](https://docs.python.org/3/library/persistence.html) section of the standard library, or save it as [json](https://docs.python.org/3/library/json.html), or as a [csv file](https://docs.python.org/3/library/csv.html).
You can just convert your list to an array and save it: ``` import numpy as np np.save('path/to/save', np.array(your_list)) ``` To load: ``` arr = np.load('path/to/save.npy').tolist() ``` I hope it will be helpful.
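The `json` option mentioned in the first answer is enough for this contact list, since it is plain dicts of strings. A sketch, with an illustrative file path:

```python
import json
import os
import tempfile

# The same shape of data the contact book keeps in self.peoples
peoples = [{"name": "John Doe", "phone": "123-456-7890",
            "address": "1000 Constitution Ave"}]

path = os.path.join(tempfile.gettempdir(), "contacts.json")

# On exit: write the list out as JSON
with open(path, "w") as f:
    json.dump(peoples, f)

# On the next run: load it back if the file exists, else start fresh
if os.path.exists(path):
    with open(path) as f:
        loaded = json.load(f)
else:
    loaded = []

os.remove(path)  # cleanup for this demo only
print(loaded)
```

A real program would save in `end()` and load in `main()` instead of doing both in sequence.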
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
You can just convert your list to an array and save it: ``` import numpy as np np.save('path/to/save', np.array(your_list)) ``` To load: ``` arr = np.load('path/to/save.npy').tolist() ``` I hope it will be helpful.
There is no way you can do that without any external modules, such as `numpy` or `pickle`. Using `pickle`, you can do this: (I am assuming you want to save the `myBook` variable) ```py import pickle pickle.dump(myBook, open("foo.bar", "wb")) #where foo is name of file and bar is extension #also wb is saving type, you can find documentation online ``` To load: ```py myBook = pickle.load(open("foo.bar", "rb")) ``` **EDIT:** I was wrong in my first statement. There is a way to save without importing a module. Here is how: ```py myBook.save(foo.bar) #foo is file name and bar is extension ``` To load: ```py myBook=open(foo.bar) ```
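For reference, the standard `pickle` round trip passes only a file object to `pickle.load`; a self-contained sketch with an illustrative file name:

```python
import os
import pickle
import tempfile

myBook = [{"name": "Jane Doe", "phone": "555-0100", "address": "12 Elm St"}]

path = os.path.join(tempfile.gettempdir(), "book.pickle")

# pickle.dump(obj, file) writes the object; pickle.load(file) reads it back
with open(path, "wb") as f:
    pickle.dump(myBook, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

os.remove(path)  # cleanup for this demo only
print(restored == myBook)
```

Note that `pickle` ships with the standard library, so no external install is needed.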
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
You can convert your list to a NumPy array and save it with `np.save`: ``` import numpy as np np.save('path/to/save', np.array(your_list)) ``` To load it back: ``` arr = np.load('path/to/save.npy').tolist() ``` I hope it will be helpful.
As evinced by the many other answers, there are many ways to do this, but I thought it was helpful to have an example. By changing the top of your file like so, you can use the shelve module. There are a variety of other things you can fix in your code if you are curious; you could try <https://codereview.stackexchange.com/> if you want more feedback. ``` import shelve def main(): default = [ {'name': 'John Doe', 'phone': '123-456-7890', 'address': '1000 Constitution Ave'} ] with Book('foo', default=default) as myBook: myBook.main_menu() class Book: def __init__(self, filename, default=None): if default is None: default = [] self._db = shelve.open(filename) self.people = self._db.setdefault('people', default) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self._db['people'] = self.people self._db.close() ```
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
Use a module from the [Data Persistence](https://docs.python.org/3/library/persistence.html) section of the standard library, or save it as [json](https://docs.python.org/3/library/json.html), or as a [csv file](https://docs.python.org/3/library/csv.html).
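A minimal sketch of the json route suggested above (the file name `contacts.json` is just an example, and the contact list mirrors the one in the question):

```python
import json

# Example contact list in the same shape the question uses.
peoples = [{"name": "John Doe", "phone": "123-456-7890",
            "address": "1000 Constitution Ave"}]

# Save the list before the program exits...
with open("contacts.json", "w") as f:
    json.dump(peoples, f, indent=4)

# ...and load it back on the next start.
with open("contacts.json") as f:
    restored = json.load(f)

print(restored == peoples)   # -> True
```

The csv route works the same way via `csv.DictWriter`/`csv.DictReader` if a spreadsheet-friendly format is preferred.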
You can do this with the `pickle` module, which ships with Python's standard library, so nothing extra needs to be installed. (I am assuming you want to save the `myBook` variable) ```py import pickle pickle.dump(myBook, open("foo.bar", "wb")) #where foo is name of file and bar is extension #also wb is saving type, you can find documentation online ``` To load: ```py myBook = pickle.load(open("foo.bar", "rb")) ```
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
Use a module from the [Data Persistence](https://docs.python.org/3/library/persistence.html) section of the standard library, or save it as [json](https://docs.python.org/3/library/json.html), or as a [csv file](https://docs.python.org/3/library/csv.html).
There are innumerable kinds of serialization options, but a time-tested favorite is JSON. JavaScript Object Notation looks like: ``` [ "this", "is", "a", "list", "of", "strings", "with", "a", { "dictionary": "of", "values": 4, "an": "example" }, "can strings be single-quoted?", false, "can objects nest?", { "I": { "Think": { "They": "can" } } } ] ``` JSON is widely used, and the Python stdlib has a method of converting objects to and from JSON in the `json` package. ```py >>> import json >>> data = ['a', 'list', 'full', 'of', 'entries'] >>> json.dumps(data) # dumps will dump to string '["a", "list", "full", "of", "entries"]' ``` You can then save your Book data to json before the program shuts down, and read from json after it starts up. ```py # at the top import json from pathlib import Path # at the bottom of your program: if __name__ == '__main__': persistence = Path('book.json') if persistence.exists(): with persistence.open() as f: data = json.load(f) else: data = [{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}] book = Book(data) with persistence.open('w') as f: json.dump(data, f, indent=4) ```
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
Use a module from the [Data Persistence](https://docs.python.org/3/library/persistence.html) section of the standard library, or save it as [json](https://docs.python.org/3/library/json.html), or as a [csv file](https://docs.python.org/3/library/csv.html).
As evinced by the many other answers, there are many ways to do this, but I thought it was helpful to have an example. By changing the top of your file like so, you can use the shelve module. There are a variety of other things you can fix in your code if you are curious; you could try <https://codereview.stackexchange.com/> if you want more feedback. ``` import shelve def main(): default = [ {'name': 'John Doe', 'phone': '123-456-7890', 'address': '1000 Constitution Ave'} ] with Book('foo', default=default) as myBook: myBook.main_menu() class Book: def __init__(self, filename, default=None): if default is None: default = [] self._db = shelve.open(filename) self.people = self._db.setdefault('people', default) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self._db['people'] = self.people self._db.close() ```
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
There are innumerable kinds of serialization options, but a time-tested favorite is JSON. JavaScript Object Notation looks like: ``` [ "this", "is", "a", "list", "of", "strings", "with", "a", { "dictionary": "of", "values": 4, "an": "example" }, "can strings be single-quoted?", false, "can objects nest?", { "I": { "Think": { "They": "can" } } } ] ``` JSON is widely used, and the Python stdlib has a method of converting objects to and from JSON in the `json` package. ```py >>> import json >>> data = ['a', 'list', 'full', 'of', 'entries'] >>> json.dumps(data) # dumps will dump to string '["a", "list", "full", "of", "entries"]' ``` You can then save your Book data to json before the program shuts down, and read from json after it starts up. ```py # at the top import json from pathlib import Path # at the bottom of your program: if __name__ == '__main__': persistence = Path('book.json') if persistence.exists(): with persistence.open() as f: data = json.load(f) else: data = [{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}] book = Book(data) with persistence.open('w') as f: json.dump(data, f, indent=4) ```
You can do this with the `pickle` module, which ships with Python's standard library, so nothing extra needs to be installed. (I am assuming you want to save the `myBook` variable) ```py import pickle pickle.dump(myBook, open("foo.bar", "wb")) #where foo is name of file and bar is extension #also wb is saving type, you can find documentation online ``` To load: ```py myBook = pickle.load(open("foo.bar", "rb")) ```
56,331,413
I am wondering how I can save whatever I added to a list when I close a python file. For example, in this "my contact" program that I wrote below, if I add information about 'Jane Doe', what could I do so that next time I open up the same file, Jane Doe still exists. ``` def main(): myBook = Book([{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}]) class Book: def __init__(self, peoples): self.peoples = peoples self.main_menu() def main_menu(self): print('Main Menu') print('1. Display Contact Names') print('2. Search For Contacts') print('3. Edit Contact') print('4. New Contact') print('5. Remove Contact') print('6. Exit') self.selection = input('Enter a # form the menu: ') if (self.selection == "1"): self.display_names() if (self.selection == "2"): self.search() if (self.selection == "3"): self.edit() if (self.selection == "4"): self.new() if (self.selection == "5"): self.delete() if (self.selection == "6"): self.end() def display_names(self): for people in self.peoples: print("Name: " + people["name"]) self.main_menu() def search(self): searchname = input('What is the name of your contact: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): print("Name: " + self.peoples[index]["name"]) print("Address: " + self.peoples[index]["address"]) print("Phone: " + self.peoples[index]["phone"]) self.main_menu() def edit(self): searchname = input('What is the name of the contact that you want to edit: ') for index in range(len(self.peoples)): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, "address": address}) self.main_menu() def new(self): name = input('What is your name: ') address = input('What is your address: ') phone = input('What is your phone number: ') self.peoples.append({"name": name, "phone": phone, 
"address": address}) self.main_menu() def delete(self): searchname = input('What is the name of the contact that you want to delete: ') for index in reversed(range(len(self.peoples))): if (self.peoples[index]["name"] == searchname): self.peoples.pop(index) print(searchname, 'has been removed') self.main_menu() def end(self): print('Thank you for using the contact book, have a nice day') print('Copyright Carson147 2019©, All Rights Reserved') main() ```
2019/05/27
[ "https://Stackoverflow.com/questions/56331413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11563937/" ]
There are innumerable kinds of serialization options, but a time-tested favorite is JSON. JavaScript Object Notation looks like: ``` [ "this", "is", "a", "list", "of", "strings", "with", "a", { "dictionary": "of", "values": 4, "an": "example" }, "can strings be single-quoted?", false, "can objects nest?", { "I": { "Think": { "They": "can" } } } ] ``` JSON is widely used, and the Python stdlib has a method of converting objects to and from JSON in the `json` package. ```py >>> import json >>> data = ['a', 'list', 'full', 'of', 'entries'] >>> json.dumps(data) # dumps will dump to string '["a", "list", "full", "of", "entries"]' ``` You can then save your Book data to json before the program shuts down, and read from json after it starts up. ```py # at the top import json from pathlib import Path # at the bottom of your program: if __name__ == '__main__': persistence = Path('book.json') if persistence.exists(): with persistence.open() as f: data = json.load(f) else: data = [{"name": 'John Doe', "phone": '123-456-7890', "address": '1000 Constitution Ave'}] book = Book(data) with persistence.open('w') as f: json.dump(data, f, indent=4) ```
As evinced by the many other answers, there are many ways to do this, but I thought it was helpful to have an example. By changing the top of your file like so, you can use the shelve module. There are a variety of other things you can fix in your code if you are curious; you could try <https://codereview.stackexchange.com/> if you want more feedback. ``` import shelve def main(): default = [ {'name': 'John Doe', 'phone': '123-456-7890', 'address': '1000 Constitution Ave'} ] with Book('foo', default=default) as myBook: myBook.main_menu() class Book: def __init__(self, filename, default=None): if default is None: default = [] self._db = shelve.open(filename) self.people = self._db.setdefault('people', default) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self._db['people'] = self.people self._db.close() ```
73,542,262
I've created 3 files, `snek.py`, `requirements.txt` and `runsnek.py`. `runsnek.py` installs all the required modules in `requirements.txt` with pip and runs `snek.py`. Everything works fine on Windows 10, but when trying to run on Ubuntu (WSL2), an error is thrown: ``` ❯ python runsnek.py Requirement already up-to-date: pathlib in /home/rootuser/.local/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (1.0.1) Traceback (most recent call last): File "snek.py", line 1, in <module> from pathlib import Path ImportError: No module named pathlib ``` I'm not sure what could've caused the problem on Linux. It might be some kind of pip modules path that isn't defined. `printenv` does not show anything containing the word python. files ===== Here are all of the mentioned files. `runsnek.py`: ``` import os, platform os.system('pip install --upgrade -r requirements.txt') if platform.system() == 'Windows': os.system('py snek.py') elif '': raise Warning('snek could not be ran, try running snek.py instead') else: os.system('python snek.py') ``` `requirements.txt`: ``` # pip reqs pathlib ``` `snek.py`: ``` from pathlib import Path cwd = Path('.') # [...] ```
2022/08/30
[ "https://Stackoverflow.com/questions/73542262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17747971/" ]
It seems you are using Python 2 in your WSL2 instance: the line `os.system('python snek.py')` invokes `python`, which on your Ubuntu install points to Python 2 instead of Python 3. To correct the problem, you can change this line of code to `os.system('python3 snek.py')`.
Your `run` file can be simplified: ``` import sys, os print('Running with ' + sys.executable) os.system(sys.executable + ' -m pip install --upgrade -r requirements.txt') os.system(sys.executable + ' snek.py') ``` `sys.executable` always contains the path of the python interpreter running the current script. Using `python -m pip install` also ensures that the same python interpreter is used for pip installing, which solves your original problem.
72,097,284
How can you set the desktop background to a solid color programmatically in python? The reason I want this is to make myself a utility which changes the background color depending on which of several virtual desktops I'm using.
2022/05/03
[ "https://Stackoverflow.com/questions/72097284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169774/" ]
With a little help of `row_number` analytic function: ``` SQL> ALTER TABLE dummy_test_table ADD batch_id VARCHAR2 (10); Table altered. SQL> UPDATE dummy_test_table a 2 SET a.batch_id = 3 (WITH 4 temp 5 AS 6 (SELECT seq_no, 7 ROW_NUMBER () OVER (ORDER BY seq_no) batch_id 8 FROM (SELECT DISTINCT seq_no 9 FROM dummy_test_table)) 10 SELECT LPAD (t.batch_id, 3, '0') 11 FROM temp t 12 WHERE t.seq_no = a.seq_no); 9 rows updated. ``` Result: ``` SQL> SELECT * 2 FROM dummy_test_table 3 ORDER BY seq_no, batch_id; SEQ_NO BATCH_ID ---------- ---------- 0000000957 001 0000000957 001 0000000957 001 0000000958 002 0000000958 002 0000000958 002 0000000959 003 0000000969 004 0000000969 004 9 rows selected. SQL> ```
One option is to use `DENSE_RANK()` analytic function within a MERGE DML statement such as ```sql MERGE INTO dummy_test_table d1 USING (SELECT seq_no, LPAD(DENSE_RANK() OVER(ORDER BY seq_no), 3, '0') AS dr FROM dummy_test_table) d2 ON (d1.rowid = d2.rowid) WHEN MATCHED THEN UPDATE SET d1.batch_id = dr ``` `[Demo](https://dbfiddle.uk/?rdbms=oracle_21&fiddle=f420bf79feb8379cfe2d11a5c3bfa4a0)` In my opinion, no need to add an extra column and populate it. Rather, you can use such a query or create a SQL-view(*and query that whenever needed*) : ```sql --CREATE OR REPLACE v_dts AS SELECT seq_no, LPAD(DENSE_RANK() OVER(ORDER BY seq_no), 3, '0') AS batch_id FROM dummy_test_table ```
24,027,579
I am working on a project where I have a client server model in python. I set up a server to monitor requests and send back data. PYZMQ supports: tcp, udp, pgm, epgm, inproc and ipc. I have been using tcp for interprocess communication, but have no idea what i should use for sending a request over the internet to a server. I simply need something to put in: ``` socket.bind(BIND_ADDRESS) ``` [DIAGRAM: Client Communicating over internet to server running a program](http://i.imgur.com/oqUaId4.png)
2014/06/04
[ "https://Stackoverflow.com/questions/24027579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2383037/" ]
Any particular reason you're not using `ipc` or `inproc` for interprocess communication? Other than that, generally, you can consider `tcp` the universal communicator; it's not always the best choice, but no matter what (so long as you actually have an IP address) it will work. Here's what you need to know when making a choice between transports: 1. PGM/EPGM are multicast transports - the idea is that you send one message and it gets delivered as a single message until the last possible moment where it will be broken up into multiple messages, one for each receiver. Unless you *absolutely know* you need this, you don't need this. 2. IPC/Inproc are for interprocess communication... if you're communicating between different threads in the same process, or different processes on the same logical host, then these might be appropriate. You get the benefit of a little less overhead. If you might ever add new logical hosts, this is probably not appropriate. 3. Russle Borogove enumerates the difference between TCP and UDP well. Typically you'll want to use TCP. Only if absolute speed is more important than reliability then you'll use UDP. It was always my understanding that UDP wasn't supported by ZMQ, so if it's there it's probably added by the pyzmq binding. Also, I took a look at your diagram - you *probably* want the server ZMQ socket to `bind` and the client ZMQ socket to `connect`... there are some reasons why you might reverse this, but as a general rule the server is considered the "reliable" peer, and the client is the "transient" peer, and you want the "reliable" peer to bind, the "transient" peer to connect.
Over the internet, TCP or UDP are the usual choices. I don't know if pyzmq has its own delivery guarantees on top of the transport protocol. If it doesn't, TCP will guarantee in-order delivery of all messages, while UDP may drop messages if the network is congested. If you don't know what you want, TCP is the simplest and safest choice.
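To see TCP's reliable, in-order behaviour without any zmq machinery, here is a small stdlib sketch using `socket.socketpair()` as a stand-in for a connected TCP pair (two sends arrive as one ordered byte stream, not as two messages):

```python
import socket

# A connected pair of stream sockets -- same ordering/reliability
# semantics as a TCP connection, but self-contained in one process.
a, b = socket.socketpair()
a.sendall(b"first ")
a.sendall(b"second")

# Stream sockets have no message boundaries: read until all 12 sent
# bytes have arrived, however the kernel chooses to chunk them.
buf = b""
while len(buf) < 12:
    buf += b.recv(1024)

print(buf)   # -> b'first second'
a.close()
b.close()
```

With actual zmq sockets the library adds message framing on top of the stream; this only illustrates the transport-level ordering guarantee.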
42,972,184
I am new to python. As part of my project, I am working with python2.7. I am dealing with multiple files in python. Here I am facing a problem using a variable of a particular function from another file that I already imported into my current file. Please help me to achieve this. ``` file1.py class connect(): # Contains different definitions def output(): a = "Hello" data = ...  # some operations return data file2.py from file1 import * # Here I need to access both 'a' and 'data' variable from output() ```
2017/03/23
[ "https://Stackoverflow.com/questions/42972184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7167331/" ]
So you have edited it quite a bit since I started writing about conventions so I have started again. First, your return statement is out of indentation, it should be indented into the output method. ``` def output(): a = "Hello" data = // some operations return data ``` Second, the convention in Python regarding class names is CamelCase, which means your class should be called "Connect". There is also no need to add the round brackets when your class doesn't inherit anything. Third, right now you can only use "data" since only data is returned. What you can do is return both a and data by replacing your return statement to this: ``` return a, data ``` Then in your second file, all you have to do is write `a_received, data_received = connect.output()` Full code example: file1.py ``` class Connect: def output(): a = "Hello" data = "abc" return a, data ``` file2.py ``` from file1 import Connect a_received, data_received = Connect.output() # Print results print(a_received) print(data_received) ``` Fourth, there are other ways to combat this, like create **instance variables** for example and then there is no need for return. file1.py ``` class Connect: def output(self): self.a = "Hello" self.data = "abc" ``` file2.py ``` from file1 import Connect connection = Connect() connection.output() print(connection.a) print(connection.data) ``` There is also the **class variable** version. file1.py ``` class Connect: def output(): Connect.a = "Hello" Connect.data = "abc" ``` file2.py ``` from file1 import Connect Connect.output() print(Connect.a) print(Connect.data) ``` Eventually, the "right" way to do it depends on the use.
One option you have is to return all the data you need from the function: file1.py ``` class connect(): # Contains different definitions def output(self): a = "Hello" data = ...  # some operations return a, data # Return all the variables as a tuple ``` file2.py ``` from file1 import connect c = connect() a, data = c.output() # Now you have local variables 'a' and 'data' from output() ```
17,438,469
This python3.3 code on win 7, why I got error: ``` import random guesses_made = 0 name = raw_input('Hello! What is your name?\n') number = random.randint(1, 20) print "Well, {0}, I am thinking of a number between 1 and 20" # error here !!! **print "Well, {0}, I am thinking of a number between 1 and 20" ^ SyntaxError: invalid syntax** ``` Thanks !!!
2013/07/03
[ "https://Stackoverflow.com/questions/17438469", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2420472/" ]
Two things: In python 3, `raw_input()` [has been changed](http://docs.python.org/3.0/whatsnew/3.0.html#builtins) to `input()`. Also, [`print` is no longer a statement but a function](http://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function), so you must do: ``` print("Well, {0}, I am thinking of a number between 1 and 20") ```
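A minimal sketch of those two fixes together (the name is hard-coded here so the snippet runs without interactive input; in the real program it would come from `input()`):

```python
import random

# input() replaces Python 2's raw_input(); a fixed name stands in for it
# so this example is non-interactive.
name = "Alice"
number = random.randint(1, 20)

# print is a function in Python 3, and str.format fills in the {0} slot.
message = "Well, {0}, I am thinking of a number between 1 and 20".format(name)
print(message)
```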
I think that last line should read: ``` print("Well, {0}, I am thinking of a number between 1 and 20".format(name)) ``` This was tested. I am pretty new to p3.3, so go easy on me :)
5,823,163
I'm currently in the process of programming a server which can let clients interact with a piece of hardware. For the interested readers it's a device which monitors the wavelength of a set of lasers concurrently (and controls the lasers). The server should be able to broadcast the wavelengths (a list of floats) on a regular basis and let the clients change the settings of the device through dll calls. My initial idea was to write a custom protocol to handle the communication, but after thinking about how to handle TCP fragmentation and data encoding I bumped into Twisted, and it looks like most of the work is already done if I use perspective broker to share the data and call server methods directly from the clients. This solution might be a bit overkill, but for me it appeared obvious, what do you think? My main concern arose when I thought about the clients. Basically I need two types of clients, one which just displays the wavelengths (this should be straightforward) and a second which can change the device settings and get feedback when it's changed. My idea was to create a single client capable of both, but thinking about combining it with our previous system got me thinking... The second client should be controlled from an already rather complex python framework which controls a lot of independent hardware with relatively strict timing requirements, and the settings of the wavelengthmeter should then be called within this sequential code. Now the thing is, how do I mix this with the Twisted client? As I understand Twisted is not threadsafe, so I can't simply spawn a new thread running the reactor and then interact with it from my main thread, can I? Any suggestions for writing this server/client framework through different means than Twisted are very welcome! Thanks
2011/04/28
[ "https://Stackoverflow.com/questions/5823163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/729885/" ]
You can start the reactor in a dedicated thread, and then issue calls to it with [`blockingCallFromThread`](http://twistedmatrix.com/documents/11.0.0/api/twisted.internet.threads.html#blockingCallFromThread) from your existing "sequential" code. Also, I'd recommend [AMP](http://twistedmatrix.com/documents/11.0.0/api/twisted.protocols.amp.html) for the protocol rather than PB, since AMP is more amenable to heterogeneous environments ([see amp-protocol.net for independent protocol information](http://amp-protocol.net/)), and it sounds like you have a substantial amount of other technology you might want to integrate with this system.
Have you tried [zeromq](http://www.zeromq.org/)? It's a library that simplifies working with sockets. It can operate over TCP and implements several topologies, such as publisher/subscriber (for broadcasting data, such as your laser readings) and request/response (that you can use for you control scheme). There are bindings for several languages and the site is full of examples. Also, it's amazingly fast. Good stuff.
11,274,290
I have made a `.deb` of my app using [fpm](https://github.com/jordansissel/fpm/wiki): ``` fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \ --after-remove debian/postrm --after-install debian/postinst \ --description "Automated build." -d mysql-client -d python-virtualenv home ``` Among other things, the `postinst` script is supposed to create a user for the app: ``` #!/bin/sh set -e APP_NAME=myapp case "$1" in configure) virtualenv /home/$APP_NAME/local #supervisorctl start $APP_NAME ;; # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs install|upgrade) # If the package has default file it could be sourced, so that # the local admin can overwrite the defaults [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME # Sane defaults: [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME [ -z "$SERVER_NAME" ] && SERVER_NAME="" [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME # Groups that the user will be added to, if undefined, then none. ADDGROUP="" # create user to avoid running server as root # 1. create group if not existing if ! getent group | grep -q "^$SERVER_GROUP:" ; then echo -n "Adding group $SERVER_GROUP.." addgroup --quiet --system $SERVER_GROUP 2>/dev/null ||true echo "..done" fi # 2. create homedir if not existing test -d $SERVER_HOME || mkdir $SERVER_HOME # 3. create user if not existing if ! getent passwd | grep -q "^$SERVER_USER:"; then echo -n "Adding system user $SERVER_USER.." adduser --quiet \ --system \ --ingroup $SERVER_GROUP \ --no-create-home \ --disabled-password \ $SERVER_USER 2>/dev/null || true echo "..done" fi # … and a bunch of other stuff. ``` It seems like the `postinst` script is being called with `configure`, but not with `install`, and I am trying to understand why. 
In `/var/log/dpkg.log`, I see the lines I would expect: ``` 2012-06-30 13:28:36 configure myapp 9 9 2012-06-30 13:28:36 status unpacked myapp 9 2012-06-30 13:28:36 status half-configured myapp 9 2012-06-30 13:28:43 status installed myapp 9 ``` I checked that `/etc/default/myapp` does not exist. The file `/var/lib/dpkg/info/myapp.postinst` exists, and if I run it manually with `install` as the first parameter, it works as expected. Why is the `postinst` script not being run with `install`? What can I do to debug this further?
2012/06/30
[ "https://Stackoverflow.com/questions/11274290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17498/" ]
I think the example script you copied is simply wrong. `postinst` is not supposed to be called with any `install` or `upgrade` argument, ever. The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes `postinst` in [chapter 6](http://www.debian.org/doc/debian-policy/ch-maintainerscripts.html) and only lists `configure`, `abort-upgrade`, `abort-remove`, `abort-remove`, and `abort-deconfigure` as possible first arguments. I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond. There must be some fault with the way that your package is built or an error in the `postinst` file which is causing your problem. You can debug your install by adding the `-D` (debug) option to your command line, i.e.: ``` sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb ``` `-D2` should sort out this type of issue. For the record, the debug levels are as follows: ``` Number Description 1 Generally helpful progress information 2 Invocation and status of maintainer scripts 10 Output for each file processed 100 Lots of output for each file processed 20 Output for each configuration file 200 Lots of output for each configuration file 40 Dependencies and conflicts 400 Lots of dependencies/conflicts output 10000 Trigger activation and processing 20000 Lots of output regarding triggers 40000 Silly amounts of output regarding triggers 1000 Lots of drivel about e.g. the dpkg/info dir 2000 Insane amounts of drivel ``` The `install` command calls the `configure` option, and in my experience the `postinst` script will always be run. One thing that may trip you up is that the `postrm` script of the "old" version, if upgrading a package, will be run **after** your current package's `preinst` script; this can cause havoc if you don't realise what is going on. From the dpkg man page: Installation consists of the following steps: ``` 1. Extract the control files of the new package. 2. If another version of the same package was installed before the new installation, execute prerm script of the old package. 3. Run preinst script, if provided by the package. 4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored. 5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. 
Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed. 6. Configure the package. Configuring consists of the following steps: 1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong. 2. Run postinst script, if provided by the package. ```
11,274,290
I have made a `.deb` of my app using [fpm](https://github.com/jordansissel/fpm/wiki): ``` fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \ --after-remove debian/postrm --after-install debian/postinst \ --description "Automated build." -d mysql-client -d python-virtualenv home ``` Among other things, the `postinst` script is supposed to create a user for the app: ``` #!/bin/sh set -e APP_NAME=myapp case "$1" in configure) virtualenv /home/$APP_NAME/local #supervisorctl start $APP_NAME ;; # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs install|upgrade) # If the package has default file it could be sourced, so that # the local admin can overwrite the defaults [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME # Sane defaults: [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME [ -z "$SERVER_NAME" ] && SERVER_NAME="" [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME # Groups that the user will be added to, if undefined, then none. ADDGROUP="" # create user to avoid running server as root # 1. create group if not existing if ! getent group | grep -q "^$SERVER_GROUP:" ; then echo -n "Adding group $SERVER_GROUP.." addgroup --quiet --system $SERVER_GROUP 2>/dev/null ||true echo "..done" fi # 2. create homedir if not existing test -d $SERVER_HOME || mkdir $SERVER_HOME # 3. create user if not existing if ! getent passwd | grep -q "^$SERVER_USER:"; then echo -n "Adding system user $SERVER_USER.." adduser --quiet \ --system \ --ingroup $SERVER_GROUP \ --no-create-home \ --disabled-password \ $SERVER_USER 2>/dev/null || true echo "..done" fi # … and a bunch of other stuff. ``` It seems like the `postinst` script is being called with `configure`, but not with `install`, and I am trying to understand why. 
In `/var/log/dpkg.log`, I see the lines I would expect: ``` 2012-06-30 13:28:36 configure myapp 9 9 2012-06-30 13:28:36 status unpacked myapp 9 2012-06-30 13:28:36 status half-configured myapp 9 2012-06-30 13:28:43 status installed myapp 9 ``` I checked that `/etc/default/myapp` does not exist. The file `/var/lib/dpkg/info/myapp.postinst` exists, and if I run it manually with `install` as the first parameter, it works as expected. Why is the `postinst` script not being run with `install`? What can I do to debug this further?
2012/06/30
[ "https://Stackoverflow.com/questions/11274290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17498/" ]
I think the example script you copied is simply wrong. `postinst` is not supposed to be called with any `install` or `upgrade` argument, ever. The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes `postinst` in [chapter 6](http://www.debian.org/doc/debian-policy/ch-maintainerscripts.html) and only lists `configure`, `abort-upgrade`, `abort-remove`, `abort-remove`, and `abort-deconfigure` as possible first arguments. I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
This is an old issue that has been resolved, but it seems to me that the accepted solution is not totally correct, and I believe it is necessary to provide information for those who, like me, are having this same problem. [Chapter 6.5](https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html) details all the parameters with which the preinst and postinst files are called. At <https://wiki.debian.org/MaintainerScripts> the installation and uninstallation flow is detailed. Watch what happens in the following cases: apt-get install package - runs **preinst install** and then **postinst configure** apt-get remove package - executes postrm remove, and the package will be set to "**Config Files**" For the package to actually be in the "**not installed**" state, you must use: apt-get purge package That's the only way we'll be able to run **preinst install** and **postinst configure** the next time the package is installed.
11,274,290
I have made a `.deb` of my app using [fpm](https://github.com/jordansissel/fpm/wiki): ``` fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \ --after-remove debian/postrm --after-install debian/postinst \ --description "Automated build." -d mysql-client -d python-virtualenv home ``` Among other things, the `postinst` script is supposed to create a user for the app: ``` #!/bin/sh set -e APP_NAME=myapp case "$1" in configure) virtualenv /home/$APP_NAME/local #supervisorctl start $APP_NAME ;; # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs install|upgrade) # If the package has default file it could be sourced, so that # the local admin can overwrite the defaults [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME # Sane defaults: [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME [ -z "$SERVER_NAME" ] && SERVER_NAME="" [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME # Groups that the user will be added to, if undefined, then none. ADDGROUP="" # create user to avoid running server as root # 1. create group if not existing if ! getent group | grep -q "^$SERVER_GROUP:" ; then echo -n "Adding group $SERVER_GROUP.." addgroup --quiet --system $SERVER_GROUP 2>/dev/null ||true echo "..done" fi # 2. create homedir if not existing test -d $SERVER_HOME || mkdir $SERVER_HOME # 3. create user if not existing if ! getent passwd | grep -q "^$SERVER_USER:"; then echo -n "Adding system user $SERVER_USER.." adduser --quiet \ --system \ --ingroup $SERVER_GROUP \ --no-create-home \ --disabled-password \ $SERVER_USER 2>/dev/null || true echo "..done" fi # … and a bunch of other stuff. ``` It seems like the `postinst` script is being called with `configure`, but not with `install`, and I am trying to understand why. 
In `/var/log/dpkg.log`, I see the lines I would expect: ``` 2012-06-30 13:28:36 configure myapp 9 9 2012-06-30 13:28:36 status unpacked myapp 9 2012-06-30 13:28:36 status half-configured myapp 9 2012-06-30 13:28:43 status installed myapp 9 ``` I checked that `/etc/default/myapp` does not exist. The file `/var/lib/dpkg/info/myapp.postinst` exists, and if I run it manually with `install` as the first parameter, it works as expected. Why is the `postinst` script not being run with `install`? What can I do to debug this further?
2012/06/30
[ "https://Stackoverflow.com/questions/11274290", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17498/" ]
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond. There must be some fault with the way that your package is built or an error in the `postinst` file which is causing your problem. You can debug your install by adding the `-D` (debug) option to your command line, i.e.: ``` sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb ``` `-D2` should sort out this type of issue. For the record, the debug levels are as follows: ``` Number Description 1 Generally helpful progress information 2 Invocation and status of maintainer scripts 10 Output for each file processed 100 Lots of output for each file processed 20 Output for each configuration file 200 Lots of output for each configuration file 40 Dependencies and conflicts 400 Lots of dependencies/conflicts output 10000 Trigger activation and processing 20000 Lots of output regarding triggers 40000 Silly amounts of output regarding triggers 1000 Lots of drivel about e.g. the dpkg/info dir 2000 Insane amounts of drivel ``` The `install` command calls the `configure` option, and in my experience the `postinst` script will always be run. One thing that may trip you up is that the `postrm` script of the "old" version, if upgrading a package, will be run **after** your current package's `preinst` script; this can cause havoc if you don't realise what is going on. From the dpkg man page: Installation consists of the following steps: ``` 1. Extract the control files of the new package. 2. If another version of the same package was installed before the new installation, execute prerm script of the old package. 3. Run preinst script, if provided by the package. 4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored. 5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. 
Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed. 6. Configure the package. Configuring consists of the following steps: 1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong. 2. Run postinst script, if provided by the package. ```
This is an old issue that has been resolved, but it seems to me that the accepted solution is not totally correct, and I believe it is necessary to provide information for those who, like me, are having this same problem. [Chapter 6.5](https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html) details all the parameters with which the preinst and postinst files are called. At <https://wiki.debian.org/MaintainerScripts> the installation and uninstallation flow is detailed. Watch what happens in the following cases: apt-get install package - runs **preinst install** and then **postinst configure** apt-get remove package - executes postrm remove, and the package will be set to "**Config Files**" For the package to actually be in the "**not installed**" state, you must use: apt-get purge package That's the only way we'll be able to run **preinst install** and **postinst configure** the next time the package is installed.
39,103,057
OK, so I've been able to send mail and read mail, but I am now trying to attach an attachment to the mail and it doesn't seem to append the document as expected. I don't get any errors, but I also don't get the mail if I attempt to add the attachment. The library I'm using is [here](https://github.com/Narcolapser/python-o365). The value returned from the function is `True`, but an email never arrives; if I remove the `m.attachments.append('/path/to/data.xls')` line, the email arrives as expected (without an attachment, of course). **Code** ``` def sendAddresses(username, password): try: authenticiation = (username, password) m = Message(auth=authenticiation) m.attachments.append('/path/to/data.xls') m.setRecipients("email@address.com") m.setSubject("Test Subject") m.setBody("Test Email") m.sendMessage() except Exception, e: print e return False return True ```
2016/08/23
[ "https://Stackoverflow.com/questions/39103057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3449832/" ]
I've reproduced your attached command as a ``` public class MyBehavior : Behavior<ListBox> { ``` to a XAML ``` <ListBox SelectedItem="SelCust" Name="MyListBox" Loaded="MyListBox_Loaded" IsSynchronizedWithCurrentItem="True" DisplayMemberPath="Name" ItemsSource="{Binding Customers}"> <i:Interaction.Behaviors> <local:MyBehavior/> </i:Interaction.Behaviors> <i:Interaction.Triggers> <i:EventTrigger EventName="Loaded"> <i:InvokeCommandAction Command="{Binding Path=LoadCommand}"/> </i:EventTrigger> </i:Interaction.Triggers> </ListBox> ``` where I've also added a binding for the Load event to the ViewModel ``` public CustomerViewModel() { IList<Customer> customers = Customer.GetCustomers().ToList(); _customerView = CollectionViewSource.GetDefaultView(customers); _customerView.MoveCurrentToLast(); _customerView.CurrentChanged += CustomerSelectionChanged; } private void CustomerSelectionChanged(object sender, EventArgs e) { // React to the changed selection Debug.WriteLine("Here"); var sel = (sender as CollectionView).CurrentItem; if ( sel!= null) { //Do Something } } private DelegateCommand loadCommand; public ICommand LoadCommand { get { if (loadCommand == null) { loadCommand = new DelegateCommand(VMLoad); } return (ICommand)loadCommand; } } bool IsLoaded = false; private void VMLoad(object obj) { IsLoaded = true; } ``` and in the code-behind ``` public MainWindow() { InitializeComponent(); DataContext = new CustomerViewModel(); } private void MyListBox_Loaded(object sender, RoutedEventArgs e) { MyListBox.ScrollIntoView(MyListBox.Items[MyListBox.Items.Count - 1]); } ``` When I debug it, I see that this is the sequence of events fired: 1. `CurrentChanged` with the last item of the collection 2. `Loaded` handler **in the code-behind** 3. `LoadCommand` in the ViewModel and only **after** that 4. 
`ScrollIntoView` from the `AssociatedObject_SelectionChanged` So basically I'm suggesting a couple of things: * add (another) `ScrollIntoView` (for the last item of the collection) from the `Loaded` handler in the code-behind * for whatever action you need to perform *when you have to detect if some items are already visible to the user*, first check `IsLoaded` to exclude any transient effect
Why not simply scroll to the last value in your collection after you set the DataContext or ItemSource? No data will render until you set your data context and exit the constructor. To my understanding, if you do the following two steps in sequence in the constructor, it should work as expected. ```cs listBox.DataContext = _items; listBox.ScrollIntoView(_items.Last()); ```
8,029,363
I am new to Django. I have version 1.3.1 installed. I have created two projects, **projectone** and **projecttwo**, using django-admin.py. In **projectone** I have an app called **blog**, created using python manage.py startapp. In **projecttwo**'s settings.py file, when I put the following in INSTALLED\_APPS: ``` INSTALLED_APPS = ( other code goes here... 'projectone.blog' ) ``` and then run projecttwo using manage.py, I get: ``` Error: No module named projectone.blog ``` I have `__init__.py` files placed correctly. I just cannot figure out why. Maybe because projectone is not on the Python path? Is that what django-admin.py does, and it's not doing it on mine for some reason? I am not sure.
2011/11/06
[ "https://Stackoverflow.com/questions/8029363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614954/" ]
Look at what manage.py does: <https://docs.djangoproject.com/en/dev/ref/django-admin/#django-admin-py-and-manage-py> It dynamically adds your apps to the Python path when you use it, i.e. when you are using **runserver** during development. You have two separate projects, so when you run either one you will only have the apps from that particular project on the Python path. To use an app from one project in another, you need to manually add that app to your global Python path.
You are trying to install a **project** in the INSTALLED\_APPS of your settings.py; those are different projects. Instead you need to create just one project and create different apps within it. Remember that apps are meant to be reusable, so if you need the blog app in another project, just reuse it. If you are new to Django you should read the tutorial in the [documentation](https://docs.djangoproject.com/en/1.3/)
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the below it seems like the issue is `sh: mysql_config: not found` Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
On Ubuntu it is advised to use the distribution's repository. So installing python-mysqldb should be straightforward: ``` sudo apt-get install python-mysqldb ``` If you actually want to use pip to install, which is, as mentioned before, not the suggested path but possible, please have a look at this previously asked question and answer: [pip install mysql-python fails with EnvironmentError: mysql\_config not found](https://stackoverflow.com/questions/5178292/pip-install-mysql-python-show-error) Here is a very comprehensive guide by the developer: <http://mysql-python.blogspot.no/2012/11/is-mysqldb-hard-to-install.html> To get all the prerequisites for python-mysqldb to install it using pip (which you will want to do if you are using virtualenv), run this: ``` sudo apt-get install build-essential python-dev libmysqlclient-dev ```
In python3 with virtualenv on a Ubuntu Bionic machine the following commands worked for me: ``` sudo apt install build-essential python-dev libmysqlclient-dev sudo apt-get install libssl-dev pip install mysqlclient ```
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the below it seems like the issue is `sh: mysql_config: not found` Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
For Python 2 or Python 3 with MySQL, you will need these. These libraries use MySQL's connector for C and Python (you need the C libraries installed as well), which overcomes some of the limitations of the mysqldb libraries. ``` sudo apt-get install libmysqlclient-dev sudo apt-get install python-mysql.connector sudo apt-get install python3-mysql.connector ```
This worked for me on Python 3: ``` pip install mysqlclient ```
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the below it seems like the issue is `sh: mysql_config: not found` Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
On Ubuntu it is advised to use the distribution's repository. So installing python-mysqldb should be straightforward: ``` sudo apt-get install python-mysqldb ``` If you actually want to use pip to install, which is, as mentioned before, not the suggested path but possible, please have a look at this previously asked question and answer: [pip install mysql-python fails with EnvironmentError: mysql\_config not found](https://stackoverflow.com/questions/5178292/pip-install-mysql-python-show-error) Here is a very comprehensive guide by the developer: <http://mysql-python.blogspot.no/2012/11/is-mysqldb-hard-to-install.html> To get all the prerequisites for python-mysqldb to install it using pip (which you will want to do if you are using virtualenv), run this: ``` sudo apt-get install build-essential python-dev libmysqlclient-dev ```
Reread the error message. It says: > > sh: mysql\_config: not found > > > If you are on Ubuntu Natty, `mysql_config` belongs to package [libmysqlclient-dev](http://packages.ubuntu.com/search?searchon=contents&keywords=mysql_config&mode=exactfilename&suite=natty&arch=any)
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the below it seems like the issue is `sh: mysql_config: not found` Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
Reread the error message. It says: > > sh: mysql\_config: not found > > > If you are on Ubuntu Natty, `mysql_config` belongs to package [libmysqlclient-dev](http://packages.ubuntu.com/search?searchon=contents&keywords=mysql_config&mode=exactfilename&suite=natty&arch=any)
In python3 with virtualenv on a Ubuntu Bionic machine the following commands worked for me: ``` sudo apt install build-essential python-dev libmysqlclient-dev sudo apt-get install libssl-dev pip install mysqlclient ```
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the below it seems like the issue is `sh: mysql_config: not found` Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
``` yum install mysql-devel ``` It worked for me.
1. find the folder: `sudo find / -name "mysql_config"` (assume it's `"/opt/local/lib/mysql5/bin"`) 2. add it to PATH: `export PATH=/opt/local/lib/mysql5/bin:$PATH` 3. run the install again
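Step 2 above can also be done from Python before invoking pip. A small sketch (the `/opt/local/lib/mysql5/bin` directory is just this answer's example path, not a universal location -- use whatever folder `find` reported on your machine):

```python
import os


def prepend_to_path(directory, env=None):
    """Prepend `directory` to the PATH variable in `env` (by default the
    real process environment) and return the new PATH value.

    Subprocesses started afterwards -- such as pip's build step -- will
    then be able to locate binaries in `directory`.
    """
    if env is None:
        env = os.environ
    env["PATH"] = directory + os.pathsep + env.get("PATH", "")
    return env["PATH"]


# Example with a throwaway dict instead of the real environment:
fake_env = {"PATH": "/usr/bin"}
print(prepend_to_path("/opt/local/lib/mysql5/bin", fake_env))
```

Note this only affects the current process and its children, just like the shell `export` above only affects the current shell session.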
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the output below it seems like the issue is `sh: mysql_config: not found`. Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
You have 2 options, as described below: --- Distribution package like Glaslos suggested: ``` # sudo apt-get install python-mysqldb ``` In this case you can't use virtualenv no-site-packages (the default option) but must use: ``` # virtualenv --system-site-packages myenv ``` --- Use a clean virtualenv and build your own python-mysql package. First create the virtualenv: ``` # virtualenv myvirtualenv # source myvirtualenv/bin/activate ``` Then install the build dependencies: ``` # sudo apt-get build-dep python-mysqldb ``` Now you can install python-mysql: ``` # pip install mysql-python ``` --- **NOTE** The Ubuntu package is python-mysql**db**, the Python PyPI package is python-mysql (without **db**)
This worked for me on Python 3: ``` pip install mysqlclient ```
7,459,766
I got the below failure while trying to get MySQL-python installed on my Ubuntu/Linux box. From the output below it seems like the issue is `sh: mysql_config: not found`. Could someone advise me on what to do? ``` rmicro@ubuntu:~$ pip install MySQL-python Downloading/unpacking MySQL-python Downloading MySQL-python-1.2.3.tar.gz (70Kb): 70Kb downloaded Running setup.py egg_info for package MySQL-python sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found Complete output from command python setup.py egg_info: sh: mysql_config: not found Traceback (most recent call last): File "<string>", line 14, in <module> File "/home/rmicro/build/MySQL-python/setup.py", line 15, in <module> metadata, options = get_config() File "setup_posix.py", line 43, in get_config libs = mysql_config("libs_r") File "setup_posix.py", line 24, in mysql_config raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found ---------------------------------------- Command python setup.py egg_info failed with error code 1 ```
2011/09/18
[ "https://Stackoverflow.com/questions/7459766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/618677/" ]
On Ubuntu it is advised to use the distribution's repository, so installing python-mysqldb should be straightforward: ``` sudo apt-get install python-mysqldb ``` If you actually want to install with pip, which as mentioned is not the suggested path but is possible, please have a look at this previously asked question and answer: [pip install mysql-python fails with EnvironmentError: mysql_config not found](https://stackoverflow.com/questions/5178292/pip-install-mysql-python-show-error) Here is a very comprehensive guide by the developer: <http://mysql-python.blogspot.no/2012/11/is-mysqldb-hard-to-install.html> To get all the prerequisites for python-mysqldb so you can install it using pip (which you will want to do if you are using virtualenv), run this: ``` sudo apt-get install build-essential python-dev libmysqlclient-dev ```
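The traceback in the question comes from MySQL-python's setup_posix.py shelling out to `mysql_config` during the build. A simplified illustration of that mechanism (this is not the actual setup_posix.py code; the function and error message merely mirror the traceback):

```python
import subprocess


def mysql_config(option, binary="mysql_config"):
    """Run `mysql_config --<option>` and return its output tokens.

    Raises EnvironmentError when the binary cannot be found, which is
    the same failure mode shown in the question's traceback.
    """
    try:
        out = subprocess.check_output([binary, "--" + option])
    except OSError:
        raise EnvironmentError("%s not found" % binary)
    return out.decode().split()
```

Because the build simply executes `mysql_config` from the PATH, installing libmysqlclient-dev (which ships that binary) is exactly what makes the error go away.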
``` yum install mysql-devel ``` It worked for me.