| column | type | min | max |
|------------|---------------|-----|--------|
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 17 | 26k |
| response_k | stringlengths | 26 | 26k |
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Assuming your file consists solely of sequences arranged one sequence per line, I would suggest the following: ``` seq_file = open("your_file.txt") sequences = [seq for seq in seq_file] uniques = list(set(sequences)) ``` Assuming you have the memory for it. How many millions? ETA: Was reading the comments above (but don't have comment privs) - assuming the sequence IDs are the same for any duplicates, this will work. If duplicate sequences can have different sequence IDs, then you would need to know which comes first and what is between them in the file.
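Since the question mentions Biopython, here is a rough sketch of the same idea applied to FASTA records with `SeqIO` — the file names are hypothetical placeholders, and the first record seen for each unique sequence is kept:

```python
from Bio import SeqIO

seen = set()
unique_records = []
for record in SeqIO.parse("input.fasta", "fasta"):
    seq = str(record.seq)
    if seq not in seen:             # first time this exact sequence appears
        seen.add(seq)
        unique_records.append(record)
SeqIO.write(unique_records, "unique.fasta", "fasta")
```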
Four things come to mind: 1. You can use a set(), as described by F.X. - assuming the unique strings will all fit in memory 2. You can use one file per sequence, and feed the files to a program like equivs3e: <http://stromberg.dnsalias.org/~strombrg/equivalence-classes.html#python-3e> 3. You could perhaps use a gdbm as a set, instead of its usual key-value store use. This is good if you need something that's 100% accurate, but you have too much data to fit all the uniques in Virtual Memory. 4. You could perhaps use a bloom filter to cut down the data to more manageable sizes, if you have a truly huge number of strings to check and lots of duplicates. Basically a bloom filter can say "This is definitely not in the set" or "This is almost definitely in the set". In this way, you can eliminate most of the obvious duplicates before using a more common means to operate on the remaining elements. <http://stromberg.dnsalias.org/~strombrg/drs-bloom-filter/>
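A minimal streaming sketch of option 1 above, assuming one sequence per line; the file names are placeholders. It holds only the set of unique sequences in memory and writes each sequence the first time it is seen, preserving the original order:

```python
seen = set()
with open("sequences.txt") as infile, open("uniques.txt", "w") as outfile:
    for line in infile:
        seq = line.strip()
        if seq and seq not in seen:   # skip blanks, keep first occurrence
            seen.add(seq)
            outfile.write(seq + "\n")
```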
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Assuming your file consists solely of sequences arranged one sequence per line, I would suggest the following: ``` seq_file = open("your_file.txt") sequences = [seq for seq in seq_file] uniques = list(set(sequences)) ``` Assuming you have the memory for it. How many millions? ETA: Was reading the comments above (but don't have comment privs) - assuming the sequence IDs are the same for any duplicates, this will work. If duplicate sequences can have different sequence IDs, then you would need to know which comes first and what is between them in the file.
Does it have to be Python? If the sequences are simply text strings one per line then a shell script will be very efficient: ``` sort input-file-name | uniq > output-file-name ``` This will do the job on files up to 2GB on 32 bit Linux. If you are on Windows then install the GNU utils <http://gnuwin32.sourceforge.net/summary.html>.
11,459,861
I am a molecular biologist using Biopython to analyze mutations in genes and my problem is this: I have a file containing many different sequences (millions), most of which are duplicates. I need to find the duplicates and discard them, keeping one copy of each unique sequence. I was planning on using the module editdist to calculate the edit distance between them all to determine which ones the duplicates are, but editdist can only work with 2 strings, not files. Anyone know how I can use that module with files instead of strings?
2012/07/12
[ "https://Stackoverflow.com/questions/11459861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1513202/" ]
Does it have to be Python? If the sequences are simply text strings one per line then a shell script will be very efficient: ``` sort input-file-name | uniq > output-file-name ``` This will do the job on files up to 2GB on 32 bit Linux. If you are on Windows then install the GNU utils <http://gnuwin32.sourceforge.net/summary.html>.
Four things come to mind: 1. You can use a set(), as described by F.X. - assuming the unique strings will all fit in memory 2. You can use one file per sequence, and feed the files to a program like equivs3e: <http://stromberg.dnsalias.org/~strombrg/equivalence-classes.html#python-3e> 3. You could perhaps use a gdbm as a set, instead of its usual key-value store use. This is good if you need something that's 100% accurate, but you have too much data to fit all the uniques in Virtual Memory. 4. You could perhaps use a bloom filter to cut down the data to more manageable sizes, if you have a truly huge number of strings to check and lots of duplicates. Basically a bloom filter can say "This is definitely not in the set" or "This is almost definitely in the set". In this way, you can eliminate most of the obvious duplicates before using a more common means to operate on the remaining elements. <http://stromberg.dnsalias.org/~strombrg/drs-bloom-filter/>
2,396,382
This is the script: ``` import ClientForm import urllib2 request = urllib2.Request("http://ritaj.birzeit.edu") response = urllib2.urlopen(request) forms = ClientForm.ParseResponse(response, backwards_compat=False) response.close() form = forms[0] print form sooform = str(raw_input("Form Name: ")) username = str(raw_input("Username: ")) password = str(raw_input("Password: ")) form[sooform] = [username, password] request2 = form.click() try: response2 = urllib2.urlopen(request2) except urllib2.HTTPError, response2: pass print response2.geturl() print response2.info() # headers print response2.read() # body response2.close() ``` When I start the script, I get this: ``` Traceback (most recent call last): File "C:/Python26/ritaj2.py", line 9, in <module> form = forms[0] IndexError: list index out of range ``` What is the problem? I am running on Windows, Python 2.6.4. **Update:** I want a script that logs in to this site and prints the response :)
2010/03/07
[ "https://Stackoverflow.com/questions/2396382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/288208/" ]
The only `<form>` tag in the HTML served at that URL (save it to a file and look for yourself!) is: ``` <form method="GET" action="http://www.google.com/u/ritaj"> ``` which does a customized Google search and has nothing to do with logging in (plus, for some reason, ClientForm has some problem identifying that specific form -- but that form is no use to you anyway, so I didn't explore that issue further). You can still get at the **controls** in the page by using ``` forms = ClientForm.ParseResponseEx(response) ``` which makes `forms[0]` an artificial one containing all controls that aren't within a form. Specifically, this approach identifies controls with the following names, in order (again there's a bit of parsing confusion here, but hopefully not a killer for you...): ``` >>> f = forms[0] >>> [c.name for c in f.controls] ['q', 'sitesearch', 'sa', 'domains', 'form:mode', 'form:id', '__confirmed_p', '__refreshing_p', 'return_url', 'time', 'token_id', 'hash', 'username', 'password', 'persistent_p', 'formbutton:ok'] ``` so you should be able to set the `username` and `password` controls of the "non-form form" `f`, and proceed from there. (A side bit: `raw_input` already returns a string, lose those redundant `str()` calls around it).
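A rough sketch of how the script could continue from there, using the control names listed above; the credentials are placeholders and this is untested against the live site (Python 2, matching the question's script):

```python
import urllib2
import ClientForm

response = urllib2.urlopen("http://ritaj.birzeit.edu")
forms = ClientForm.ParseResponseEx(response)
f = forms[0]                          # the artificial "non-form form"
f["username"] = "your_username"       # placeholder credentials
f["password"] = "your_password"
request2 = f.click("formbutton:ok")   # submit via the OK button control
response2 = urllib2.urlopen(request2)
print response2.read()
```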
The actual address seems to be using `https` instead of `http`. Check the [urllib2](http://docs.python.org/library/urllib2.html) doc to see if it handles HTTPS (I believe you need SSL support).
25,240,268
Say, for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > // COMMENT > > "key\_two" = "key\_two" > > > Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**. I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better. Here's the regex I came up with to match the keys: `"^\"\w*\""` And here's the regex I was trying to match the string with: `"= [\"a-zA-Z0-9 ]*"` These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try in Python to use the regex search and match functions, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
There is a quote that I heard from somewhere: "If you have a problem and you try to solve it with regular expressions, you now have two problems". What you want to achieve can be easily done with just a few inbuilt Python string methods such as `startswith()` and `split()`, without using any regex. In short you can do the following: ``` For each line of File 1 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Store the key/value in a dictionary For each line of File 2 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Check the dictionary to see if the key exists Output to the file as necessary ```
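A sketch of that pseudocode as concrete Python, assuming the `"key" = "value"` layout shown in the question; the file names are placeholders:

```python
# Build a key -> value dictionary from File 1, skipping comment lines.
values = {}
with open("file1.txt") as f1:
    for line in f1:
        stripped = line.strip()
        if not stripped or stripped.startswith("//"):
            continue
        key, _, value = stripped.partition(" = ")
        values[key] = value

# Rewrite File 2, replacing values for keys that File 1 also defines.
out_lines = []
with open("file2.txt") as f2:
    for line in f2:
        stripped = line.strip()
        key, _, value = stripped.partition(" = ")
        if not stripped.startswith("//") and key in values:
            out_lines.append("%s = %s" % (key, values[key]))
        else:
            out_lines.append(stripped)

with open("file2.txt", "w") as f2:
    f2.write("\n".join(out_lines) + "\n")
```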
If the text files are that predictable then you could use something like this: ``` import pprint def get_values(f): file1 = open(f, "r").readlines() values = {} for line in file1: if line[:2] != "//" and "=" in line: key, value = line.split("=") values[key] = value return values def replace_values(v1, v2): for key in v1: v = v1[key] if key in v2: v2[key] = v file1_values = get_values("file1.txt") file2_values = get_values("file2.txt") print "BEFORE" pprint.pprint(file1_values) pprint.pprint(file2_values) replace_values(file1_values, file2_values) print "AFTER" pprint.pprint(file1_values) pprint.pprint(file2_values) ``` The above code will do what you want and replace the values with the following output: ``` BEFORE {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "key_one"\n', '"key_two" ': ' "key_two"'} AFTER {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "String value for key one"\n', '"key_two" ': ' "String value for key two"\n'} ```
25,240,268
Say, for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > // COMMENT > > "key\_two" = "key\_two" > > > Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**. I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better. Here's the regex I came up with to match the keys: `"^\"\w*\""` And here's the regex I was trying to match the string with: `"= [\"a-zA-Z0-9 ]*"` These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try in Python to use the regex search and match functions, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
There is a quote that I heard from somewhere: "If you have a problem and you try to solve it with regular expressions, you now have two problems". What you want to achieve can be easily done with just a few inbuilt Python string methods such as `startswith()` and `split()`, without using any regex. In short you can do the following: ``` For each line of File 1 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Store the key/value in a dictionary For each line of File 2 Check if it's a comment line by checking that it starts with '//' If not a comment line, split it to `key` and `value` Check the dictionary to see if the key exists Output to the file as necessary ```
Using some of the tips given here I coded my own solution. It could probably be improved in a few places but I am pleased with myself for creating the solution without just copying and pasting someone else's answer. So, my solution: ``` import fileinput translations = {} with open('file1.txt', 'r') as fileOne: trans = fileOne.readlines() for line in trans: if (line.startswith("\"")): key, value = line.strip().split(" = ") translations[key] = value for line in fileinput.input('file2.txt', inplace=True): if (line.startswith("\"")): key, value = line.strip().split(" = ") if key in translations: line = "{} = {}".format(key, translations[key]) print line.strip() ``` I will give some upvotes to the useful answers still, if I can.
25,240,268
Say, for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > // COMMENT > > "key\_two" = "key\_two" > > > Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**. I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better. Here's the regex I came up with to match the keys: `"^\"\w*\""` And here's the regex I was trying to match the string with: `"= [\"a-zA-Z0-9 ]*"` These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try in Python to use the regex search and match functions, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
You can create a dictionary from the `FILE1` and then use it to replace values in `FILE2` (the `in values` check guards against keys that appear only in `FILE2`): ``` import fileinput import re pattern = re.compile(r'"(.*?)"\s+=\s+"(.*?)"') with open('FILE1', 'r') as f: values = dict(pattern.findall(f.read())) for line in fileinput.input('FILE2', inplace=True): match = pattern.match(line) if match and match.group(1) in values: line = '"%s" = "%s"' % (match.group(1), values[match.group(1)]) print line.strip() ```
If the text files are that predictable then you could use something like this: ``` import pprint def get_values(f): file1 = open(f, "r").readlines() values = {} for line in file1: if line[:2] != "//" and "=" in line: key, value = line.split("=") values[key] = value return values def replace_values(v1, v2): for key in v1: v = v1[key] if key in v2: v2[key] = v file1_values = get_values("file1.txt") file2_values = get_values("file2.txt") print "BEFORE" pprint.pprint(file1_values) pprint.pprint(file2_values) replace_values(file1_values, file2_values) print "AFTER" pprint.pprint(file1_values) pprint.pprint(file2_values) ``` The above code will do what you want and replace the values with the following output: ``` BEFORE {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "key_one"\n', '"key_two" ': ' "key_two"'} AFTER {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "String value for key one"\n', '"key_two" ': ' "String value for key two"\n'} ```
25,240,268
Say, for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > // COMMENT > > "key\_two" = "key\_two" > > > Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**. I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better. Here's the regex I came up with to match the keys: `"^\"\w*\""` And here's the regex I was trying to match the string with: `"= [\"a-zA-Z0-9 ]*"` These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try in Python to use the regex search and match functions, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
You can create a dictionary from the `FILE1` and then use it to replace values in `FILE2` (the `in values` check guards against keys that appear only in `FILE2`): ``` import fileinput import re pattern = re.compile(r'"(.*?)"\s+=\s+"(.*?)"') with open('FILE1', 'r') as f: values = dict(pattern.findall(f.read())) for line in fileinput.input('FILE2', inplace=True): match = pattern.match(line) if match and match.group(1) in values: line = '"%s" = "%s"' % (match.group(1), values[match.group(1)]) print line.strip() ```
Using some of the tips given here I coded my own solution. It could probably be improved in a few places but I am pleased with myself for creating the solution without just copying and pasting someone else's answer. So, my solution: ``` import fileinput translations = {} with open('file1.txt', 'r') as fileOne: trans = fileOne.readlines() for line in trans: if (line.startswith("\"")): key, value = line.strip().split(" = ") translations[key] = value for line in fileinput.input('file2.txt', inplace=True): if (line.startswith("\"")): key, value = line.strip().split(" = ") if key in translations: line = "{} = {}".format(key, translations[key]) print line.strip() ``` I will give some upvotes to the useful answers still, if I can.
25,240,268
Say, for example, I have two text files containing the following: **File 1** > > "key\_one" = "String value for key one" > > "key\_two" = "String value for key two" > > // COMMENT // > > "key\_three" = "String value for key two" > > > **File 2** > > // COMMENT > > "key\_one" = "key\_one" > > // COMMENT > > "key\_two" = "key\_two" > > > Now, I want to loop through **File 1** and get out each key and string value (if it's not a comment line). I then want to search **File 2** for the key and, if it's found, replace its string value with the string value from **File 1**. I'd guess using some regex would be good here, but that's where my plan fails. I don't really have a great understanding of regex, although I am getting better. Here's the regex I came up with to match the keys: `"^\"\w*\""` And here's the regex I was trying to match the string with: `"= [\"a-zA-Z0-9 ]*"` These may not be right or the best, so feel free to correct me. I am looking to complete this task using either a bash script or a Python script. I did try in Python to use the regex search and match functions, but with little success.
2014/08/11
[ "https://Stackoverflow.com/questions/25240268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1813167/" ]
If the text files are that predictable then you could use something like this: ``` import pprint def get_values(f): file1 = open(f, "r").readlines() values = {} for line in file1: if line[:2] != "//" and "=" in line: key, value = line.split("=") values[key] = value return values def replace_values(v1, v2): for key in v1: v = v1[key] if key in v2: v2[key] = v file1_values = get_values("file1.txt") file2_values = get_values("file2.txt") print "BEFORE" pprint.pprint(file1_values) pprint.pprint(file2_values) replace_values(file1_values, file2_values) print "AFTER" pprint.pprint(file1_values) pprint.pprint(file2_values) ``` The above code will do what you want and replace the values with the following output: ``` BEFORE {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "key_one"\n', '"key_two" ': ' "key_two"'} AFTER {'"key_one" ': ' "String value for key one"\n', '"key_three" ': ' "String value for key two"', '"key_two" ': ' "String value for key two"\n'} {'"key_one" ': ' "String value for key one"\n', '"key_two" ': ' "String value for key two"\n'} ```
Using some of the tips given here I coded my own solution. It could probably be improved in a few places but I am pleased with myself for creating the solution without just copying and pasting someone else's answer. So, my solution: ``` import fileinput translations = {} with open('file1.txt', 'r') as fileOne: trans = fileOne.readlines() for line in trans: if (line.startswith("\"")): key, value = line.strip().split(" = ") translations[key] = value for line in fileinput.input('file2.txt', inplace=True): if (line.startswith("\"")): key, value = line.strip().split(" = ") if key in translations: line = "{} = {}".format(key, translations[key]) print line.strip() ``` I will give some upvotes to the useful answers still, if I can.
50,201,607
TL;DR When updating from CMake 3.10 to CMake 3.11.1 on archlinux, the following configuration line: find\_package(Boost COMPONENTS python3 COMPONENTS numpy3 REQUIRED) leads to CMake linking against 3 different libraries ``` -- Boost version: 1.66.0 -- Found the following Boost libraries: -- python3 -- numpy3 -- python ``` instead of the previous behaviour: ``` -- Boost version: 1.66.0 -- Found the following Boost libraries: -- python3 -- numpy3 ``` resulting in a linker error. --- I use CMake to build a piece of software that relies on Boost python, and, since a couple of days ago, it seems that the line ``` find_package(Boost COMPONENTS numpy3 REQUIRED) ``` is no longer sufficient for CMake to understand that it should link the program against the Boost `python3` library, and it uses the Boost library `python` instead. Here is a minimal working example to reproduce what I am talking about. **test.cpp** ``` #include <iostream> using namespace std; int main() { cout << "Hello, world!" << endl; } ``` **CMakeList.txt** ``` set(CMAKE_VERBOSE_MAKEFILE ON) find_package(PythonLibs 3 REQUIRED) find_package(Boost COMPONENTS numpy3 REQUIRED) add_executable (test test.cpp) target_link_libraries(test ${Boost_LIBRARIES} ${PYTHON_LIBRARIES}) ``` With this configuration of CMake, a linker error will occur, and the error persists when I change the line adding numpy to ``` find_package(Boost COMPONENTS python3 COMPONENTS numpy3 REQUIRED) ``` Here is the result of `cmake . && make`: ``` /home/rastapopoulos/test $ cmake . -- Boost version: 1.66.0 -- Found the following Boost libraries: -- numpy3 -- python CMake Warning (dev) in CMakeLists.txt: No cmake_minimum_required command is present. A line of code such as cmake_minimum_required(VERSION 3.11) should be added at the top of the file. The version specified may be lower if you wish to support older CMake versions for this project. For more information run "cmake --help-policy CMP0000". This warning is for project developers. Use -Wno-dev to suppress it. 
-- Configuring done -- Generating done -- Build files have been written to: /home/rastapopoulos/test /home/rastapopoulos/test $ make /usr/bin/cmake -H/home/rastapopoulos/test -B/home/rastapopoulos/test --check-build-system CMakeFiles/Makefile.cmake 0 /usr/bin/cmake -E cmake_progress_start /home/rastapopoulos/test/CMakeFiles /home/rastapopoulos/test/CMakeFiles/progress.marks make -f CMakeFiles/Makefile2 all make[1]: Entering directory '/home/rastapopoulos/test' make -f CMakeFiles/test.dir/build.make CMakeFiles/test.dir/depend make[2]: Entering directory '/home/rastapopoulos/test' cd /home/rastapopoulos/test && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test /home/rastapopoulos/test/CMakeFi les/test.dir/DependInfo.cmake --color= make[2]: Leaving directory '/home/rastapopoulos/test' make -f CMakeFiles/test.dir/build.make CMakeFiles/test.dir/build make[2]: Entering directory '/home/rastapopoulos/test' [ 50%] Linking CXX executable test /usr/bin/cmake -E cmake_link_script CMakeFiles/test.dir/link.txt --verbose=1 /usr/bin/c++ -rdynamic CMakeFiles/test.dir/test.o -o test -lboost_numpy3 -lboost_python -lpython3.6m /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_Size' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyUnicodeUCS4_FromEncodedObject' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyFile_FromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyUnicodeUCS4_AsWideChar' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromStringAndSize' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `Py_InitModule4_64' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_FromFormat' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyNumber_Divide' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyNumber_InPlaceDivide' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_AsLong' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_InternFromString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyClass_Type' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyString_AsString' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyInt_FromLong' /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.1/../../../../lib/libboost_python.so: undefined reference to `PyFile_AsFile' collect2: error: ld returned 1 exit status make[2]: *** [CMakeFiles/test.dir/build.make:90: test] Error 1 make[2]: Leaving directory '/home/rastapopoulos/test' make[1]: *** 
[CMakeFiles/Makefile2:71: CMakeFiles/test.dir/all] Error 2 make[1]: Leaving directory '/home/rastapopoulos/test' make: *** [Makefile:87: all] Error 2 ``` Has anyone experienced a similar problem and managed to solve it? I use `cmake 3.11.1`, `boost 1.66.0-2`, and run an updated version of Archlinux.
2018/05/06
[ "https://Stackoverflow.com/questions/50201607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7141288/" ]
This bug is due to an invalid dependency description in `FindBoost.cmake` ``` set(_Boost_NUMPY_DEPENDENCIES python) ``` This has been fixed at <https://github.com/Kitware/CMake/commit/c747d4ccb349f87963a8d1da69394bc4db6b74ed> Please use the latest version, or you can fix it manually: ``` set(_Boost_NUMPY_DEPENDENCIES python${component_python_version}) ```
[CMake 3.10 does not properly support Boost 1.66](https://stackoverflow.com/a/42124857/2799037). The Boost dependencies are hard-coded, and if they change, CMake has to adapt. Delete the build directory and reconfigure. The configure step uses cached variables, which prevents re-detection with the newer routines.
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if I modify the code as follows: ``` for record in coll.find(<some query here>): try: #Code here #... #... coll.update({ '_id' : record['_id'] },record,safe=True) except: #Handle exception here ``` Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem? Please help. Thank you
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
Increase your PHP memory limit: `php_value memory_limit 64M` in your .htaccess, or `ini_set('memory_limit','64M');` in your PHP file
It depends on your implementation. The last time I was working on a CSV file with more than 500,000 records, I got the same message. Later I introduced classes and made sure to close open objects, which reduced the memory consumption. If you are opening an image and editing it, it is loaded into memory, so in that case the size really matters. If you are operating on multiple images, process one image at a time and close it before moving on to the next. In my experience, I had the same error when working on PDF artwork files to check the crop marks. ``` // you can set the memory limit values // in .htaccess php_value memory_limit 64M // or using the following in PHP ini_set('memory_limit', '128M'); // or update it in your php.ini file ``` But if you optimize your code and use an object-oriented approach, your memory consumption will be much lower, because every object has its own scope and is destroyed once it goes out of that scope.
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if I modify the code as follows: ``` for record in coll.find(<some query here>): try: #Code here #... #... coll.update({ '_id' : record['_id'] },record,safe=True) except: #Handle exception here ``` Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem? Please help. Thank you
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
As riky said, set the memory limit higher if you can. Also realize that the dimensions are more important than the file size (as the file size is for a compressed image). When you open an image in GD, every pixel gets 3-4 bytes allocated to it, RGB and possibly A. Thus, your 4912px x 3264px image needs to use 48,098,304 to 64,131,072 bytes of memory, plus there is overhead and any other memory your script is using.
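A quick back-of-the-envelope check of those figures, using Python as a calculator:

```python
width, height = 4912, 3264
for bytes_per_pixel in (3, 4):            # RGB vs. RGBA
    print width * height * bytes_per_pixel
# prints 48098304 and 64131072, the two figures quoted above
```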
Increase your PHP memory limit: `php_value memory_limit 64M` in your .htaccess, or `ini_set('memory_limit','64M');` in your PHP file
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if I modify the code as follows: ``` for record in coll.find(<some query here>): try: #Code here #... #... coll.update({ '_id' : record['_id'] },record,safe=True) except: #Handle exception here ``` Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem? Please help. Thank you
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
Increase your PHP memory limit: `php_value memory_limit 64M` in your .htaccess, or `ini_set('memory_limit','64M');` in your PHP file
The size of the used memory depends on the dimensions and the color bit depth. I also ran into that problem a few years ago while building a portfolio website for photographers. The only way to properly solve this is to switch your image library from GD to Imagick. Imagick consumes far less memory and is not tied to the PHP memory limit. I have to say that the images the photographers uploaded were up to 30MP, and setting the memory limit to over 1024MB makes no sense in my eyes.
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if I modify the code as follows: ``` for record in coll.find(<some query here>): try: #Code here #... #... coll.update({ '_id' : record['_id'] },record,safe=True) except: #Handle exception here ``` Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem? Please help. Thank you
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
As riky said, set the memory limit higher if you can. Also realize that the dimensions are more important than the file size (as the file size is for a compressed image). When you open an image in GD, every pixel gets 3-4 bytes allocated to it, RGB and possibly A. Thus, your 4912px x 3264px image needs to use 48,098,304 to 64,131,072 bytes of memory, plus there is overhead and any other memory your script is using.
It depends on your implementation. The last time I was working on a CSV file with more than 500,000 records, I got the same message. Later I introduced classes and made sure to close open objects, which reduced the memory consumption. If you are opening an image and editing it, it is loaded into memory, so in that case the size really matters. If you are operating on multiple images, process one image at a time and close it before moving on to the next. In my experience, I had the same error when working on PDF artwork files to check the crop marks. ``` // you can set the memory limit values // in .htaccess php_value memory_limit 64M // or using the following in PHP ini_set('memory_limit', '128M'); // or update it in your php.ini file ``` But if you optimize your code and use an object-oriented approach, your memory consumption will be much lower, because every object has its own scope and is destroyed once it goes out of that scope.
7,092,407
I'm working with a MongoDB database using the pymongo Python module. I have a function in my code which, when called, updates the records in the collection as follows. ``` for record in coll.find(<some query here>): #Code here #... #... coll.update({ '_id' : record['_id'] },record) ``` Now, if I modify the code as follows: ``` for record in coll.find(<some query here>): try: #Code here #... #... coll.update({ '_id' : record['_id'] },record,safe=True) except: #Handle exception here ``` Does this mean an exception will be thrown when the update fails, or will no exception be thrown and the update just skip the record, causing a problem? Please help. Thank you
2011/08/17
[ "https://Stackoverflow.com/questions/7092407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898562/" ]
As riky said, set the memory limit higher if you can. Also realize that the dimensions are more important than the file size (as the file size is for a compressed image). When you open an image in GD, every pixel gets 3-4 bytes allocated to it, RGB and possibly A. Thus, your 4912px x 3264px image needs to use 48,098,304 to 64,131,072 bytes of memory, plus there is overhead and any other memory your script is using.
The size of the used memory depends on the dimensions and the color bit depth. I also ran into that problem a few years ago while building a portfolio website for photographers. The only way to properly solve this is to switch your image library from GD to Imagick. Imagick consumes far less memory and is not tied to the PHP memory limit. I have to say that the images the photographers uploaded were up to 30MP, and setting the memory limit to over 1024MB makes no sense in my eyes.
41,504,340
This question [explains](https://stackoverflow.com/questions/7300321/how-to-use-pythons-pip-to-download-and-keep-the-zipped-files-for-a-package) how to make pip download and save packages. If I follow this formula, Pip will download wheel (.whl) files if available. ``` (venv) [user@host glances]$ pip download -d wheelhouse -r build_requirements.txt Collecting wheel (from -r build_requirements.txt (line 1)) File was already downloaded /usr_data/tmp/glances/wheelhouse/wheel-0.29.0-py2.py3-none-any.whl Collecting pex (from -r build_requirements.txt (line 2)) File was already downloaded /usr_data/tmp/glances/wheelhouse/pex-1.1.18-py2.py3-none-any.whl Collecting requests (from -r build_requirements.txt (line 3)) File was already downloaded /usr_data/tmp/glances/wheelhouse/requests-2.12.4-py2.py3-none-any.whl Collecting pip (from -r build_requirements.txt (line 4)) File was already downloaded /usr_data/tmp/glances/wheelhouse/pip-9.0.1-py2.py3-none-any.whl Collecting setuptools (from -r build_requirements.txt (line 5)) File was already downloaded /usr_data/tmp/glances/wheelhouse/setuptools-32.3.1-py2.py3-none-any.whl Successfully downloaded wheel pex requests pip setuptools ``` Every single file that it downloaded was a wheel - but what if I want to get a different kind of file? I actually want to download the sdist (.tar.gz) files in preference to .whl files. Is there a way to tell Pip what kinds of files I actually want it to get? So instead of getting a directory full of wheels I might want a bunch of tar.gz files.
2017/01/06
[ "https://Stackoverflow.com/questions/41504340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1179137/" ]
According to `pip install -h`: > > --no-use-wheel Do not Find and prefer wheel archives when searching indexes and find-links locations. DEPRECATED in favour of --no-binary. > > > And > > --no-binary Do not use binary packages. Can be supplied multiple times, and each time adds to the existing value. Accepts either :all: to disable all binary packages, :none: to empty the set, or one or more package > > > You may need to upgrade pip with `pip install -U pip` if your version is too old.
use `pip download --no-binary=:all: -r requirements.txt` According to the pip documentation: **--no-binary:** > > Do not use binary packages. Can be supplied multiple times, and each > time adds to the existing value. Accepts either :all: to disable all > binary packages, :none: to empty the set, or one or more package names > with commas between them. Note that some packages are tricky to > compile and may fail to install when this option is used on them. > > > It worked for me!
6,022,450
I'm using Scrapy to scrape a website. The item page that I want to scrape looks like: <http://www.somepage.com/itempage/&page=x>. Where `x` is any number from `1` to `100`. Thus, I have an `SgmlLinkExtractor` Rule with a callback function specified for any page resembling this. The website does not have a listpage with all the items, so I want to somehow tell scrapy to scrape those urls (from `1` to `100`). This guy [here](https://stackoverflow.com/questions/4640804/python-scrapy-how-to-fetch-an-url-not-via-following-links-inside-a-spider) seemed to have the same issue, but couldn't figure it out. Does anyone have a solution?
2011/05/16
[ "https://Stackoverflow.com/questions/6022450", "https://Stackoverflow.com", "https://Stackoverflow.com/users/648121/" ]
You could list all the known URLs in your [`Spider`](http://doc.scrapy.org/topics/spiders.html#spiders) class' [start\_urls](http://doc.scrapy.org/topics/spiders.html#scrapy.spider.BaseSpider.start_urls) attribute: ``` class SomepageSpider(BaseSpider): name = 'somepage.com' allowed_domains = ['somepage.com'] start_urls = ['http://www.somepage.com/itempage/&page=%s' % page for page in xrange(1, 101)] def parse(self, response): # ... ```
If it's just a one-time thing, you can create a local html file `file:///c:/somefile.html` with all the links. Start scraping that file and add `somepage.com` to allowed domains. Alternatively, in the parse function, you can return a new Request which is the next url to be scraped (see the sketch below).
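A rough sketch of that second suggestion in the old-style Scrapy API of this era; the URL pattern comes from the question, while the page-counting logic is an assumption:

```python
from scrapy.spider import BaseSpider
from scrapy.http import Request

class SomepageSpider(BaseSpider):
    name = 'somepage.com'
    allowed_domains = ['somepage.com']
    start_urls = ['http://www.somepage.com/itempage/&page=1']

    def parse(self, response):
        # ... extract items from this page here ...
        # then return a new Request for the next page, parsed by this same method
        page = int(response.url.rsplit('=', 1)[1])
        if page < 100:
            yield Request('http://www.somepage.com/itempage/&page=%d' % (page + 1),
                          callback=self.parse)
```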
58,635,279
I have created a brand new [Python repository](https://github.com/neuropsychology/NeuroKit) based on a cookie-cutter template. Everything looks okay, so I am now trying to set up testing and test coverage using travis and codecov. I am new to pytest but I am trying to do things right. After looking on the internet, I ended up with this setup: In [`.travis.yml`](https://github.com/neuropsychology/NeuroKit/blob/master/.travis.yml), I have added the following: ``` install: - pip install -U tox-travis - pip install coverage - pip install codecov script: - python setup.py install - tox - coverage run tests/test_foo.py ``` In my [`tox.ini`](https://github.com/neuropsychology/NeuroKit/blob/master/tox.ini) file: ``` [testenv] passenv = CI TRAVIS TRAVIS_* setenv = PYTHONPATH = {toxinidir} PIPENV_IGNORE_VIRTUALENVS=1 deps = pipenv codecov pytest {py27}: pathlib2 commands_pre = pipenv install --dev --skip-lock codecov ``` I have created a minimal [`tests/test_foo.py`](https://github.com/neuropsychology/NeuroKit/blob/master/tests/test_foo.py) file with the following (`foo()` is the only function currently present in the package). ```py import pytest import doctest import neurokit2 as nk if __name__ == '__main__': doctest.testmod() pytest.main() def test_foo(): assert nk.foo() == 4 ``` It seems that codecov, triggered by travis, does not run through the tests. Moreover, on travis, it says [`Error: No coverage report found`](https://travis-ci.org/neuropsychology/NeuroKit/jobs/604805529#L332) I wonder what I am doing wrong?
2019/10/31
[ "https://Stackoverflow.com/questions/58635279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4198688/" ]
1) Create a pytest.ini file in your project directory and add the following lines ``` [pytest] testpaths = tests python_files = *.py python_functions = test_* ``` 2) Create a .coveragerc file in the project directory and add the following lines ``` [report] fail_under = 90 show_missing = True ``` 3) Run pytest for code coverage ``` pytest --verbose --color=yes --cov=<directory for which you need code coverage> --assert=plain ``` Note: The directory for which you need code coverage must be inside the project directory
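For reference, a minimal test file that the settings above would pick up: it lives under `tests/` (matching `testpaths`), the file name matches `*.py`, and the function name matches `test_*`:

```python
# tests/test_example.py
def test_addition():
    assert 1 + 1 == 2
```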
Looks like you're missing `coverage` on your installs. You have it on scripts but it might not be running. Try adding `pip install coverage` in your travis.yml file. Have a go at this too: [codecov](https://github.com/codecov/example-python)
44,492,238
I am learning Python and trying to scrape a website with 10 listings of properties on each page. I want to extract information from each listing on each page. My code for the first 5 pages is as follows: ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,5): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i) urls.append(pages) for info in urls: page = requests.get(info) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] Data = [] for urls in hrefs: pages = requests.get(urls) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) ``` The above code is not working for me. Please let me know the correct coding to achieve the purpose.
2017/06/12
[ "https://Stackoverflow.com/questions/44492238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7961265/" ]
One problem in your code is that you declared the variable "urls" twice. You need to update the code like below: ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,6): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i) urls.append(pages) Data = [] for info in urls: page = requests.get(info) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] for href in hrefs: pages = requests.get(href) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) print Data ```
Use headers in the code and use string concatenation instead of .format(i). The code looks like this ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,6): pages = 'http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-' + str(i) + '?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true' urls.append(pages) Data = [] for info in urls: headers = {'User-agent':'Mozilla/5.0'} page = requests.get(info,headers=headers) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] for href in hrefs: pages = requests.get(href) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) print Data ```
44,492,238
I am learning Python and trying to scrape a website with 10 listings of properties on each page. I want to extract information from each listing on each page. My code for the first 5 pages is as follows: ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,5): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i) urls.append(pages) for info in urls: page = requests.get(info) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] Data = [] for urls in hrefs: pages = requests.get(urls) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) ``` The above code is not working for me. Please let me know the correct coding to achieve the purpose.
2017/06/12
[ "https://Stackoverflow.com/questions/44492238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7961265/" ]
One problem in your code is that you declared the variable "urls" twice. You need to update the code like below: ``` import requests from bs4 import BeautifulSoup urls = [] for i in range(1,6): pages = "http://www.realcommercial.com.au/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-{0}?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true".format(i) urls.append(pages) Data = [] for info in urls: page = requests.get(info) soup = BeautifulSoup(page.content, 'html.parser') links = soup.find_all('a', attrs ={'class' :'details-panel'}) hrefs = [link['href'] for link in links] for href in hrefs: pages = requests.get(href) soup_2 =BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 =soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1=soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name=[Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1=soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name=[Agent_Name.text.strip() for Agent_Name in Agent_1] Data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) print Data ```
You can tell BeautifulSoup to only give you links containing a `href` to make your code safer. Also, rather than modifying your URL to include a page number, you could extract the `next >` link at the bottom. This would also then automatically stop when the final page has been returned: ``` import requests from bs4 import BeautifulSoup base_url = r"http://www.realcommercial.com.au" url = base_url + "/sold/property-offices-retail-showrooms+bulky+goods-land+development-hotel+leisure-medical+consulting-other-in-vic/list-1?includePropertiesWithin=includesurrounding&activeSort=list-date&autoSuggest=true" data = [] for _ in range(10): print(url) page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') hrefs = [link['href'] for link in soup.find_all('a', attrs={'class' : 'details-panel'}, href=True)] for href in hrefs: pages = requests.get(href) soup_2 = BeautifulSoup(pages.content, 'html.parser') Address_1 = soup_2.find_all('p', attrs={'class' :'full-address'}) Address = [Address.text.strip() for Address in Address_1] Date = soup_2.find_all('li', attrs ={'class' :'sold-date'}) Sold_Date = [Sold_Date.text.strip() for Sold_Date in Date] Area_1 = soup_2.find_all('ul', attrs={'class' :'summaryList'}) Area = [Area.text.strip() for Area in Area_1] Agency_1 = soup_2.find_all('div', attrs={'class' :'agencyName ellipsis'}) Agency_Name = [Agency_Name.text.strip() for Agency_Name in Agency_1] Agent_1 = soup_2.find_all('div', attrs={'class' :'agentName ellipsis'}) Agent_Name = [Agent_Name.text.strip() for Agent_Name in Agent_1] data.append(Sold_Date+Address+Area+Agency_Name+Agent_Name) # Find next page (if any) next_button = soup.find('li', class_='rui-pagination-next') if next_button: url = base_url + next_button.parent['href'] else: break for entry in data: print(entry) print("---------") ```
21,778,187
I would like to find text in a file with a regular expression and afterwards replace it with another name. I have to read the file line by line at first because otherwise re.match(...) can't find the text. My test file where I would like to make modifications is (not all of it, I removed some code): ``` //... #include <boost/test/included/unit_test.hpp> #ifndef FUNCTIONS_TESTSUITE_H #define FUNCTIONS_TESTSUITE_H //... BOOST_AUTO_TEST_SUITE(FunctionsTS) BOOST_AUTO_TEST_CASE(test) { std::string l_dbConfigDataFileName = "../../Config/configDB.cfg"; DB::FUNCTIONS::DBConfigData l_dbConfigData; //... } BOOST_AUTO_TEST_SUITE_END() //... ``` Now the Python code which replaces the configDB name with another. I have to find the configDB.cfg name by regular expression because the name changes all the time. Only the name is needed, not the extension. Code: ``` import fileinput import re myfile = "Tset.cpp" # first search expression - ok, working: finds and prints configDB with open(myfile) as f: for line in f: matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I) if matchObj: print "Search : ", matchObj.group(2) # now replace the searched expression with another name - find and replace one more time - not working - the file after running this code is empty?!!! for line in fileinput.FileInput(myfile, inplace=1): matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I) if matchObj: line = line.replace("Config","AnotherConfig") ```
2014/02/14
[ "https://Stackoverflow.com/questions/21778187", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693143/" ]
Looks like this isn't possible to do. To cut down on duplicate code, simply declare the error handling function separately and reuse it inside the response and responseError functions. ``` $httpProvider.interceptors.push(function($q) { var handleError = function (rejection) { ... } return { response: function (response) { if (response.data.error) { return handleError(response); } return response; }, responseError: handleError } }); ```
To add to this answer: rejecting the promise in the response interceptor DOES do something. Although one would expect it to call the responseError at first glance, this would not make a lot of sense: the request is fulfilled with success. But rejecting it in the response interceptor will make the caller of the promise go into error handling. So when doing this ``` $http.get('some_url') .then(succes) .catch(err) ``` Rejecting the promise will call the catch function. So you don't have your proper generic error handling, but your promise IS rejected, and that's useful :-)
21,778,187
I would like to find text in a file with a regular expression and then replace it with another name. I have to read the file line by line first, because otherwise re.match(...) can't find the text. My test file, in which I would like to make the modifications, is (not all of it; I removed some code):

```
//...
#include <boost/test/included/unit_test.hpp>

#ifndef FUNCTIONS_TESTSUITE_H
#define FUNCTIONS_TESTSUITE_H
//...

BOOST_AUTO_TEST_SUITE(FunctionsTS)
BOOST_AUTO_TEST_CASE(test)
{
    std::string l_dbConfigDataFileName = "../../Config/configDB.cfg";
    DB::FUNCTIONS::DBConfigData l_dbConfigData;
//...
}
BOOST_AUTO_TEST_SUITE_END()
//...
```

Now the Python code that replaces the configDB name with another one. I have to find the configDB.cfg name with a regular expression, because the name keeps changing. Only the name needs replacing, not the extension. Code:

```
import fileinput
import re

myfile = "Tset.cpp"

# first search for the expression - OK, this works: it finds and prints configDB
with open(myfile) as f:
    for line in f:
        matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
        if matchObj:
            print "Search : ", matchObj.group(2)

# now replace the matched expression with another name - find and replace
# once more, a different way - not working: the file is empty after running this code?!
for line in fileinput.FileInput(myfile, inplace=1):
    matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
    if matchObj:
        line = line.replace("Config","AnotherConfig")
```
2014/02/14
[ "https://Stackoverflow.com/questions/21778187", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693143/" ]
Looks like this isn't possible to do. To cut down on duplicate code, simply declare the error handling function separately and reuse it inside the response and responseError functions. ``` $httpProvider.interceptors.push(function($q) { var handleError = function (rejection) { ... } return { response: function (response) { if (response.data.error) { return handleError(response); } return response; }, responseError: handleError } }); ```
Should you want to pass the http response to the responseError handler, you could do it like this: ``` $httpProvider.interceptors.push(function($q) { var self = { response: function (response) { if (response.data.error) { return self.responseError(response); } return response; }, responseError: function(response) { // ... do things with the response } } return self; }); ```
21,778,187
I would like to find text in a file with a regular expression and then replace it with another name. I have to read the file line by line first, because otherwise re.match(...) can't find the text. My test file, in which I would like to make the modifications, is (not all of it; I removed some code):

```
//...
#include <boost/test/included/unit_test.hpp>

#ifndef FUNCTIONS_TESTSUITE_H
#define FUNCTIONS_TESTSUITE_H
//...

BOOST_AUTO_TEST_SUITE(FunctionsTS)
BOOST_AUTO_TEST_CASE(test)
{
    std::string l_dbConfigDataFileName = "../../Config/configDB.cfg";
    DB::FUNCTIONS::DBConfigData l_dbConfigData;
//...
}
BOOST_AUTO_TEST_SUITE_END()
//...
```

Now the Python code that replaces the configDB name with another one. I have to find the configDB.cfg name with a regular expression, because the name keeps changing. Only the name needs replacing, not the extension. Code:

```
import fileinput
import re

myfile = "Tset.cpp"

# first search for the expression - OK, this works: it finds and prints configDB
with open(myfile) as f:
    for line in f:
        matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
        if matchObj:
            print "Search : ", matchObj.group(2)

# now replace the matched expression with another name - find and replace
# once more, a different way - not working: the file is empty after running this code?!
for line in fileinput.FileInput(myfile, inplace=1):
    matchObj = re.match( r'(.*)../Config/(.*).cfg(.*)', line, re.M|re.I)
    if matchObj:
        line = line.replace("Config","AnotherConfig")
```
2014/02/14
[ "https://Stackoverflow.com/questions/21778187", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693143/" ]
To add to this answer: rejecting the promise in the response interceptor DOES do something. Although at first glance one would expect it to call responseError, that would not make a lot of sense: the request was fulfilled with success. But rejecting it in the response interceptor will make the caller of the promise go into error handling. So when doing this

```
$http.get('some_url')
    .then(success)
    .catch(err)
```

rejecting the promise will call the catch function. So you don't get your proper generic error handling, but your promise IS rejected, and that's useful :-)
Should you want to pass the http response to the responseError handler, you could do it like this: ``` $httpProvider.interceptors.push(function($q) { var self = { response: function (response) { if (response.data.error) { return self.responseError(response); } return response; }, responseError: function(response) { // ... do things with the response } } return self; }); ```
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You can reinstall TensorFlow\_hub: ``` pip install ipykernel pip install tensorflow_hub ```
I believe your python3 runtime is not really running with TensorFlow 1.7; that attribute has existed since TensorFlow 1.4. I suspect a mismatch between Python 2/3 environments, a mismatch from installing with pip vs. pip3, or an issue from installing both the tensorflow and tf-nightly pip packages. You can double check with:

```
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
```
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You should have at least TensorFlow 1.7.0; upgrade with:

```
pip install "tensorflow>=1.7.0"
```

and then:

```
pip install tensorflow-hub
```

source: [here](https://www.tensorflow.org/hub/installation)
I believe your python3 runtime is not really running with TensorFlow 1.7; that attribute has existed since TensorFlow 1.4. I suspect a mismatch between Python 2/3 environments, a mismatch from installing with pip vs. pip3, or an issue from installing both the tensorflow and tf-nightly pip packages. You can double check with:

```
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
```
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
I believe your python3 runtime is not really running with TensorFlow 1.7; that attribute has existed since TensorFlow 1.4. I suspect a mismatch between Python 2/3 environments, a mismatch from installing with pip vs. pip3, or an issue from installing both the tensorflow and tf-nightly pip packages. You can double check with:

```
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
```
Just run the following line in a cell: `import tensorflow_hub as hub`
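Since all of these answers come down to the installed TensorFlow being older than 1.7, here is a minimal sketch of a defensive import (my own suggestion, not from any answer above):

```
import tensorflow as tf

# tensorflow_hub needs TensorFlow >= 1.7; fail fast with a clear message
# instead of the opaque AttributeError from estimator_lib.
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
if (major, minor) < (1, 7):
    raise ImportError("tensorflow_hub requires TensorFlow >= 1.7, found %s"
                      % tf.__version__)

import tensorflow_hub as hub
```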
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You should have at least TensorFlow 1.7.0; upgrade with:

```
pip install "tensorflow>=1.7.0"
```

and then:

```
pip install tensorflow-hub
```

source: [here](https://www.tensorflow.org/hub/installation)
You can reinstall TensorFlow\_hub: ``` pip install ipykernel pip install tensorflow_hub ```
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You can reinstall TensorFlow\_hub: ``` pip install ipykernel pip install tensorflow_hub ```
Just run the following line in a cell: `import tensorflow_hub as hub`
49,773,418
After writing `import tensorflow_hub`, the following error emerges:

```
class LatestModuleExporter(tf.estimator.Exporter):
```

AttributeError: module 'tensorflow.python.estimator.estimator\_lib' has no attribute 'Exporter'

I'm using Python 3.6 with TensorFlow 1.7 on Windows 10. Thanks!
2018/04/11
[ "https://Stackoverflow.com/questions/49773418", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2393805/" ]
You should have at least TensorFlow 1.7.0; upgrade with:

```
pip install "tensorflow>=1.7.0"
```

and then:

```
pip install tensorflow-hub
```

source: [here](https://www.tensorflow.org/hub/installation)
Just run the following line in a cell: `import tensorflow_hub as hub`
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
You typically need to use `glibtool` and `glibtoolize`, since `libtool` already exists on OS X as a binary tool for creating Mach-O dynamic libraries. So, that's how MacPorts installs it, using a program name transform, though the port itself is still named 'libtool'. Some `autogen.sh` scripts (or their equivalent) will honor the `LIBTOOL` / `LIBTOOLIZE` environment variables. I have a line in my own `autogen.sh` scripts: ``` case `uname` in Darwin*) glibtoolize --copy ;; *) libtoolize --copy ;; esac ``` You may or may not want the `--copy` flag. --- Note: If you've installed the autotools using MacPorts, a correctly written `configure.ac` with `Makefile.am` files should only require `autoreconf -fvi`. It should call `glibtoolize`, etc., as expected. Otherwise, some packages will distribute an `autogen.sh` or similar script.
I hope my answer is not too naive; I am a noob to OSX. Running `brew install libtool` ([Homebrew](http://brew.sh/)) solved a similar issue for me.
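The list-pairing question above describes its algorithm only in prose; here is a minimal sketch of it (assuming, as in the example, that each element of `b` occurs exactly once in `a`):

```
a = [2, 6, 12, 13, 1, 4, 5]
b = [12, 1]

# For each x in b, pair it with every element of a that precedes
# x's position in a, walking backwards from that position.
result = [(x, y) for x in b for y in reversed(a[:a.index(x)])]
print(result)
# [(12, 6), (12, 2), (1, 13), (1, 12), (1, 6), (1, 2)]
```

For lists of around 100 elements this is fine; for much larger inputs the repeated `a.index(x)` scans could be replaced with a precomputed `{value: position}` dict.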
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
You typically need to use `glibtool` and `glibtoolize`, since `libtool` already exists on OS X as a binary tool for creating Mach-O dynamic libraries. So, that's how MacPorts installs it, using a program name transform, though the port itself is still named 'libtool'. Some `autogen.sh` scripts (or their equivalent) will honor the `LIBTOOL` / `LIBTOOLIZE` environment variables. I have a line in my own `autogen.sh` scripts: ``` case `uname` in Darwin*) glibtoolize --copy ;; *) libtoolize --copy ;; esac ``` You may or may not want the `--copy` flag. --- Note: If you've installed the autotools using MacPorts, a correctly written `configure.ac` with `Makefile.am` files should only require `autoreconf -fvi`. It should call `glibtoolize`, etc., as expected. Otherwise, some packages will distribute an `autogen.sh` or similar script.
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Homepage: https://www.gnu.org/software/libtool Build Dependencies: xattr Platforms: darwin, freebsd License: libtool Maintainers: Email: larryv@macports.org, GitHub: larryv ``` Then like Brew, you do: ``` $ sudo port install libtool Password: ---> Fetching archive for libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2 from https://packages.macports.org/libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2.rmd160 from https://packages.macports.org/libtool ---> Installing libtool @2.4.6_5 ---> Activating libtool @2.4.6_5 ---> Cleaning libtool ---> Updating database of binaries ---> Updating database of C++ stdlib usage ---> Scanning binaries for linking errors ---> No broken files found. ---> No broken ports found. ``` Then you can check where it lives ... btw, you can soft-link glibtoolize to libtoolize. For my needs either was okay ``` $ which glibtoolize /opt/local/bin/glibtoolize ```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
You typically need to use `glibtool` and `glibtoolize`, since `libtool` already exists on OS X as a binary tool for creating Mach-O dynamic libraries. So, that's how MacPorts installs it, using a program name transform, though the port itself is still named 'libtool'. Some `autogen.sh` scripts (or their equivalent) will honor the `LIBTOOL` / `LIBTOOLIZE` environment variables. I have a line in my own `autogen.sh` scripts: ``` case `uname` in Darwin*) glibtoolize --copy ;; *) libtoolize --copy ;; esac ``` You may or may not want the `--copy` flag. --- Note: If you've installed the autotools using MacPorts, a correctly written `configure.ac` with `Makefile.am` files should only require `autoreconf -fvi`. It should call `glibtoolize`, etc., as expected. Otherwise, some packages will distribute an `autogen.sh` or similar script.
To bring together a few threads: `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. You can install it and then create a soft link for libtoolize as follows:

```
brew install libtool
ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize
```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
I hope my answer is not too naive; I am a noob to OSX. Running `brew install libtool` ([Homebrew](http://brew.sh/)) solved a similar issue for me.
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Homepage: https://www.gnu.org/software/libtool Build Dependencies: xattr Platforms: darwin, freebsd License: libtool Maintainers: Email: larryv@macports.org, GitHub: larryv ``` Then like Brew, you do: ``` $ sudo port install libtool Password: ---> Fetching archive for libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2 from https://packages.macports.org/libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2.rmd160 from https://packages.macports.org/libtool ---> Installing libtool @2.4.6_5 ---> Activating libtool @2.4.6_5 ---> Cleaning libtool ---> Updating database of binaries ---> Updating database of C++ stdlib usage ---> Scanning binaries for linking errors ---> No broken files found. ---> No broken ports found. ``` Then you can check where it lives ... btw, you can soft-link glibtoolize to libtoolize. For my needs either was okay ``` $ which glibtoolize /opt/local/bin/glibtoolize ```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
I hope my answer is not too naive; I am a noob to OSX. Running `brew install libtool` ([Homebrew](http://brew.sh/)) solved a similar issue for me.
To bring together a few threads: `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. You can install it and then create a soft link for libtoolize as follows:

```
brew install libtool
ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize
```
15,448,584
I have 2 lists `a = [2, 6, 12, 13, 1, 4, 5]` and `b = [12, 1]`. The elements in list `b` are a subset of list `a`. From the above pair of lists, I need to create a list of tuples as follows:

```
[(12,6),(12,2),(1,13),(1,12),(1,6),(1,2)]
```

Basically, for each point where list `b` intersects list `a` — in the example above, the first intersection point is at index `2`, the value `12` — I create tuples pairing that element of `b` with each element of `a` that precedes it, in reverse order. I am trying this in Python; any suggestions for efficiently creating these tuples? Please note that each list can have 100 elements in it.
2013/03/16
[ "https://Stackoverflow.com/questions/15448584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1923226/" ]
To bring together a few threads: `libtoolize` is installed as `glibtoolize` when you install `libtool` using **brew**. You can install it and then create a soft link for libtoolize as follows:

```
brew install libtool
ln -s /usr/local/bin/glibtoolize /usr/local/bin/libtoolize
```
An alternative to Brew is to use `macports`. For example: ``` $ port info libtool libtool @2.4.6_5 (devel, sysutils) Variants: universal Description: GNU libtool is a generic library support script. Libtool hides the complexity of using shared libraries behind a consistent, portable interface. Homepage: https://www.gnu.org/software/libtool Build Dependencies: xattr Platforms: darwin, freebsd License: libtool Maintainers: Email: larryv@macports.org, GitHub: larryv ``` Then like Brew, you do: ``` $ sudo port install libtool Password: ---> Fetching archive for libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2 from https://packages.macports.org/libtool ---> Attempting to fetch libtool-2.4.6_5.darwin_15.x86_64.tbz2.rmd160 from https://packages.macports.org/libtool ---> Installing libtool @2.4.6_5 ---> Activating libtool @2.4.6_5 ---> Cleaning libtool ---> Updating database of binaries ---> Updating database of C++ stdlib usage ---> Scanning binaries for linking errors ---> No broken files found. ---> No broken ports found. ``` Then you can check where it lives ... btw, you can soft-link glibtoolize to libtoolize. For my needs either was okay ``` $ which glibtoolize /opt/local/bin/glibtoolize ```
68,438,620
I am trying to build and run the sample `python` application from AWS SAM. I just installed Python; below is what the command line gives:

```
D:\Udemy Work>python
Python 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

D:\Udemy Work>pip -V
pip 21.1.3 from c:\users\user\appdata\local\programs\python\python39\lib\site-packages\pip (python 3.9)
```

When I run `sam build`, I get the following error:

```
Build Failed
Error: PythonPipBuilder:Validation - Binary validation failed for python, searched for python in following locations : ['C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python39\\python.EXE', 'C:\\Users\\User\\AppData\\Local\\Microsoft\\WindowsApps\\python.EXE'] which did not satisfy constraints for runtime: python3.8. Do you have python for runtime: python3.8 on your PATH?
```

Below is my code.

**template.yaml**

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  python-test

  Sample SAM Template for python-test

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
```

**app.py**

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  python-test

  Sample SAM Template for python-test

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.9
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
```

If I change the runtime in the yaml, I get the following error:

```
PS D:\Udemy Work\awslambda\python-test> sam build
Building codeuri: D:\Udemy Work\awslambda\python-test\hello_world runtime: python3.9 metadata: {} functions: ['HelloWorldFunction']

Build Failed
Error: 'python3.9' runtime is not supported
```

What is the solution here?
2021/07/19
[ "https://Stackoverflow.com/questions/68438620", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1379286/" ]
You basically need unpivot or melt: <https://pandas.pydata.org/docs/reference/api/pandas.melt.html> ``` pd.melt(df, id_vars=['Number','From','To'], value_vars = ['D1_value','D2_value'])\ .rename({'variable':'Type'},axis=1)\ .dropna(subset=['value'],axis=0) ```
You can also use `pd.wide_to_long`, after reordering the column positions: ``` temp = df.rename(columns = lambda col: "_".join(col.split("_")[::-1]) if col.endswith("value") else col) pd.wide_to_long(temp, stubnames = 'value', i=['Number', 'From', 'To'], j='Type', sep='_', suffix=".+").dropna().reset_index() Out[19]: Number From To Type value 0 111 A B D1 10.0 1 111 A B D2 12.0 2 222 B A D2 4.0 3 222 B A D3 6.0 ``` You could also use `pivot_longer` from `pyjanitor` : ``` # pip install pyjanitor import janitor import pandas as pd df.pivot_longer(index = slice('Number', 'To'), #.value keeps column labels associated with it # as column headers names_to=('Type', '.value'), names_sep='_').dropna() Out[22]: Number From To Type value 0 111 A B D1 10.0 2 111 A B D2 12.0 3 222 B A D2 4.0 5 222 B A D3 6.0 ``` You can also use `stack`: ``` df = df.set_index(['Number', 'From', 'To']) # this creates a MultiIndex column df.columns = df.columns.str.split("_", expand = True) df.columns.names = ['Type', None] # stack has dropna=True as default df.stack(level = 0).reset_index() Number From To Type value 0 111 A B D1 10.0 1 111 A B D2 12.0 2 222 B A D2 4.0 3 222 B A D3 6.0 ```
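The input frame these two answers reshape isn't shown in this excerpt; a plausible reconstruction, inferred from the outputs above (so the exact NaN placement is an assumption), lets both snippets be run as-is:

```
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "Number": [111, 222],
    "From": ["A", "B"],
    "To": ["B", "A"],
    "D1_value": [10.0, np.nan],  # NaN entries are the rows dropped by dropna()
    "D2_value": [12.0, 4.0],
    "D3_value": [np.nan, 6.0],
})
```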
28,967,976
I'm reading a pcap file in Python using scapy; it contains Ethernet packets that have trailers. How can I remove these trailers?

P.S.: Ethernet packets cannot be shorter than 64 bytes (including the FCS). Network adapters add zero padding bytes to the end of a packet to meet this minimum; these padding bytes are called the "trailer". See [here](https://wiki.wireshark.org/Ethernet#Allowed_Packet_Lengths) for more information.
2015/03/10
[ "https://Stackoverflow.com/questions/28967976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2133144/" ]
It seems there is no official way to remove it. This works on frames that have IPv4 as the network-layer protocol:

```
packet_without_trailer = IP(str(packet[IP])[0:packet[IP].len])
```
Just use the upper layers and ignore the Ethernet layer: `packet = eval(originalPacket[IP])`
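Applying the first answer's trick to a whole capture, a minimal sketch (the file names are placeholders, and like the answer it assumes Python 2-era scapy, where a packet coerces to `str`):

```
from scapy.all import rdpcap, wrpcap, IP

packets = rdpcap("capture.pcap")

trimmed = []
for pkt in packets:
    if IP in pkt:
        # Keep only the bytes the IP header declares; anything past
        # pkt[IP].len is Ethernet padding (the "trailer").
        trimmed.append(IP(str(pkt[IP])[:pkt[IP].len]))
    else:
        trimmed.append(pkt)

wrpcap("trimmed.pcap", trimmed)
```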
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like this:

```
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass    # see issue 5971
        finally:
            del ei
```

Now temporarily edit it. Inserting a simple `raise` at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
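A minimal end-to-end sketch of the monkey-patching approach (the handler setup and the deliberately bad log call are mine, for illustration):

```
import logging

handler = logging.StreamHandler()
logger = logging.getLogger(__name__)
logger.addHandler(handler)

def handleError(record):
    raise RuntimeError(record)

handler.handleError = handleError

# Two placeholders, one argument: without the patch this error is
# swallowed; with it, the traceback points at this exact line.
logger.error("values: %s %s", "only-one")
```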
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like this:

```
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass    # see issue 5971
        finally:
            del ei
```

Now temporarily edit it. Inserting a simple `raise` at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" print "record.filename ", record.filename print "record.lineno ", record.lineno print "record.msg ", record.msg print "record.args: ",record.args raise FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s' formatter = DebugFormatter(FORMAT) handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) handler.setFormatter(formatter) logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(handler) ```
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like this:

```
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass    # see issue 5971
        finally:
            del ei
```

Now temporarily edit it. Inserting a simple `raise` at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a', a)` became `logging.info('a', a)` (but it should be `logging.info('a %s' % a)` instead). This was also hinted at in [How to traceback logging errors?](https://stackoverflow.com/questions/13459085/how-to-traceback-logging-errors), but it doesn't come up easily in searches.
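A short illustration of the fix described above; note that passing the argument separately, rather than `%`-formatting it eagerly, is the idiomatic form:

```
import logging
logging.basicConfig(level=logging.INFO)

a = 42
# print('a', a) must NOT become logging.info('a', a): the extra
# argument is treated as a %-format value with no placeholder.
logging.info('a %s', a)   # formatting is deferred until the record is emitted
```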
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
The logging module is designed to stop bad log messages from killing the rest of the code, so the `emit` method catches errors and passes them to a method `handleError`. The easiest thing for you to do would be to temporarily edit `/usr/lib/python2.6/logging/__init__.py`, and find `handleError`. It looks something like this:

```
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2], None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass    # see issue 5971
        finally:
            del ei
```

Now temporarily edit it. Inserting a simple `raise` at the start should ensure the error gets propagated up your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
**Had the same problem.** Such a traceback arises from a wrong format (attribute) name. So when creating a format for a log file, check the attribute names against the Python documentation: <https://docs.python.org/3/library/logging.html#formatter-objects>
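To make the "check the format name" advice concrete, a small sketch using only documented LogRecord attributes (a typo such as `%(level)s` instead of `%(levelname)s` is exactly what triggers these tracebacks):

```
import logging

# Every %(...)s name must be a documented LogRecord attribute;
# e.g. %(level)s (instead of %(levelname)s) fails at format time.
formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.error("something happened")
```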
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" print "record.filename ", record.filename print "record.lineno ", record.lineno print "record.msg ", record.msg print "record.args: ",record.args raise FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s' formatter = DebugFormatter(FORMAT) handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) handler.setFormatter(formatter) logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(handler) ```
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a', a)` became `logging.info('a', a)` (but it should be `logging.info('a %s' % a)` instead). This was also hinted at in [How to traceback logging errors?](https://stackoverflow.com/questions/13459085/how-to-traceback-logging-errors), but it doesn't come up easily in searches.
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Rather than editing installed python code, you can also find the errors like this: ``` def handleError(record): raise RuntimeError(record) handler.handleError = handleError ``` where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
**Had the same problem.** Such a traceback arises from a wrong format (attribute) name. So when creating a format for a log file, check the attribute names against the Python documentation: <https://docs.python.org/3/library/logging.html#formatter-objects>
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a', a)` became `logging.info('a', a)` (but it should be `logging.info('a %s' % a)` instead). This was also hinted at in [How to traceback logging errors?](https://stackoverflow.com/questions/13459085/how-to-traceback-logging-errors), but it doesn't come up easily in searches.
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" print "record.filename ", record.filename print "record.lineno ", record.lineno print "record.msg ", record.msg print "record.args: ",record.args raise FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s' formatter = DebugFormatter(FORMAT) handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) handler.setFormatter(formatter) logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(handler) ```
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
Alternatively you can create a formatter of your own, but then you have to include it everywhere. ``` class DebugFormatter(logging.Formatter): def format(self, record): try: return super(DebugFormatter, self).format(record) except: print "Unable to format record" print "record.filename ", record.filename print "record.lineno ", record.lineno print "record.msg ", record.msg print "record.args: ",record.args raise FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s' formatter = DebugFormatter(FORMAT) handler = logging.StreamHandler() handler.setLevel(logging.DEBUG) handler.setFormatter(formatter) logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) logger.addHandler(handler) ```
**Had the same problem.** Such a traceback arises from a wrong format (attribute) name. So when creating a format for a log file, check the attribute names against the Python documentation: <https://docs.python.org/3/library/logging.html#formatter-objects>
5,127,860
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement: ``` Traceback (most recent call last): File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 436, in format record.message = record.getMessage() File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` I'm only starting to use python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack-trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
2011/02/26
[ "https://Stackoverflow.com/questions/5127860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/254704/" ]
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me. My problem was that I replaced all occurrences of print with logging.info, so a valid line like `print('a', a)` became `logging.info('a', a)` (but it should be `logging.info('a %s' % a)` instead). This was also hinted at in [How to traceback logging errors?](https://stackoverflow.com/questions/13459085/how-to-traceback-logging-errors), but it doesn't come up easily in searches.
**Had the same problem.** Such a traceback arises from a wrong format (attribute) name. So when creating a format for a log file, check the attribute names against the Python documentation: <https://docs.python.org/3/library/logging.html#formatter-objects>
52,629,106
Hello everyone, I have a file which consists of some random information, but I only want the part that is important to me.

```
name: Zack
age: 17
As Mixed: Zack:17
Subjects opted : 3
Subject #1: Arts

name: Mike
age: 15
As Mixed: Mike:15
Subjects opted : 3
Subject #1: Arts
```

Above is an example of my text file. I want the **Zack:17** and **Mike:15** parts to be written to a text file and everything else to be ignored. I watched some YouTube videos and came across the split statement in Python, but it didn't work.

My code example:

```
with open("/home/ninja/Desktop/raw.txt","r") as raw:
    for rec in raw:
        print rec.split('As Mixed: ')[0]
```

This didn't work either. Any help will really help me finish this project. Thanks.
2018/10/03
[ "https://Stackoverflow.com/questions/52629106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9606164/" ]
You can split the data at the `:` and grab only the `As Mixed` parameter:

```
# the guard skips blank lines, which would otherwise break the unpacking below
content = [i.strip('\n').split(': ', 1) for i in open('filename.txt') if ': ' in i]
results = [b for a, b in content if a.startswith('As Mixed')]
```

Output:

```
['Zack:17', 'Mike:15']
```

To write the results to a file:

```
with open('results.txt', 'w') as f:   # write to a new file, not the input
    for i in results:
        f.write(f'{i}\n')
```
Try this:

```
import re

found = []
match = re.compile(r'(Mike|Zack):(\w*)')

with open('/home/ninja/Desktop/raw.txt', "r") as raw:
    for rec in raw:
        found.extend(match.findall(rec))

print(found)
# output: [('Zack', '17'), ('Mike', '15')]
```

This uses regular expressions to find the values needed. Basically, `(Mike|Zack):(\w*)` finds Mike or Zack, then a `:` character, and then as many word characters as it can. To learn more about regular expressions, you can read this page: <https://docs.python.org/3.4/library/re.html>
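Both answers above read line by line; since the target lines all carry the `As Mixed:` label, a single pass over the whole file also works (a sketch; the output file name is a placeholder):

```
import re

with open("/home/ninja/Desktop/raw.txt") as raw:
    text = raw.read()

# Capture the "name:age" token that follows each "As Mixed:" label.
pairs = re.findall(r"As Mixed:\s*(\w+:\d+)", text)

with open("/home/ninja/Desktop/mixed.txt", "w") as out:
    out.write("\n".join(pairs))
```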
31,112,523
I am using this Python script to download OSM data and convert it to an undirected networkx graph: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e>

However, in the ideal case I would like to generate a directed graph from it, in order to reflect the directionality of the OSM street network.

First of all, can you confirm that, as stated [here](https://help.openstreetmap.org/answer_link/15463/) and [here](https://wiki.openstreetmap.org/wiki/Way), in OSM raw XML data the order of the nd entries in a way is what matters for the direction?

And secondly, how would you suggest implementing the generation of a directed graph from the OSM raw data, given the above gist code snippet as a template?

Many thanks!
2015/06/29
[ "https://Stackoverflow.com/questions/31112523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2772305/" ]
The order of the nodes only matters if the way is tagged with *[oneway](https://wiki.openstreetmap.org/wiki/Key:oneway)=yes* or *oneway=-1*. Otherwise the way is bidirectional. This applies only for vehicles of course. The only exception is *[highway=motorway](https://wiki.openstreetmap.org/wiki/Tag:highway%3Dmotorway)* which implies *oneway=yes*. You might also be interested in the [routing](https://wiki.openstreetmap.org/wiki/Routing) wiki page. It lists two routers implemented in python, and many others.
OK, I updated my script in order to enable directionality: <https://gist.github.com/rajanski/ccf65d4f5106c2cdc70e>
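Building on the oneway rules above, a minimal sketch of how the gist's edge-adding step could honour direction with a `networkx.DiGraph` (the `tags` dict and the helper are my own illustration, not the gist's actual code):

```
import networkx as nx

def add_way(graph, node_ids, tags):
    # highway=motorway implies oneway=yes unless tagged otherwise
    default = "yes" if tags.get("highway") == "motorway" else "no"
    oneway = tags.get("oneway", default)
    if oneway == "-1":                    # oneway against the drawing order
        node_ids = list(reversed(node_ids))
    for u, v in zip(node_ids, node_ids[1:]):
        graph.add_edge(u, v)
        if oneway not in ("yes", "true", "1", "-1"):
            graph.add_edge(v, u)          # bidirectional ways get both arcs

G = nx.DiGraph()
add_way(G, [10, 11, 12], {"highway": "residential"})  # adds both directions
add_way(G, [12, 13], {"highway": "motorway"})         # one direction only
```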
45,382,324
I will try to be very specific and informative. I want to create a Dockerfile with all the packages that are used in the geosciences, for the good of the geospatial/geoscientific community. The Dockerfile is built on top of the [scipy-notebook](https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook) docker-stack.

**The problem:** I am trying to build HPGL (a Python package for geostatistics). For the dependencies, I built some packages using `apt-get`, and for those packages that I couldn't install via `apt` I downloaded the .deb packages. The Dockerfile below shows the steps for building all the HPGL dependencies:

```
FROM jupyter/scipy-notebook

###
### HPGL - High Performance Geostatistics Library
###

USER root

RUN apt-get update && \
    apt-get install -y \
    gcc \
    g++ \
    libboost-all-dev

RUN apt-get update && \
    apt-get install -y \
    liblapack-dev \
    libblas-dev \
    liblapacke-dev

RUN apt-get update && \
    apt-get install -y \
    scons

RUN wget http://ftp.us.debian.org/debian/pool/main/libf/libf2c2/libf2c2_20090411-2_amd64.deb && \
    dpkg -i libf2c2_20090411-2_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/libf/libf2c2/libf2c2-dev_20090411-2_amd64.deb && \
    dpkg -i libf2c2-dev_20090411-2_amd64.deb

RUN wget http://mirrors.kernel.org/ubuntu/pool/universe/c/clapack/libcblas3_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libcblas3_3.2.1+dfsg-1_amd64.deb

RUN wget http://mirrors.kernel.org/ubuntu/pool/universe/c/clapack/libcblas-dev_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libcblas-dev_3.2.1+dfsg-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/c/clapack/libclapack3_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libclapack3_3.2.1+dfsg-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/c/clapack/libclapack-dev_3.2.1+dfsg-1_amd64.deb && \
    dpkg -i libclapack-dev_3.2.1+dfsg-1_amd64.deb

RUN wget https://mirror.kku.ac.th/ubuntu/ubuntu/pool/main/l/lapack/libtmglib3_3.7.1-1_amd64.deb && \
    dpkg -i libtmglib3_3.7.1-1_amd64.deb

RUN wget http://ftp.us.debian.org/debian/pool/main/l/lapack/libtmglib-dev_3.7.1-1_amd64.deb && \
    dpkg -i libtmglib-dev_3.7.1-1_amd64.deb

RUN git clone https://github.com/hpgl/hpgl.git

RUN cd hpgl/src/ && \
    bash -c "source activate python2 && scons -j 2"

RUN cd hpgl/src/ && \
    bash -c "source activate python2 && python2 setup.py install"

RUN rm -rf hpgl \
    scons-2.5.0* \
    libf2c2_20090411-2_amd64.deb \
    libf2c2-dev_20090411-2_amd64.deb \
    libtmglib3_3.7.1-1_amd64.deb \
    libtmglib-dev_3.7.1-1_amd64.deb \
    libcblas3_3.2.1+dfsg-1_amd64.deb \
    libcblas-dev_3.2.1+dfsg-1_amd64.deb \
    libclapack3_3.2.1+dfsg-1_amd64.deb \
    libclapack-dev_3.2.1+dfsg-1_amd64.deb

USER $NB_USER
```

This runs smoothly and I can run the Docker container and start notebooks, but when I import HPGL in Python I get this error, and I have no idea what is happening or how to solve it:

```
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-1-604a7d0744ab> in <module>()
----> 1 import geo_bsd

/opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/__init__.py in <module>()
      2
      3
----> 4 from geo import *
      5 from sgs import sgs_simulation
      6 from sis import sis_simulation

/opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/geo.py in <module>()
      3 import ctypes as C
      4
----> 5 from hpgl_wrap import _HPGL_SHAPE, _HPGL_CONT_MASKED_ARRAY, _HPGL_IND_MASKED_ARRAY, _HPGL_UBYTE_ARRAY, _HPGL_FLOAT_ARRAY, _HPGL_OK_PARAMS, _HPGL_SK_PARAMS, _HPGL_IK_PARAMS, _HPGL_MEDIAN_IK_PARAMS, __hpgl_cov_params_t,
__hpgl_cockriging_m1_params_t, __hpgl_cockriging_m2_params_t, _hpgl_so 6 from hpgl_wrap import hpgl_output_handler, hpgl_progress_handler 7 /opt/conda/envs/python2/lib/python2.7/site-packages/HPGL_BSD-0.9.9-py2.7.egg/geo_bsd/hpgl_wrap.py in <module>() 144 _hpgl_so = NC.load_library('hpgl_d', __file__) 145 else: --> 146 _hpgl_so = NC.load_library('hpgl', __file__) 147 148 _hpgl_so.hpgl_set_output_handler.restype = None /opt/conda/envs/python2/lib/python2.7/site-packages/numpy/ctypeslib.pyc in load_library(libname, loader_path) 148 if os.path.exists(libpath): 149 try: --> 150 return ctypes.cdll[libpath] 151 except OSError: 152 ## defective lib file /opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __getitem__(self, name) 435 436 def __getitem__(self, name): --> 437 return getattr(self, name) 438 439 def LoadLibrary(self, name): /opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __getattr__(self, name) 430 if name[0] == '_': 431 raise AttributeError(name) --> 432 dll = self._dlltype(name) 433 setattr(self, name, dll) 434 return dll /opt/conda/envs/python2/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error) 360 361 if handle is None: --> 362 self._handle = _dlopen(self._name, mode) 363 else: 364 self._handle = handle OSError: /usr/lib/libf2c.so.2: undefined symbol: MAIN__ ``` EDIT1: So apparently there is this very similar problem pointed by @Jean-François Fabre [Here!](https://stackoverflow.com/questions/8345725/linker-errors-with-fortran-to-c-library-usr-lib-libf2c-so-undefined-referenc). There, the problem was related to the file `libf2c.so` and was solved like this: ``` rm /usr/lib/libf2c.so && ln -s /usr/lib/libf2c.a /usr/lib/libf2c.so ``` This solution was explained by @p929 in the same thread: > > What it does is in fact is to delete the dynamic library and create an > alias to the static library. > > > Now, I understand that I have the same problem, but with a different file (`/usr/lib/libf2c.so.2`). The solution would be to "delete the dynamic library and create an alias to the static library". I tried that with the same static library `/usr/lib/libf2c.a` and had no success.
2017/07/28
[ "https://Stackoverflow.com/questions/45382324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5361345/" ]
> 
> looks like it would work in the older groovy Jenkinsfiles
> 
> 

You can use the `script` step to enclose a block of code, and, inside this block, declarative pipelines basically act like scripted ones, so you can still use the technique described in the answer you referenced.

Welcome to Stack Overflow. I hope you enjoy yourself here.
I was facing the same issue and found that using the following avoids "Requires approval of the script" in my Jenkins server at Jenkins > Manage Jenkins > In-process Script Approval.

Instead of:

`env['setup_build_number'] = setupResult.getNumber()` (from the code mentioned in the solution above)

Use this:

`env.setup_build_number = setupResult.getNumber()`
26,154,104
I'm trying to run the following Cypher query in neomodel: ``` MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p ``` which works great on neo4j via the server console. It returns 3 nodes with two relationships connecting. However, when I attempt the following in python: ``` # Py2Neo version of cypher query in python from py2neo import neo4j graph_db = neo4j.GraphDatabaseService() shortest_path_text = "MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p" results = neo4j.CypherQuery(graph_db, shortest_path_text).execute() ``` or ``` # neomodel version of cypher query in python from neomodel import db shortest_path_text = "MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), p = shortestPath((b1)-[*..15]-(b2)) RETURN p" results, meta = db.cypher_query(shortest_path_text) ``` both give the following error: ``` /Library/Python/2.7/site-packages/neomodel-1.0.1-py2.7.egg/neomodel/util.py in _hydrated(data) 73 elif obj_type == 'relationship': 74 return Rel(data) ---> 75 raise NotImplemented("Don't know how to inflate: " + repr(data)) 76 elif neo4j.is_collection(data): 77 return type(data)([_hydrated(datum) for datum in data]) TypeError: 'NotImplementedType' object is not callable ``` which makes sense considering neomodel is based on py2neo. The main question is how to get a shortestPath query to work via either of these? Is there a better method within python? or is cypher the best way to do it? edit: I also tried the following from [here](https://stackoverflow.com/questions/19989994/cypher-query-in-py2neo) which gave the same error. ``` graph_db = neo4j.GraphDatabaseService() query_string = "START beginning=node(1), end=node(4) \ MATCH p = shortestPath(beginning-[*..500]-end) \ RETURN p" result = neo4j.CypherQuery(graph_db, query_string).execute() for r in result: print type(r) # r is a py2neo.util.Record object print type(r.p) # p is a py2neo.neo4j.Path object ```
2014/10/02
[ "https://Stackoverflow.com/questions/26154104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4101066/" ]
Ok, I figured it out. I used the tutorial [here]( based on @nigel-small's answer.

```
from py2neo import cypher
session = cypher.Session("http://localhost:7474")
tx = session.create_transaction()
tx.append("START beginning=node(3), end=node(16) MATCH p = shortestPath(beginning-[*..500]-end) RETURN p")
tx.execute()
```

which returned:

```
[[Record(columns=(u'p',), values=(Path(Node('http://localhost:7474/db/data/node/3'), ('threads', {}), Node('http://localhost:7474/db/data/node/1'), ('threads', {}), Node('http://localhost:7474/db/data/node/2'), ('threads', {}), Node('http://localhost:7474/db/data/node/16')),))]]
```

From here, I expect I'll inflate each of the values back to my neomodel objects and into Django for easier manipulation. Will post that code as I get there.
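In the meantime, a rough sketch of that inflation step (this assumes the py2neo 1.6-era API, where a transaction's `execute()` returns one list of `Record`s per appended statement, a `Record` exposes `.values`, and a `Node` has `get_properties()`; check your version's docs):

```python
# Capture what tx.execute() returns instead of discarding it as above.
results = tx.execute()
for records in results:          # one list of Records per appended statement
    for record in records:
        path = record.values[0]  # the RETURNed p
        for node in path.nodes:
            props = node.get_properties()  # plain dict of node properties
            print props          # e.g. hand these dicts to neomodel here
```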
The error message you provide is specific to neomodel and looks to have been raised as there is not yet any support for inflating py2neo Path objects in neomodel. This should however work fine in raw py2neo as paths are fully supported, so it may be worth trying that again. Py2neo certainly wouldn't raise an error from within the neomodel code. I've just tried a `shortestPath` query myself and it returns a value as expected.
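For reference, a minimal sketch of such a query in raw py2neo (the label and `text` properties come from the question; the `.nodes` / `get_properties()` calls assume the 1.6-era API):

```python
from py2neo import neo4j

graph_db = neo4j.GraphDatabaseService()
query = ("MATCH (b1:Bal { text:'flame' }), (b2:Bal { text:'candle' }), "
         "p = shortestPath((b1)-[*..15]-(b2)) RETURN p")
for record in neo4j.CypherQuery(graph_db, query).execute():
    path = record.p  # a py2neo.neo4j.Path, as in the question's edit
    for node in path.nodes:
        print node.get_properties()
```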
70,929,680
I have a dataframe

```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})
```

which looks like this

| col1 | col2 |
| --- | --- |
| 0 | "15" |
| 0 | [10,15,20] |
| 0 | "30" |
| 0 | [20,25] |
| 0 | NaN |

For col2, I need the highest value of each row, e.g. 15 for the first row and 20 for the second row, so that I end up with the following dataframe:

```
df2 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": [15, 20, 30, 25, np.nan]})
```

which should look like this

| col1 | col2 |
| --- | --- |
| 0 | 15 |
| 0 | 20 |
| 0 | 30 |
| 0 | 25 |
| 0 | NaN |

I tried using a for-loop that checks which type col2 has in each row, and then converts str to int, applies max() to lists and leaves NaNs as they are, but did not succeed. This is how I tried (although I suggest just ignoring my attempt):

```
col = df1["col2"]

coltypes = []
for i in col: #get type of each row
    coltype = type(i)
    coltypes.append(coltype)
df1["coltypes"] = coltypes

#assign value to col3 based on type
df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
                       np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))
```

Giving the following error

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-b8eb266d5519> in <module>
      9 
     10 df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
---> 11                        np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))

TypeError: '>' not supported between instances of 'type' and 'type'
```
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Let us try `explode` then `groupby` with `max` ``` out = df1.col2.explode().groupby(level=0).max() Out[208]: 0 15 1 20 2 30 3 25 4 NaN Name: col2, dtype: object ```
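Note that `col2` mixes strings like `"15"` with numbers, so the result above keeps `object` dtype; if you need actual numbers, one way (a short sketch) is to coerce with `pd.to_numeric`:

```python
import pandas as pd

out = df1.col2.explode().groupby(level=0).max()
df1['col2'] = pd.to_numeric(out, errors='coerce')  # "15" -> 15.0, NaN stays NaN
```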
```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})

res = df1['col2']
lis = []
for i in res:
    if type(i) == str:
        i = int(i)          # "15" -> 15
    if type(i) == list:
        lis.append(max(i))  # keep the largest element of the list
    else:
        lis.append(i)       # ints and NaN pass through unchanged
df1['col2'] = lis
df1
```

I think this is what you want:

[![enter image description here](https://i.stack.imgur.com/lk0Pb.png)](https://i.stack.imgur.com/lk0Pb.png)
70,929,680
I have a dataframe

```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})
```

which looks like this

| col1 | col2 |
| --- | --- |
| 0 | "15" |
| 0 | [10,15,20] |
| 0 | "30" |
| 0 | [20,25] |
| 0 | NaN |

For col2, I need the highest value of each row, e.g. 15 for the first row and 20 for the second row, so that I end up with the following dataframe:

```
df2 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": [15, 20, 30, 25, np.nan]})
```

which should look like this

| col1 | col2 |
| --- | --- |
| 0 | 15 |
| 0 | 20 |
| 0 | 30 |
| 0 | 25 |
| 0 | NaN |

I tried using a for-loop that checks which type col2 has in each row, and then converts str to int, applies max() to lists and leaves NaNs as they are, but did not succeed. This is how I tried (although I suggest just ignoring my attempt):

```
col = df1["col2"]

coltypes = []
for i in col: #get type of each row
    coltype = type(i)
    coltypes.append(coltype)
df1["coltypes"] = coltypes

#assign value to col3 based on type
df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
                       np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))
```

Giving the following error

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-b8eb266d5519> in <module>
      9 
     10 df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
---> 11                        np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))

TypeError: '>' not supported between instances of 'type' and 'type'
```
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Let us try `explode` then `groupby` with `max` ``` out = df1.col2.explode().groupby(level=0).max() Out[208]: 0 15 1 20 2 30 3 25 4 NaN Name: col2, dtype: object ```
Another approach that might be easier to understand would be using `apply()` with a simple function that returns the max depending on the type. ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) def get_max(x): if isinstance(x, list): return max(x) elif isinstance(x, str): return int(x) else: return x df1['max'] = df1['col2'].apply(get_max) print(df1) ``` Output would be: ``` col1 col2 max 0 0 15 15.0 1 0 [10, 15, 20] 20.0 2 0 30 30.0 3 0 [20, 25] 25.0 4 0 NaN NaN ```
70,929,680
I have a dataframe

```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})
```

which looks like this

| col1 | col2 |
| --- | --- |
| 0 | "15" |
| 0 | [10,15,20] |
| 0 | "30" |
| 0 | [20,25] |
| 0 | NaN |

For col2, I need the highest value of each row, e.g. 15 for the first row and 20 for the second row, so that I end up with the following dataframe:

```
df2 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": [15, 20, 30, 25, np.nan]})
```

which should look like this

| col1 | col2 |
| --- | --- |
| 0 | 15 |
| 0 | 20 |
| 0 | 30 |
| 0 | 25 |
| 0 | NaN |

I tried using a for-loop that checks which type col2 has in each row, and then converts str to int, applies max() to lists and leaves NaNs as they are, but did not succeed. This is how I tried (although I suggest just ignoring my attempt):

```
col = df1["col2"]

coltypes = []
for i in col: #get type of each row
    coltype = type(i)
    coltypes.append(coltype)
df1["coltypes"] = coltypes

#assign value to col3 based on type
df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
                       np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))
```

Giving the following error

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-b8eb266d5519> in <module>
      9 
     10 df1["col3"] = np.where(df1["coltypes"] == str, df1["col1"].astype(int),
---> 11                        np.where(df1["coltypes"] == list, max(df1["coltypes"]), np.nan))

TypeError: '>' not supported between instances of 'type' and 'type'
```
2022/01/31
[ "https://Stackoverflow.com/questions/70929680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15815734/" ]
Another approach that might be easier to understand would be using `apply()` with a simple function that returns the max depending on the type. ``` import pandas as pd import numpy as np df1 = pd.DataFrame.from_dict( {"col1": [0, 0, 0, 0, 0], "col2": ["15", [10,15,20], "30", [20, 25], np.nan]}) def get_max(x): if isinstance(x, list): return max(x) elif isinstance(x, str): return int(x) else: return x df1['max'] = df1['col2'].apply(get_max) print(df1) ``` Output would be: ``` col1 col2 max 0 0 15 15.0 1 0 [10, 15, 20] 20.0 2 0 30 30.0 3 0 [20, 25] 25.0 4 0 NaN NaN ```
```
import pandas as pd
import numpy as np

df1 = pd.DataFrame.from_dict(
    {"col1": [0, 0, 0, 0, 0],
     "col2": ["15", [10,15,20], "30", [20, 25], np.nan]})

res = df1['col2']
lis = []
for i in res:
    if type(i) == str:
        i = int(i)          # "15" -> 15
    if type(i) == list:
        lis.append(max(i))  # keep the largest element of the list
    else:
        lis.append(i)       # ints and NaN pass through unchanged
df1['col2'] = lis
df1
```

I think this is what you want:

[![enter image description here](https://i.stack.imgur.com/lk0Pb.png)](https://i.stack.imgur.com/lk0Pb.png)
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern:

```
with open("testDTC.txt") as f:
    for line in f:
        if line.strip() and not line.startswith(":"):
            spl = line.split(None,2)
            print("{} ;{}".format(" ".join(spl[:2]),spl[2]))
```

To write the changes to the original file you can use `fileinput.input` with `inplace=True`:

```
from fileinput import input

for line in input("testDTC.txt",inplace=True):
    if line.strip() and not line.startswith(":"):
        spl = line.split(None,2)
        print("{} ;{}".format(" ".join(spl[:2]),spl[2]),end="")
    else:
        print(line,end="")
```

Instead of indexing we can unpack:

```
a, b, c = line.split(None,2)
print("{} {} ;{}".format(a, b, c),end="")
```

Output:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

For Python 2 you can remove the `end=""` and use a comma after the print statement instead, i.e. `print(line),`

We avoid the starting paragraph lines with `line.startswith(":")` and the empty lines with `if line.strip()`.
Based on your example, it seems that in your second column you have a number or numbers separated by spaces, e.g. `8`, `6`, followed by some description in the third column which seems not to have any numbers. If this is the case in general, not only for this example, you can use this fact to search for the number separated by the spaces and add `;` after it as follows:

```
import re

rep = re.compile(r'(\s\d+\s)')

out_lines = []
with open("file.txt", "r+") as f:
    for line in f:
        re_match = rep.search(line)
        if re_match:
            # append ; after the found expression.
            line = line.replace(re_match.group(1), re_match.group(1)+';')
        out_lines.append(line)

with open('file2.txt', 'w') as f:
    f.writelines(out_lines)
```

The file2.txt obtained is as follows:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on.

```
with open('test.txt') as infile:
    with open('out.txt', 'w') as outfile:
        for line in infile:
            if not line.strip() or line.startswith(':'):  # Blank or : line
                outfile.write(line)                       # pass it through
            else:
                line_parts = line.split(None, 2)  # split at most twice
                try:  # try adding the semicolon after the 2nd space
                    line_parts[2] = ';' + line_parts[2]
                except IndexError:
                    pass
                outfile.write(' '.join(line_parts))
```

If you actually want to read characters in a file one at a time, you end up using the `read` method along with `seek`, but that is unnecessary in Python since you have high-level constructs like file iteration and powerful string methods to help you.
Based on your example, it seems that in your second column you have a number or numbers separated by spaces, e.g. `8`, `6`, followed by some description in the third column which seems not to have any numbers. If this is the case in general, not only for this example, you can use this fact to search for the number separated by the spaces and add `;` after it as follows:

```
import re

rep = re.compile(r'(\s\d+\s)')

out_lines = []
with open("file.txt", "r+") as f:
    for line in f:
        re_match = rep.search(line)
        if re_match:
            # append ; after the found expression.
            line = line.replace(re_match.group(1), re_match.group(1)+';')
        out_lines.append(line)

with open('file2.txt', 'w') as f:
    f.writelines(out_lines)
```

The file2.txt obtained is as follows:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern:

```
with open("testDTC.txt") as f:
    for line in f:
        if line.strip() and not line.startswith(":"):
            spl = line.split(None,2)
            print("{} ;{}".format(" ".join(spl[:2]),spl[2]))
```

To write the changes to the original file you can use `fileinput.input` with `inplace=True`:

```
from fileinput import input

for line in input("testDTC.txt",inplace=True):
    if line.strip() and not line.startswith(":"):
        spl = line.split(None,2)
        print("{} ;{}".format(" ".join(spl[:2]),spl[2]),end="")
    else:
        print(line,end="")
```

Instead of indexing we can unpack:

```
a, b, c = line.split(None,2)
print("{} {} ;{}".format(a, b, c),end="")
```

Output:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

For Python 2 you can remove the `end=""` and use a comma after the print statement instead, i.e. `print(line),`

We avoid the starting paragraph lines with `line.startswith(":")` and the empty lines with `if line.strip()`.
This is what I would do: ``` for line in f: if ' ' in line: sp = line.split(' ', 2) line = '%s %s ;%s' % (sp[0], sp[1], sp[2]) ```
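If it helps, a hedged sketch of wiring those three lines into an in-place rewrite with `fileinput` (file name taken from the question; lines containing a space are assumed to have at least three space-separated fields, matching the sample data, while `:dodge`-style headers and blank lines pass through untouched):

```python
import fileinput

for line in fileinput.input('testDTC.txt', inplace=True):
    if ' ' in line:
        sp = line.split(' ', 2)
        line = '%s %s ;%s' % (sp[0], sp[1], sp[2])
    print(line, end='')  # inplace mode redirects stdout into the file
```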
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on.

```
with open('test.txt') as infile:
    with open('out.txt', 'w') as outfile:
        for line in infile:
            if not line.strip() or line.startswith(':'):  # Blank or : line
                outfile.write(line)                       # pass it through
            else:
                line_parts = line.split(None, 2)  # split at most twice
                try:  # try adding the semicolon after the 2nd space
                    line_parts[2] = ';' + line_parts[2]
                except IndexError:
                    pass
                outfile.write(' '.join(line_parts))
```

If you actually want to read characters in a file one at a time, you end up using the `read` method along with `seek`, but that is unnecessary in Python since you have high-level constructs like file iteration and powerful string methods to help you.
This is what I would do: ``` for line in f: if ' ' in line: sp = line.split(' ', 2) line = '%s %s ;%s' % (sp[0], sp[1], sp[2]) ```
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern:

```
with open("testDTC.txt") as f:
    for line in f:
        if line.strip() and not line.startswith(":"):
            spl = line.split(None,2)
            print("{} ;{}".format(" ".join(spl[:2]),spl[2]))
```

To write the changes to the original file you can use `fileinput.input` with `inplace=True`:

```
from fileinput import input

for line in input("testDTC.txt",inplace=True):
    if line.strip() and not line.startswith(":"):
        spl = line.split(None,2)
        print("{} ;{}".format(" ".join(spl[:2]),spl[2]),end="")
    else:
        print(line,end="")
```

Instead of indexing we can unpack:

```
a, b, c = line.split(None,2)
print("{} {} ;{}".format(a, b, c),end="")
```

Output:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

For Python 2 you can remove the `end=""` and use a comma after the print statement instead, i.e. `print(line),`

We avoid the starting paragraph lines with `line.startswith(":")` and the empty lines with `if line.strip()`.
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on.

```
with open('test.txt') as infile:
    with open('out.txt', 'w') as outfile:
        for line in infile:
            if not line.strip() or line.startswith(':'):  # Blank or : line
                outfile.write(line)                       # pass it through
            else:
                line_parts = line.split(None, 2)  # split at most twice
                try:  # try adding the semicolon after the 2nd space
                    line_parts[2] = ';' + line_parts[2]
                except IndexError:
                    pass
                outfile.write(' '.join(line_parts))
```

If you actually want to read characters in a file one at a time, you end up using the `read` method along with `seek`, but that is unnecessary in Python since you have high-level constructs like file iteration and powerful string methods to help you.
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You simply need to split twice on whitespace and join the string, you don't need a regex for a simple repeating pattern:

```
with open("testDTC.txt") as f:
    for line in f:
        if line.strip() and not line.startswith(":"):
            spl = line.split(None,2)
            print("{} ;{}".format(" ".join(spl[:2]),spl[2]))
```

To write the changes to the original file you can use `fileinput.input` with `inplace=True`:

```
from fileinput import input

for line in input("testDTC.txt",inplace=True):
    if line.strip() and not line.startswith(":"):
        spl = line.split(None,2)
        print("{} ;{}".format(" ".join(spl[:2]),spl[2]),end="")
    else:
        print(line,end="")
```

Instead of indexing we can unpack:

```
a, b, c = line.split(None,2)
print("{} {} ;{}".format(a, b, c),end="")
```

Output:

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

For Python 2 you can remove the `end=""` and use a comma after the print statement instead, i.e. `print(line),`

We avoid the starting paragraph lines with `line.startswith(":")` and the empty lines with `if line.strip()`.
Since you only have 1000 lines or so, I think you can get away with reading it all at once with readlines() and processing the lines yourself. If a line has only one element, print it, then run an inner loop over the following lines with more than one element, replacing the third element ([2]) with the concatenation of the semicolon and the element. Then you have to do something to output the line nicely (here with join, but there are lots of other solutions for that) depending on what you want to do with it.

```
with open('testDTC.txt') as fp:
    lines = fp.readlines()

i = 0
while i < len(lines):
    if len(lines[i].split()) == 1:
        print lines[i][:-1]
        i += 1
        while i < len(lines) and len(lines[i].split()) > 0:
            spl = lines[i].split()
            spl[2] = ";" + spl[2]
            print " ".join(spl)
            i += 1
        print
    else:
        i += 1
```
28,708,752
I apologize in advance for my ignorance of how Python handles strings. I have a .txt file that is at least 1000 lines long. It looks something like below

```
:dodge
1 6 some description string of unknown length
E7 8 another description string
3445 0 oil temp something description voltage over limit etc

:ford
AF 4 description of stuff
0 8 string descritiopn
```

What I want to do is basically put a ";" before each string so what I will end up with is as follows

```
:dodge
1 6 ;some description string of unknown length
E7 8 ;another description string
3445 0 ;oil temp something description voltage over limit etc

:ford
AF 4 ;description of stuff
0 8 ;string descritiopn
```

My idea is to open the file, search for the ":" character, go to the next line, go to the " " character, go to the next " " character and write a ";". Another thought was to go to the "\n" character in the text file and, if the next character != ":", look for the second space.

```
import sys
import fileinput

with open("testDTC.txt", "r+") as f:
    for line in f:
        if ' ' in line: #read first space
            if ' ' in line: #read second space
                line.append(';')
                f.write(line)
f.close()
```

I know it's not close to getting what I need, but it's been a really long time since I did string manipulation in Python.
2015/02/25
[ "https://Stackoverflow.com/questions/28708752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2884999/" ]
You can do this with a pretty simple algorithm without invoking regular expressions so you can see what's going on.

```
with open('test.txt') as infile:
    with open('out.txt', 'w') as outfile:
        for line in infile:
            if not line.strip() or line.startswith(':'):  # Blank or : line
                outfile.write(line)                       # pass it through
            else:
                line_parts = line.split(None, 2)  # split at most twice
                try:  # try adding the semicolon after the 2nd space
                    line_parts[2] = ';' + line_parts[2]
                except IndexError:
                    pass
                outfile.write(' '.join(line_parts))
```

If you actually want to read characters in a file one at a time, you end up using the `read` method along with `seek`, but that is unnecessary in Python since you have high-level constructs like file iteration and powerful string methods to help you.
Since you only have 1000 lines or so, I think you can get away with reading it all at once with readlines() and processing the lines yourself. If a line has only one element, print it, then run an inner loop over the following lines with more than one element, replacing the third element ([2]) with the concatenation of the semicolon and the element. Then you have to do something to output the line nicely (here with join, but there are lots of other solutions for that) depending on what you want to do with it.

```
with open('testDTC.txt') as fp:
    lines = fp.readlines()

i = 0
while i < len(lines):
    if len(lines[i].split()) == 1:
        print lines[i][:-1]
        i += 1
        while i < len(lines) and len(lines[i].split()) > 0:
            spl = lines[i].split()
            spl[2] = ";" + spl[2]
            print " ".join(spl)
            i += 1
        print
    else:
        i += 1
```
42,409,365
I am trying to check a website for specific .js files and image files as part of a regular configuration management check. I am using Python and Selenium. My code is:

```
#!/usr/bin/env python

#import modules required for the test to run
import time
from pyvirtualdisplay import Display
from selenium import webdriver
from selenium.webdriver.common.by import By

#Start headless browser
web = Display(visible=0, size=(1024, 768))
web.start()
browser = webdriver.PhantomJS()
browser.set_window_size(1024,768)

#Navigate to the current URL
browser.get("https://XXXXXXXX")
time.sleep(2)

page = browser.find_elements(By.TAG_NAME, 'script')

for i in page:
    print(i)
for j in page:
    print(j.text)

browser.quit()
web.stop()
```

The array returned contains entries like

```
selenium.webdriver.remote.webelement.WebElement (session="238c4f20-f995-11e6-9445-570b2cf065ee", element=":wdc:1487832970059")>
```

which I get when I try to print the array entries. I assume these are the files referenced with the script tags that I have found. I cannot access them in any way to check if the file name or path is correct.

Any advice on how to do this?

Thanks

Rudi
2017/02/23
[ "https://Stackoverflow.com/questions/42409365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7609361/" ]
You need to use

```
for i in page:
    print(i.get_attribute('src'))
```

This should print the `JavaScript` file name, like `https://www.google-analytics.com/analytics.js`

Also, you should note that some `<script>` tags could contain just inline `JavaScript` code rather than a reference to a remote file. If you want to get this code you need `i.get_attribute('textContent')`

**Update**

If you want to get scripts from an `iframe` also, try:

```
for frame in browser.find_elements_by_tag_name('iframe'):
    browser.switch_to.frame(frame)
    for i in browser.find_elements(By.TAG_NAME, 'script'):
        print(i.get_attribute('src'))
    browser.switch_to.default_content()
```
As you are using PhantomJS, why not use its scripts to capture this data? You can use `netlog.js` to capture all network data loaded for a given page in HAR format. Later, use a Python HAR parser to list all your .js or image files.

Command line:

```
phantomjs --cookies-file=/tmp/foo netlog.js https://google.com
```

[netlog.js](https://github.com/ariya/phantomjs/blob/master/examples/netlog.js)

[Har Parser for Python](https://pypi.python.org/pypi/haralyzer/1.4.10)
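If you'd rather not depend on a parser library, a small sketch that walks the HAR JSON directly (this assumes the capture has been saved to a `netlog.har` file; the `log.entries[].request.url` layout is part of the standard HAR format):

```python
import json

with open('netlog.har') as f:
    har = json.load(f)

# Print only the requested URLs that look like scripts or images.
for entry in har['log']['entries']:
    url = entry['request']['url']
    if url.endswith(('.js', '.png', '.jpg', '.gif')):
        print(url)
```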
62,772,454
If given a year-week range, e.g. `start_year, start_week = (2019, 45)` and `end_year, end_week = (2020, 15)`, how can I check in Python whether a year-week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
Assuming all Year-Week pairs are well-formed (so there's no such thing as `(2019, 74)`), you can just check with:

```
start_year_week = (2019, 45)
end_year_week = (2020, 15)

under_test_year_week = (2020, 5)

in_range = start_year_week <= under_test_year_week < end_year_week  # True
```

Python does tuple comparison by first comparing the first element, and if the first elements are equal it compares the second, and so on. And that is exactly what you want even without treating it as actual dates/weeks :D

(using `<` or `<=` based on whether you want `(2019, 45)` or `(2020, 15)` to be included or not.)
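A couple of throwaway checks to illustrate the ordering (the test values here are my own examples):

```python
start_year_week = (2019, 45)
end_year_week = (2020, 15)

# first elements differ, so only the years decide
print(start_year_week <= (2020, 5) < end_year_week)   # True
# equal years fall through to comparing the weeks
print(start_year_week <= (2019, 44) < end_year_week)  # False, week 44 is too early
print(start_year_week <= (2020, 15) < end_year_week)  # False, the end is exclusive here
```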
You can parse year and week to a `datetime` object. If you do the same with your test-year /-week, you can use comparison operators to see if it falls within the range. ``` from datetime import datetime start_year, start_week = (2019, 45) end_year, end_week = (2020, 15) # start date, beginning of week date0 = datetime.strptime(f"{start_year} {start_week} 0", "%Y %W %w") # end date, end of week date1 = datetime.strptime(f"{end_year} {end_week} 6", "%Y %W %w") testyear, testweek = (2020, 5) testdate = datetime.strptime(f"{testyear} {testweek} 0", "%Y %W %w") date0 <= testdate < date1 # True ```
62,772,454
If given a year-week range, e.g. `start_year, start_week = (2019, 45)` and `end_year, end_week = (2020, 15)`, how can I check in Python whether a year-week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
Assuming all Year-Week pairs are well-formed (so there's no such thing as `(2019, 74)`), you can just check with:

```
start_year_week = (2019, 45)
end_year_week = (2020, 15)

under_test_year_week = (2020, 5)

in_range = start_year_week <= under_test_year_week < end_year_week  # True
```

Python does tuple comparison by first comparing the first element, and if the first elements are equal it compares the second, and so on. And that is exactly what you want even without treating it as actual dates/weeks :D

(using `<` or `<=` based on whether you want `(2019, 45)` or `(2020, 15)` to be included or not.)
```
start = (2019, 45)
end = (2020, 15)

def isin(q):
    if end[0] > q[0] > start[0]:
        return True
    elif end[0] == q[0]:
        if end[1] >= q[1]:
            return True
        else:
            return False
    elif q[0] == start[0]:
        if q[1] >= start[1]:
            return True
        else:
            return False
    else:
        return False
```

Try this:

```
print(isin((2020, 5)))
>>>>> True
```
62,772,454
If given a year-week range, e.g. `start_year, start_week = (2019, 45)` and `end_year, end_week = (2020, 15)`, how can I check in Python whether a year-week of interest is within the above range or not? For example, for Year = 2020 and Week = 5, I should get a 'True'.
2020/07/07
[ "https://Stackoverflow.com/questions/62772454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6870708/" ]
You can parse year and week to a `datetime` object. If you do the same with your test-year /-week, you can use comparison operators to see if it falls within the range. ``` from datetime import datetime start_year, start_week = (2019, 45) end_year, end_week = (2020, 15) # start date, beginning of week date0 = datetime.strptime(f"{start_year} {start_week} 0", "%Y %W %w") # end date, end of week date1 = datetime.strptime(f"{end_year} {end_week} 6", "%Y %W %w") testyear, testweek = (2020, 5) testdate = datetime.strptime(f"{testyear} {testweek} 0", "%Y %W %w") date0 <= testdate < date1 # True ```
```
start = (2019, 45)
end = (2020, 15)

def isin(q):
    if end[0] > q[0] > start[0]:
        return True
    elif end[0] == q[0]:
        if end[1] >= q[1]:
            return True
        else:
            return False
    elif q[0] == start[0]:
        if q[1] >= start[1]:
            return True
        else:
            return False
    else:
        return False
```

Try this:

```
print(isin((2020, 5)))
>>>>> True
```
45,403,597
I am trying to deploy my app to a uWSGI server.

My settings file:

```
STATIC_ROOT = "/home/root/djangoApp/staticRoot/"
STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
    '/home/root/djangoApp/static/',
]
```

and url file:

```
urlpatterns = [
    #urls
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```

If I try to execute the command

> 
> python manage.py collectstatic
> 
> 

then some files are okay (admin files), but I see an error next to the files from the static folder. The error is like:

> 
> Found another file with the destination path 'js/bootstrap.min.js'. It
> will be ignored since only the first encountered file is collected. If
> this is not what you want, make sure every static file has a unique
> path.
> 
> 

and I have no idea what I can do to solve it.

Thanks in advance,
2017/07/30
[ "https://Stackoverflow.com/questions/45403597", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375888/" ]
The two paths you have in STATICFILES\_DIRS are the same. So Django copies the files from one of them, then goes on to the second and tries to copy them again, only to see the files already exist. Remove one of those entries, preferably the second.
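For instance, a sketch of the corrected settings (it assumes `BASE_DIR` resolves to `/home/root/djangoApp`, which the question's paths suggest):

```python
import os

STATIC_ROOT = "/home/root/djangoApp/staticRoot/"
STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),  # single entry; the literal path was a duplicate of this
]
```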
Do you have more than one application? If so, you should put every file in a subdirectory with a unique name (the app name, for example). collectstatic collects files from all the /static/ subdirectories, and if there is a duplication, it throws this error.
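A typical namespaced layout looks like this (directory names are illustrative):

```
myapp/
    static/
        myapp/              <- unique prefix, so collected paths never collide
            js/bootstrap.min.js
otherapp/
    static/
        otherapp/
            js/bootstrap.min.js
```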
72,664,087
I'm using python3 tkinter to build a small GUI on Linux Centos I have my environment set up with all the dependencies installed (cython, numpy, panda, etc) When I go to install tkinter ``` pip3 install tk $ python3 Python 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tkinter as tk >>> No module found: tkinter ``` I get the above error despite 'pip list' displaying the 'tk' dependency, python still throws the error. The dependency correctly shows up in "site-packages" as well. But when I use yum to install tkinter ``` sudo yum install python3-tkinter ``` and do the same thing ``` python3 Python 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tkinter as tk >>> tkinter._test() ``` it works perfectly fine. The issue is that if I want to package all the dependencies together and share it, the working version of tkinter won't be in the package and other users will be confused when they build the project Why is 'pip install tk' not being recognized as a valid installation of tkinter but 'sudo yum install python3-tkinter' works? All the other dependencies work with pip, it's just tkinter that is broken. How can I make python recognize the pip installation?
2022/06/17
[ "https://Stackoverflow.com/questions/72664087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11760778/" ]
> > Why is 'pip install tk' not being recognized as a valid installation of tkinter but 'sudo yum install python3-tkinter' works? > > > Because `pip install tk` installs an old package called tensorkit, not tkinter. You can't install tkinter with pip.
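If you want to sanity-check which tkinter you end up importing after the yum install, a quick diagnostic snippet (just a check, not part of any package):

```python
import tkinter

print(tkinter.__file__)    # should point into the system Python's lib directory
print(tkinter.TkVersion)   # the Tcl/Tk version the module was built against
```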
So I don't know if CentOS uses apt, but you can try first uninstalling tkinter with pip and then using apt to install it:

```
sudo apt-get install python3-tk
```
73,584,455
I am trying to create a diverging dot plot with Python and I am using seaborn relplot to do the small multiples with one of the columns. The data source is MakeoverMonday 2018w48: [MOM2018w48](https://data.world/makeovermonday/2018w48)

I got this far with this code:

```
sns.set_style("whitegrid")
g = sns.relplot(x=cost, y=city, col=item, s=120, size=cost, hue=cost, col_wrap=2)
sns.despine(left=True, bottom=True)
```

which generates this:

[![relplot dot plot](https://i.stack.imgur.com/vmojM.png)](https://i.stack.imgur.com/vmojM.png)

So far, so good. Now, I want only horizontal gridlines, sort it and get rid of the column-name prefix ('item =') in the small multiple charts. Any ideas?

This is what I am trying to recreate:

[![enter image description here](https://i.stack.imgur.com/CP5iE.png)](https://i.stack.imgur.com/CP5iE.png)
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
Like most query interfaces, the `Query()` function can only execute one SQL statement at a time. MySQL's prepared statements don't work with multi-query. You could solve this by executing the `SET` statement in one call, then the `SELECT` in a second call. But you'd have to take care to ensure they are executed on the same database connection, or else the connection pool is likely to run them on different connections. So you'd need to do something like: ``` conn, err := d.Conn(context.TODO()) conn.QueryContext(context.TODO(), "SET ...") conn.QueryContext(context.TODO(), "SELECT ...") ``` Alternatively, change the way you prepare the ORDER BY so you don't need user-defined variables. The way I'd do it is to build the ORDER BY statement in Go code instead of in SQL, using a string map to ensure a valid column and direction is used. If the input is not in the map, then set a default order to the primary key. ``` validOrders := map[string]string{ "type,asc": "type ASC", "type,desc": "type DESC", "visible,asc": "visible ASC", "visible,desc": "visible DESC", "create_date,asc": "create_date ASC", "create_date,desc": "create_date DESC", "update_date,asc": "update_date ASC", "update_date,desc": "update_date DESC", } orderBy, ok := validOrders[srt] if !ok { orderBy = "id ASC" } query := fmt.Sprintf(` SELECT ... WHERE user_id = ? ORDER BY %s LIMIT ?, ? `, orderBy) ``` This is safe with respect to SQL injection, because the function input is not interpolated into the query. It's the value from my map that is interpolated into the query, and the value is under my control. If someone tries to input some malicious value, it won't match any key in my map, so it'll just use the default sort order.
Unless drivers implement a special interface, the query is prepared on the server first before execution. Bindvars are therefore database specific: * MySQL: uses the ? variant shown above * PostgreSQL: uses an enumerated $1, $2, etc bindvar syntax * SQLite: accepts both ? and $1 syntax * Oracle: uses a :name syntax * MsSQL: @ (as you use) I guess that's why you can't do what you want with query().
73,584,455
I am trying to create a diverging dot plot with Python and I am using seaborn relplot to do the small multiples with one of the columns. The data source is MakeoverMonday 2018w48: [MOM2018w48](https://data.world/makeovermonday/2018w48)

I got this far with this code:

```
sns.set_style("whitegrid")
g = sns.relplot(x=cost, y=city, col=item, s=120, size=cost, hue=cost, col_wrap=2)
sns.despine(left=True, bottom=True)
```

which generates this:

[![relplot dot plot](https://i.stack.imgur.com/vmojM.png)](https://i.stack.imgur.com/vmojM.png)

So far, so good. Now, I want only horizontal gridlines, sort it and get rid of the column-name prefix ('item =') in the small multiple charts. Any ideas?

This is what I am trying to recreate:

[![enter image description here](https://i.stack.imgur.com/CP5iE.png)](https://i.stack.imgur.com/CP5iE.png)
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
For those interested, I've solved my issue with a few updates.

1. There are settings on the DSN when connecting: `?...&multiStatements=true&interpolateParams=true`
2. After adding the above I started getting a new error regarding the collation (`Illegal mix of collations (utf8mb4_0900_ai_ci,IMPLICIT) and (utf8mb4_general_ci,IMPLICIT) for operation '='`). I went through and converted the DB and the tables to `utf8mb4_general_ci` and everything is working as expected.

Thank you to those that provided their solutions, but this is the route we wound up taking.
Like most query interfaces, the `Query()` function can only execute one SQL statement at a time. MySQL's prepared statements don't work with multi-query. You could solve this by executing the `SET` statement in one call, then the `SELECT` in a second call. But you'd have to take care to ensure they are executed on the same database connection, or else the connection pool is likely to run them on different connections. So you'd need to do something like: ``` conn, err := d.Conn(context.TODO()) conn.QueryContext(context.TODO(), "SET ...") conn.QueryContext(context.TODO(), "SELECT ...") ``` Alternatively, change the way you prepare the ORDER BY so you don't need user-defined variables. The way I'd do it is to build the ORDER BY statement in Go code instead of in SQL, using a string map to ensure a valid column and direction is used. If the input is not in the map, then set a default order to the primary key. ``` validOrders := map[string]string{ "type,asc": "type ASC", "type,desc": "type DESC", "visible,asc": "visible ASC", "visible,desc": "visible DESC", "create_date,asc": "create_date ASC", "create_date,desc": "create_date DESC", "update_date,asc": "update_date ASC", "update_date,desc": "update_date DESC", } orderBy, ok := validOrders[srt] if !ok { orderBy = "id ASC" } query := fmt.Sprintf(` SELECT ... WHERE user_id = ? ORDER BY %s LIMIT ?, ? `, orderBy) ``` This is safe with respect to SQL injection, because the function input is not interpolated into the query. It's the value from my map that is interpolated into the query, and the value is under my control. If someone tries to input some malicious value, it won't match any key in my map, so it'll just use the default sort order.
73,584,455
I am trying to create a diverging dot plot with Python and I am using seaborn relplot to do the small multiples with one of the columns. The data source is MakeoverMonday 2018w48: [MOM2018w48](https://data.world/makeovermonday/2018w48)

I got this far with this code:

```
sns.set_style("whitegrid")
g = sns.relplot(x=cost, y=city, col=item, s=120, size=cost, hue=cost, col_wrap=2)
sns.despine(left=True, bottom=True)
```

which generates this:

[![relplot dot plot](https://i.stack.imgur.com/vmojM.png)](https://i.stack.imgur.com/vmojM.png)

So far, so good. Now, I want only horizontal gridlines, sort it and get rid of the column-name prefix ('item =') in the small multiple charts. Any ideas?

This is what I am trying to recreate:

[![enter image description here](https://i.stack.imgur.com/CP5iE.png)](https://i.stack.imgur.com/CP5iE.png)
2022/09/02
[ "https://Stackoverflow.com/questions/73584455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3433875/" ]
For those interested, I've solved my issue with a few updates.

1. There are settings on the DSN when connecting: `?...&multiStatements=true&interpolateParams=true`
2. After adding the above I started getting a new error regarding the collation (`Illegal mix of collations (utf8mb4_0900_ai_ci,IMPLICIT) and (utf8mb4_general_ci,IMPLICIT) for operation '='`). I went through and converted the DB and the tables to `utf8mb4_general_ci` and everything is working as expected.

Thank you to those that provided their solutions, but this is the route we wound up taking.
Unless drivers implement a special interface, the query is prepared on the server first before execution. Bindvars are therefore database specific: * MySQL: uses the ? variant shown above * PostgreSQL: uses an enumerated $1, $2, etc bindvar syntax * SQLite: accepts both ? and $1 syntax * Oracle: uses a :name syntax * MsSQL: @ (as you use) I guess that's why you can't do what you want with query().
40,322,718
I'm new to getting data using API and Python. I want to pull data from my trading platform. They've provided the following instructions: <http://www.questrade.com/api/documentation/getting-started> I'm ok up to step 4 and have an access token. I need help with step 5. How do I translate this request: ``` GET /v1/accounts HTTP/1.1 Host: https://api01.iq.questrade.com Authorization: Bearer C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp ``` into Python code? I've tried ``` import requests r = requests.get('https://api01.iq.questrade.com/v1/accounts', headers={'Authorization': 'access_token myToken'}) ``` I tried that after reading this: [python request with authentication (access\_token)](https://stackoverflow.com/questions/13825278/python-request-with-authentication-access-token) Any help would be appreciated. Thanks.
2016/10/29
[ "https://Stackoverflow.com/questions/40322718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4838024/" ]
As you point out, after step 4 you should have received an access token as follows: ``` { “access_token”: ”C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp”, “token_type”: ”Bearer”, “expires_in”: 300, “refresh_token”: ”aSBe7wAAdx88QTbwut0tiu3SYic3ox8F”, “api_server”: ”https://api01.iq.questrade.com” } ``` To make subsequent API calls, you will need to construct your URI as follows: ``` uri = [api_server]/v1/[rest_operation] e.g. uri = "https://api01.iq.questrade.com/v1/time" Note: Make sure you use the same [api_server] that you received in your json object from step 4, otherwise your calls will not work with the given access_token ``` Next, construct your headers as follows: ``` headers = {'Authorization': [token_type] + ' ' + [access_token]} e.g. headers = {'Authorization': 'Bearer C3lTUKuNQrAAmSD/TPjuV/HI7aNrAwDp'} ``` Finally, make your requests call as follows ``` r = requests.get(uri, headers=headers) response = r.json() ``` Hope this helps! Note: You can find a Questrade API Python wrapper on GitHub which handles all of the above for you. <https://github.com/pcinat/QuestradeAPI_PythonWrapper>
Improving a bit on Peter's reply (thank you, Peter!), start by using the token you got from the QT website to obtain an access_token and get an api_server assigned to handle your requests.

```
# replace XXXXXXXX with the token given to you in your questrade account
import requests
r = requests.get('https://login.questrade.com/oauth2/token?grant_type=refresh_token&refresh_token=XXXXXXXX')

access_token = str(r.json()['access_token'])
refresh_token = str(r.json()['refresh_token'])  # you will need this refresh_token to obtain another access_token when it expires
api_server = str(r.json()['api_server'])
token_type = str(r.json()['token_type'])
expires_in = str(r.json()['expires_in'])

# uri = api_server+'v1/'+[action] - let's try checking the server's time:
uri = api_server + 'v1/' + 'time'
headers = {'Authorization': token_type + ' ' + access_token}

# headers will look sth like {'Authorization': 'Bearer ix7rAhcXx83judEVUa8egpK2JqhPD2_z0'}
# uri will look sth like 'https://api05.iq.questrade.com/v1/time'

# you can test now with
r = requests.get(uri, headers=headers)
response = r.json()
print(response)
```
51,750,967
[![enter image description here](https://i.stack.imgur.com/qpDFX.jpg)](https://i.stack.imgur.com/qpDFX.jpg)I'm trying to control a relay board (USB RLY08) using a section of python code I found online (<https://github.com/jkesanen/usbrly08/blob/master/usbrly08.py>). It is currently returning an error which I'm not sure about.[![enter image description here](https://i.stack.imgur.com/6N1Ir.jpg)](https://i.stack.imgur.com/6N1Ir.jpg) Does anyone have any ideas? I'm not actually needing all the code and instead just wanting to turn a single relay on and off. Thanks.
2018/08/08
[ "https://Stackoverflow.com/questions/51750967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153219/" ]
You are probably getting this error because the **pyserial** module is not installed on your system. Try installing the pyserial package from the PyPI index using the command below:

```
python -m pip install pyserial
```
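After installing, a quick import check confirms the module is available (a minimal sketch; no board needs to be connected for this):

```
import serial  # the pyserial package installs as the `serial` module

print(serial.VERSION)  # e.g. 3.5
```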
You need to install pyserial, e.g. with

```
pip install pyserial
```
51,750,967
[![enter image description here](https://i.stack.imgur.com/qpDFX.jpg)](https://i.stack.imgur.com/qpDFX.jpg)I'm trying to control a relay board (USB RLY08) using a section of python code I found online (<https://github.com/jkesanen/usbrly08/blob/master/usbrly08.py>). It is currently returning an error which I'm not sure about.[![enter image description here](https://i.stack.imgur.com/6N1Ir.jpg)](https://i.stack.imgur.com/6N1Ir.jpg) Does anyone have any ideas? I'm not actually needing all the code and instead just wanting to turn a single relay on and off. Thanks.
2018/08/08
[ "https://Stackoverflow.com/questions/51750967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153219/" ]
You are probably getting this error because the **pyserial** module is not installed on your system. Try installing the pyserial package from the PyPI index using the command below:

```
python -m pip install pyserial
```
You need to [install pyserial](https://pythonhosted.org/pyserial/pyserial.html#installation).
30,316,639
I am looking for a way to calculate a square root with an arbitrary precision (something like 50 digits after the dot). In python, it is easily accessible with [Decimal](https://docs.python.org/2/library/decimal.html):

```
from decimal import *
getcontext().prec = 50
Decimal(2).sqrt() # and here you go my 50 digits
```

After seeing the power of `math/big` I skimmed through the [documentation](https://golang.org/pkg/math/big/#Rat) but have not found anything similar. So is my only option to write some sort of [numerical computing method](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots) which will iteratively try to compute the answer?
2015/05/19
[ "https://Stackoverflow.com/questions/30316639", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1090562/" ]
This is my own implementation of square root calculation. While waiting for answers, I decided to give [methods of computing square roots](http://en.wikipedia.org/wiki/Methods_of_computing_square_roots) a try. It has a whole bunch of methods but at the very end I found a link to a [Square roots by subtraction](http://www.afjarvis.staff.shef.ac.uk/maths/jarvisspec02.pdf) pdf, which I really liked because the description of the algorithm is only a couple of lines (and I have not seen it before, in comparison to Newton's method). So here is my implementation (bigint is not really nice to work with in go):

```
import (
    "math"
    "math/big"
    "strconv"
)

func square(n int64, precision int64) string {
    ans_int := strconv.Itoa(int(math.Sqrt(float64(n))))

    limit := new(big.Int).Exp(big.NewInt(10), big.NewInt(precision+1), nil)
    a := big.NewInt(5 * n)
    b := big.NewInt(5)

    five := big.NewInt(5)
    ten := big.NewInt(10)
    hundred := big.NewInt(100)

    for b.Cmp(limit) < 0 {
        if a.Cmp(b) < 0 {
            a.Mul(a, hundred)
            tmp := new(big.Int).Div(b, ten)
            tmp.Mul(tmp, hundred)
            b.Add(tmp, five)
        } else {
            a.Sub(a, b)
            b.Add(b, ten)
        }
    }
    b.Div(b, hundred)

    ans_dec := b.String()
    return ans_dec[:len(ans_int)] + "." + ans_dec[len(ans_int):]
}
```

**P.S.** thank you Nick Craig-Wood for making the code better with your amazing comment. And using it, one can find that `square(8537341, 50)` is:

> 2921.8728582879851242173838229735693053765773170487

which differs from python's result only in the last digit:

```
getcontext().prec = 50
print str(Decimal(8537341).sqrt())
```

> 2921.8728582879851242173838229735693053765773170488

This one digit is off because the last digit is not really precise. As always, [Go Playground](http://play.golang.org/p/u1CoB4cwXy).

**P.S.** if someone would find a native way to do this, I would gladly give my accept and upvote.
Adding precision
----------------

There is probably a solution in go but as I don't code in go, here is a general solution, for instance if your selected language doesn't provide a way to handle the precision of floats (this has already happened to me):

If your language provides you N digits after the dot, you can, in the case of the square root, multiply the input, here `2`, by `10^(2*number_of_extra_digits)`.

For instance if Go gave you only `1.41` as an answer but you wanted `1.4142`, then you ask it for the square root of `2*10^(2*2) = 2*10000` instead and you get `141.42` as an answer. Now I leave it up to you to rectify the placement of the dot.

**Explanation:**

There is some math magic behind it. If you wanted to add some precision to a simple division, you would just need to multiply the input by `10^number_of_extra_digits`.

The trick is to multiply the input to get more precision, as we can't multiply the output (the loss of precision has already happened). It works because most languages cut more decimals after the dot than before it. So we just need to change the *output equation* to the *input equation* (when possible):

For simple division: `(a/b) * 10 = (a*10)/b`

For square root: `sqrt(a) * 10 = sqrt(a) * sqrt(100) = sqrt(a*100)`

Reducing precision
------------------

Some similar tinkering can also help to reduce the precision if needed. For instance if you were trying to calculate the progression of a download in percent, with two digits after the dot.

Let's say we downloaded 1 file out of 3, then `1/3 * 100` would give us `33.33333333`. If there is no way to control the precision of this float, then you could do `cast_to_an_int(1/3 * 100 * 100) / 100` to return `33.33`.
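To make the square-root case concrete, here is a minimal Python sketch using only integer math (assuming `n >= 1` and `digits >= 1`; the same idea ports directly to Go's `math/big`):

```
import math

def sqrt_with_digits(n, digits):
    # Scale the input by 10^(2*digits) so the integer square root
    # carries `digits` extra decimal places.
    scaled = n * 10 ** (2 * digits)
    root = math.isqrt(scaled)  # exact integer square root (Python 3.8+)
    s = str(root)
    # Rectify the placement of the dot: the last `digits` chars are decimals.
    return s[:-digits] + "." + s[-digits:]

print(sqrt_with_digits(2, 4))  # 1.4142
```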
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
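A minimal sketch of that version, reusing the `table` from the question:

```
table = {'Person A': 'Age A', 'Person B': 'Age B', 'Person C': 'Age C'}

def kw(kwargs):
    # plain positional parameter holding a dict -- no unpacking involved
    for i, j in kwargs.items():
        print(i, 'is ', j)

kw(table)  # the dict is passed as-is
```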
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
Writing a separate answer because I do not have enough reputation to comment. There is another error in the original post, this time in the function definition. Once you have "opened" a dict with the \*\* operator in the arguments, the dict does not exist any more inside the function. So in the function:

```
def kw(**kwargs):
    for i,j in kwargs.items():
        print(i,'is ',j)
```

the local variables will be Bob, Franny and Ribbit, with their respective values.
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
Call the `kw` function with `kw(**table)`. Python 3 Doc: [link](https://docs.python.org/3.2/glossary.html)
Put the \*\* before the table, like this:

```
table = {'Bob':'Old','Franny':'Less Old, Still a little old though','Ribbit':'Only slightly old'}

def kw(**kwargs):
    for i,j in kwargs.items():
        print(i,'is ',j)

kw(**table)
```
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
Writing a separate answer because I do not have enough reputation to comment. There is another error in the original post, this time in the function definition. Once you have "opened" a dict with the \*\* operator in the arguments, the dict does not exist any more inside the function. So in the function:

```
def kw(**kwargs):
    for i,j in kwargs.items():
        print(i,'is ',j)
```

the local variables will be Bob, Franny and Ribbit, with their respective values.
51,395,535
I'm trying to get my head around \*\*kwargs in python 3 and am running into a strange error. Based on [this post](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python) on the matter, I tried to create my own version to confirm it worked for me. ``` table = {'Person A':'Age A','Person B':'Age B','Person C':'Age C'} def kw(**kwargs): for i,j in kwargs.items(): print(i,'is ',j) kw(table) ``` The strange thing is that I keep getting back `TypeError: kw() takes 0 positional arguments but 1 was given`. I have no idea why and can see no appreciable difference between my code and the code in the example at the provided link. Can someone help me determine what is causing this error?
2018/07/18
[ "https://Stackoverflow.com/questions/51395535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8032508/" ]
There's no need to make `kwargs` a variable keyword argument here. By specifying `kwargs` with `**` you are defining the function with a variable number of keyword arguments but no positional argument, hence the error you're seeing. Instead, simply define your `kw` function with: ``` def kw(kwargs): ```
Put the \*\* before the table, like this:

```
table = {'Bob':'Old','Franny':'Less Old, Still a little old though','Ribbit':'Only slightly old'}

def kw(**kwargs):
    for i,j in kwargs.items():
        print(i,'is ',j)

kw(**table)
```
66,204,201
I'm trying to install pymatgen in Google colab via the following command: ``` !pip install pymatgen ``` This throws the following error: ``` Collecting pymatgen Using cached https://files.pythonhosted.org/packages/06/4f/9dc98ea1309012eafe518e32e91d2a55686341f3f4c1cdc19f1f64cb33d0/pymatgen-2021.2.14.tar.gz Installing build dependencies ... error ERROR: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-9j4h3p2n/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Trying to install with following: ``` !pip install -vvv pymatgen ``` This throws the following error: ``` pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-g1m0e202/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Please help solve this issue.
2021/02/15
[ "https://Stackoverflow.com/questions/66204201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15211978/" ]
You will need to validate within the addNewUser method, and then `throw` an exception when the validation fails. Example:

```java
if(username.length() > 10) {
    throw new Exception("Username is too long");
}
```

It will then be caught by your try-catch statement.
There are a few things to consider here. With a try-catch block you can manage exceptions that occur in your program flow. When writing a program it's a good idea to make it as clear as possible so that other people reading it later can understand it better. To that end, consider refactoring the methods. For example:

```
public void addNewUser(String username, String password) throws exceptions.InvalidUsernameException,
        exceptions.InvalidPasswordException, exceptions.DuplicateUserException {

    // Check if the username has a correct format
    if (EligibilityCheck.getInstance().checkingUserName(username)) {
        throw new InvalidUsernameException();
    }

    // Check if the password has a correct format
    if (EligibilityCheck.getInstance().checkingPassword(password)) {
        throw new InvalidPasswordException();
    }

    // Check if username is already being used
    if (loginVault.containsKey(username)) {
        throw new DuplicateUserException();
    }

    //If not, success
    loginVault.put(username, CaesarCipher.getInstance().encrypt(password));
    userLogin.put(username, null);
}
```

This goes through all the checks in sequence just like the if/else branches did but it's clearer to read.

The `addOneUser` method doesn't do anything (apparently) meaningful when an exception is raised. Consider using the exception handling to send meaningful messages to the user as appropriate.

You mention the program is not printing the output you expect it to. Look into using a testing framework for your test cases, such as junit, so that you can make assertions, look at code coverage, etc.

For a simple example in plain English, the first check is on the user name for the correct format. From the code it's not evident what that might be. If for instance a "valid username" is one with only lowercase a-z characters, then you could make the following test cases:

> When a username with anything other than a-z (such as A-Z, 0-9, special characters) is provided, an invalid user exception will be thrown

> When a username with only a-z characters is provided, no user exception will be thrown

You can then write these tests and use assertions as appropriate. Consider also using a static analysis tool like sonar to help with code quality.
66,204,201
I'm trying to install pymatgen in Google colab via the following command: ``` !pip install pymatgen ``` This throws the following error: ``` Collecting pymatgen Using cached https://files.pythonhosted.org/packages/06/4f/9dc98ea1309012eafe518e32e91d2a55686341f3f4c1cdc19f1f64cb33d0/pymatgen-2021.2.14.tar.gz Installing build dependencies ... error ERROR: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-9j4h3p2n/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Trying to install with following: ``` !pip install -vvv pymatgen ``` This throws the following error: ``` pip._internal.exceptions.InstallationError: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-g1m0e202/overlay --no-warn-script-location -v --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'numpy>=1.20.1' 'setuptools>=43.0.0' Check the logs for full command output. ``` Please help solve this issue.
2021/02/15
[ "https://Stackoverflow.com/questions/66204201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15211978/" ]
You will need to validate within the addNewUser method, and then `throw` an exception when the validation fails. Example:

```java
if(username.length() > 10) {
    throw new Exception("Username is too long");
}
```

It will then be caught by your try-catch statement.
In order to fix your problem, you should just provide exception messages.

```java
public void addNewUser(String username, String password) throws exceptions.InvalidUsernameException,
        exceptions.InvalidPasswordException, exceptions.DuplicateUserException {

    // Check if the username has a correct format
    if (EligibilityCheck.getInstance().checkingUserName(username)) {
        // HERE
        throw new InvalidUsernameException("Error: The username is invalid; enter 6-12 lower-case letters.");
    }
    ...
    ...
}
```
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return else: list.pop(0) return closure def func(listOfLists): result = [] for i, thisList in enumerate(listOfLists): result.append(thisList) map(removeFromList(thisList), listOfLists[i+1:]) return result ``` I have a list which I want to pass as argument, but I want this list to remain intact. What I tried is: ``` my_list = [[1], [1, 2], [1, 2, 3]] print my_list #[[1], [1, 2], [1, 2, 3]] copy_my_list = list (my_list) #This also fails #copy_my_list = my_list [:] print id (my_list) == id (copy_my_list) #False print func (copy_my_list) #[[1], [2], [3]] print my_list #[[1], [2], [3]] ``` But it does change my original list. Any ideas?
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
Use `copy.deepcopy`:

```
from copy import deepcopy
new_list = deepcopy([[1], [1, 2], [1, 2, 3]])
```

Demo:

```
>>> lis = [[1], [1, 2], [1, 2, 3]]
>>> new_lis = lis[:] # creates a shallow copy
>>> [id(x)==id(y) for x,y in zip(lis,new_lis)]
[True, True, True] #inner lists are still the same objects
>>> new_lis1 = deepcopy(lis) # create a deep copy
>>> [id(x)==id(y) for x,y in zip(lis,new_lis1)]
[False, False, False] #inner lists are now different objects
```
With both `list(my_list)` and `my_list[:]` you get a shallow copy of the list:

```
id(copy_my_list[0]) == id(my_list[0]) # True
```

so use `copy.deepcopy` to avoid your problem:

```
copy_my_list = copy.deepcopy(my_list)
id(copy_my_list[0]) == id(my_list[0]) # False
```
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return else: list.pop(0) return closure def func(listOfLists): result = [] for i, thisList in enumerate(listOfLists): result.append(thisList) map(removeFromList(thisList), listOfLists[i+1:]) return result ``` I have a list which I want to pass as argument, but I want this list to remain intact. What I tried is: ``` my_list = [[1], [1, 2], [1, 2, 3]] print my_list #[[1], [1, 2], [1, 2, 3]] copy_my_list = list (my_list) #This also fails #copy_my_list = my_list [:] print id (my_list) == id (copy_my_list) #False print func (copy_my_list) #[[1], [2], [3]] print my_list #[[1], [2], [3]] ``` But it does change my original list. Any ideas?
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
Use `copy.deepcopy`:

```
from copy import deepcopy
new_list = deepcopy([[1], [1, 2], [1, 2, 3]])
```

Demo:

```
>>> lis = [[1], [1, 2], [1, 2, 3]]
>>> new_lis = lis[:] # creates a shallow copy
>>> [id(x)==id(y) for x,y in zip(lis,new_lis)]
[True, True, True] #inner lists are still the same objects
>>> new_lis1 = deepcopy(lis) # create a deep copy
>>> [id(x)==id(y) for x,y in zip(lis,new_lis1)]
[False, False, False] #inner lists are now different objects
```
Use a tuple. `my_list = ([1], [1, 2], [1, 2, 3])` `my_list` is now immutable, and anytime you want a mutable copy you can just use `list(my_list)`

```
>>> my_list = ([1], [1, 2], [1, 2, 3])
>>> def mutate(aList):
    aList.pop()
    return aList

>>> mutate(list(my_list))
[[1], [1, 2]]
>>> my_list
([1], [1, 2], [1, 2, 3])
>>>
```

As someone has brought to my attention, this solution is not foolproof. The tuple itself is not mutable, but its elements are (if they are mutable objects - which lists are).
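A short demonstration of that caveat:

```
t = ([1], [1, 2])
t[0].append(99)  # allowed: mutates the inner list in place
print(t)         # ([1, 99], [1, 2])
# t[0] = [5]     # would raise TypeError: tuples don't support item assignment
```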
16,536,071
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return else: list.pop(0) return closure def func(listOfLists): result = [] for i, thisList in enumerate(listOfLists): result.append(thisList) map(removeFromList(thisList), listOfLists[i+1:]) return result ``` I have a list which I want to pass as argument, but I want this list to remain intact. What I tried is: ``` my_list = [[1], [1, 2], [1, 2, 3]] print my_list #[[1], [1, 2], [1, 2, 3]] copy_my_list = list (my_list) #This also fails #copy_my_list = my_list [:] print id (my_list) == id (copy_my_list) #False print func (copy_my_list) #[[1], [2], [3]] print my_list #[[1], [2], [3]] ``` But it does change my original list. Any ideas?
2013/05/14
[ "https://Stackoverflow.com/questions/16536071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2338725/" ]
With both `list(my_list)` and `my_list[:]` you get a shallow copy of the list:

```
id(copy_my_list[0]) == id(my_list[0]) # True
```

so use `copy.deepcopy` to avoid your problem:

```
copy_my_list = copy.deepcopy(my_list)
id(copy_my_list[0]) == id(my_list[0]) # False
```
Use a tuple. `my_list = ([1], [1, 2], [1, 2, 3])` `my_list` is now immutable, and anytime you want a mutable copy you can just use `list(my_list)`

```
>>> my_list = ([1], [1, 2], [1, 2, 3])
>>> def mutate(aList):
    aList.pop()
    return aList

>>> mutate(list(my_list))
[[1], [1, 2]]
>>> my_list
([1], [1, 2], [1, 2, 3])
>>>
```

As someone has brought to my attention, this solution is not foolproof. The tuple itself is not mutable, but its elements are (if they are mutable objects - which lists are).
29,813,423
In the Python GUI code below I am trying to select the values from the drop-down menu buttons (graph and density) and pass them as command line arguments to the os.system command in the readfile() function, as shown below, but I am having a problem passing the values I have selected from the drop-down menus to the os.system command.

```
import os
import Tkinter as tk

def buttonClicked(btn):
    density= btn

def graphselected(graphbtn):
    graph=graphbtn

def readfile():
    os.system( 'python C:Desktop/python/ABC.py graph density')

root = tk.Tk()
root.title("Dense Module Enumeration")

btnList=[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]

btnMenu = tk.Menubutton(root, text='Density')
contentMenu = tk.Menu(btnMenu)
btnMenu.config(menu=contentMenu)
for btn in btnList:
    contentMenu.add_command(label=btn, command = lambda btn=btn: buttonClicked(btn))
btnMenu.pack()

graph_list=['graph1.txt','graph2.txt','graph3.txt','graph.txt']
btnMenu = tk.Menubutton(root, text='graph')
contentMenu = tk.Menu(btnMenu)
btnMenu.config(menu=contentMenu)
for btn in graph_list:
    contentMenu.add_command(label=btn, command =lambda btn= btn: graphselected(btn))
btnMenu.pack()

button = tk.Button(root, text="DME", command=readfile)
button.pack()

root.mainloop()
```
2015/04/23
[ "https://Stackoverflow.com/questions/29813423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2014111/" ]
It is easy to implement with [functools.partial](https://docs.python.org/2/library/functools.html#functools.partial) - apply needed value to your function for each button. Here is a sample: ``` from functools import partial import Tkinter as tk BTNLIST = [0.0, 0.1, 0.2] def btn_clicked(payload=None): """Just prints out given payload.""" print('Me was clicked. Payload: {}'.format(payload)) def init_controls(): """Prepares GUI controls and starts mainloop""" root = tk.Tk() menu = tk.Menu(root) root.config(menu=menu) sample_menu = tk.Menu(menu) menu.add_cascade(label="Destiny", menu=sample_menu) for btn_value in BTNLIST: sample_menu.add_command( label=btn_value, # Here is the trick with partial command=partial(btn_clicked, btn_value) ) root.mainloop() init_controls() ```
The way you have it, `graph` and `density` are local variables to `graphselected()` and `buttonClicked()`. Therefore, `readfile()` can never access these variables unless you declare them as global in all three functions. Then you want to format a string to incorporate the values in `graph` and `density`. You can do that using the string's [`.format` method](https://docs.python.org/2/library/stdtypes.html#str.format). Combining that, your three functions become:

```
def buttonClicked(btn):
    global density
    density = btn

def graphselected(graphbtn):
    global graph
    graph = graphbtn

def readfile():
    global density, graph
    os.system('python C:Desktop/python/ABC.py {} {}'.format(graph, density))
```
6,958,833
I'm trying to insert a string that was received as an argument into a sqlite db using python:

```
def addUser(self, name):
    cursor=self.conn.cursor()
    t = (name)
    cursor.execute("INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0);", t)
    self.conn.commit()
```

I don't want to use string concatenation because <http://docs.python.org/library/sqlite3.html> advises against it. However, when I run the code, I get the exception

```
cursor.execute("INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0);", t)
pysqlite2.dbapi2.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 7 supplied
```

Why is Python splitting the string by characters, and is there a way to prevent it from doing so?

EDIT: changing to `t = (name,)` gives the following exception

```
print "INSERT INTO users ( unique_key, name, is_online, translate) VALUES (NULL, ?, 1, 0)" + t
exceptions.TypeError: cannot concatenate 'str' and 'tuple' objects
```
2011/08/05
[ "https://Stackoverflow.com/questions/6958833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/752462/" ]
You need this: ``` t = (name,) ``` to make a single-element tuple. Remember, it's **commas** that make a tuple, not brackets!
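A quick demonstration:

```
name = "someuser"
print(type((name)))   # str   -- parentheses alone do nothing
print(type((name,)))  # tuple -- the comma makes it a tuple
```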
Your `t` variable isn't a tuple; I think it is a 7-length string. To make a tuple, don't forget to put a trailing comma:

```
t = (name,)
```
41,196,390
I have my `index.py` in `/var/www/cgi-bin` My `index.py` looks like this:

```
#!/usr/bin/python
print "Content-type:text/html\r\n\r\n"
print '<html>'
print '<head>'
print '<title>Hello Word - First CGI Program</title>'
print '</head>'
print '<body>'
print '<h2>Hello Word! This is my first CGI program</h2>'
print '</body>'
print '</html>'
```

My apache2 `/etc/apache2/sites-enabled/000-default.conf` looks like this:

```
<VirtualHost *:80>
    <Directory /var/www/cgi-bin>
        Options +ExecCGI
        AddHandler cgi-script .cgi .py
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Let me know if anything else also requires modification; I have already enabled CGI. The problem is that no matter what URL I visit, I keep getting a **Not Found** error: [localhost](http://localhost), or [localhost/index.py](http://localhost/index.py)
2016/12/17
[ "https://Stackoverflow.com/questions/41196390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3405554/" ]
Try this: enable CGI with `a2enmod cgid`, then make the script executable with `chmod a+x /var/www/cgi-bin/index.py`, and check that the owner of the `cgi-bin` directory is `www-data`. You also need a `Directory` definition in every `VirtualHost`, and sometimes a restart is required to kill all the apache threads!

```
DocumentRoot /var/www/htdocs
# A should include B if the owners are the same
<Directory /var/www/htdocs/cgi-bin/ >
    FallbackResource /index.py
    Options +ExecCGI -MultiViews -SymLinksIfOwnerMatch -Indexes
    Order allow,deny
    Allow from all
    AddHandler cgi-script .py
</Directory>
```
Use this at the top of your CGI script:

```
import cgi
import cgitb
cgitb.enable()
```
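Put together, a minimal `index.py` might look like this (a sketch in the Python 2 style of the question; the HTML body is just a placeholder):

```
#!/usr/bin/python
import cgi
import cgitb
cgitb.enable()  # render tracebacks in the browser instead of a bare 500 error

# the blank line after the header separates headers from the body
print "Content-type:text/html\r\n\r\n"
print "<html><body><h2>CGI is working</h2></body></html>"
```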
19,090,032
I need to scrape career pages of multiple companies (with their permission).

Important factors in deciding what I use:

1. I would be scraping around 2000 pages daily, so I need a decently fast solution
2. Some of these pages populate data via ajax after the page is loaded.
3. My webstack is Ruby/Rails with MySql etc.
4. I have written scrapers earlier using scrapy (python) (+ Selenium for ajax enabled pages).

My doubts:

1. I am confused whether I should go with python (i.e. scrapy + Selenium, I think this is the best alternative in python), or instead prefer something in ruby (as my entire codebase is in ruby).
2. Scrapy + selenium is often slow; are there faster alternatives in ruby? (this would make the decision easier)

The most popular Ruby alternative with support for Ajax-loaded pages seems to be **Watir**. Can anybody comment on its speed? Also, are there any other ruby alternatives (e.g. **Mechanize/Nokogiri** + *something else for Ajax-loaded pages*)?

**EDIT**

Ended up using Watir-webdriver + Nokogiri, so that I can take advantage of active record while storing data. Nokogiri is much faster than Watir-webdriver at extracting data.

Scrapy would have been faster, but the speed tradeoff wasn't as significant as the complexity tradeoff in handling different kinds of websites in scrapy (e.g. ajax-driven search on some target sites, which I have to necessarily go through).

Hopefully this helps someone.
2013/09/30
[ "https://Stackoverflow.com/questions/19090032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549934/" ]
The real benefit of closures and higher-order functions is that they can represent what the programmer sometimes has in mind. If you as the programmer find that what you have in mind is a piece of code, a function, an instruction on how to compute something (or do something), then you should use a closure for this. If, on the other hand, what you have in mind is more like an object, a thing (which happens to have some properties, methods, instructions, capabilities, etc.), then you should program it as an object, a class. In your case I think the best way to implement this is neither ;-) I'd do this with a generator: ``` def incrX(i): while True: i += 1 i %= 10 yield i incr = incrX(10) print incr.next() print incr.next() ```
With a closure, one can do away with the `self` variable. In particular, when there are many variables to be passed, a closure could be more readable.

```
class Incr:
    """a class that increments internal variable"""
    def __init__(self, i):
        self._i = i

    def __call__(self):
        self._i = (self._i + 1) % 10
        return self._i
```

```
def incr(i):
    """closure that increments internal variable"""
    def incr():
        nonlocal i
        i = (i + 1) % 10
        return i
    return incr
```

```
print('class...')
a = Incr(10)
print(a())  # 1
print(a())  # 2

print('closure...')
b = incr(10)
print(b())  # 1
print(b())  # 2
```
19,090,032
I need to scrape career pages of multiple companies (with their permission).

Important factors in deciding what I use:

1. I would be scraping around 2000 pages daily, so I need a decently fast solution
2. Some of these pages populate data via ajax after the page is loaded.
3. My webstack is Ruby/Rails with MySql etc.
4. I have written scrapers earlier using scrapy (python) (+ Selenium for ajax enabled pages).

My doubts:

1. I am confused whether I should go with python (i.e. scrapy + Selenium, I think this is the best alternative in python), or instead prefer something in ruby (as my entire codebase is in ruby).
2. Scrapy + selenium is often slow; are there faster alternatives in ruby? (this would make the decision easier)

The most popular Ruby alternative with support for Ajax-loaded pages seems to be **Watir**. Can anybody comment on its speed? Also, are there any other ruby alternatives (e.g. **Mechanize/Nokogiri** + *something else for Ajax-loaded pages*)?

**EDIT**

Ended up using Watir-webdriver + Nokogiri, so that I can take advantage of active record while storing data. Nokogiri is much faster than Watir-webdriver at extracting data.

Scrapy would have been faster, but the speed tradeoff wasn't as significant as the complexity tradeoff in handling different kinds of websites in scrapy (e.g. ajax-driven search on some target sites, which I have to necessarily go through).

Hopefully this helps someone.
2013/09/30
[ "https://Stackoverflow.com/questions/19090032", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1549934/" ]
The real benefit of closures and higher-order functions is that they can represent what the programmer sometimes has in mind. If you as the programmer find that what you have in mind is a piece of code, a function, an instruction on how to compute something (or do something), then you should use a closure for this. If, on the other hand, what you have in mind is more like an object, a thing (which happens to have some properties, methods, instructions, capabilities, etc.), then you should program it as an object, a class. In your case I think the best way to implement this is neither ;-) I'd do this with a generator: ``` def incrX(i): while True: i += 1 i %= 10 yield i incr = incrX(10) print incr.next() print incr.next() ```
Closures bind data to functions in a weird way. On the other side, data classes try to separate data from functions. To me, both of them are bad design. Never use them. OO (classes) is a much more convenient and natural way.
43,190,221
I have a training file in the following format:

> 0.086, 0.4343, 0.4212, ...., class1
>
> 0.086, 0.4343, 0.4212, ...., class2
>
> 0.086, 0.4343, 0.4212, ...., class5

Where each row is a one-dimensional vector and the last column is the class that the vector represents. We can see that a vector can repeat itself several times, since it has several classes. Reading this data is done with the python "Pandas" library.

That said, I need to conduct training with a convolutional network. I already researched some sites without much success, and I also do not know if the network needs to be prepared for the "Multi-Class" form.

I would like to know if someone knows a multi-class 1D classification approach with tensorflow, or could guide me with an example, such that after training the network, I can pass it a sample (which would be a vector) and the network gives me the correct percentage for each class. Thank you!
2017/04/03
[ "https://Stackoverflow.com/questions/43190221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6363322/" ]
This is a pretty straightforward setup. First thing to know: Your labels need to be in "one hot encoding" format. That means, if you have 5 classes, class 1 is represented by the vector [1,0,0,0,0], class 2 by the vector [0,1,0,0,0], and so on. This is standard (see the sketch at the end of this answer).

Second, you mention that you want multi-class classification. But the example you gave is single-class classification. So this is probably just a terminology mix-up. When you say multi-class classification it means that you want a single sample to belong to more than one class, let's say your first sample is part of both class 2 and class 3. But it doesn't look like that in your case.

So for single-class classification with 5 classes you want to use cross entropy as your loss function. You can follow the cifar 10 tutorial. This is the same setup where each image is 1 of 10 classes. <https://www.tensorflow.org/tutorials/deep_cnn>

You mentioned that your data is 1-dimensional. This is trivial to accomplish, just treat it like the cifar 10 2-dimensional data with one of those dimensions set to be 1. You don't need to change any other code. In the cifar 10 example your images will be 32x32, in your data your images will be maybe 32x1, or 10x1, whatever kernel you decide on (try different kernel sizes!). The same change will apply to stride. Just treat your problem as a 2D problem with a flat 2nd dimension, easy as pie.
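Coming back to the one-hot format from the start of this answer, a minimal sketch with plain NumPy (assuming your class labels are 0-based integers; adapt if they start at 1):

```
import numpy as np

labels = np.array([0, 1, 4, 2])        # class indices for four samples
num_classes = 5
one_hot = np.eye(num_classes)[labels]  # row i is the one-hot vector for sample i
print(one_hot[0])                      # [1. 0. 0. 0. 0.] -> class 1
```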
From what I understand you have a multi-label problem, meaning that a sample can belong to more than one class.

Take a look at [sigmoid\_cross\_entropy\_with\_logits](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits) and use that as your loss function. You do not need to use one hot encoding or repeat your samples for each label they belong to for this loss function. Just use a label vector and set to one the classes that the sample belongs to.
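A minimal sketch of that loss (the numbers are made up; `logits` would be the raw outputs of your last layer and `labels` a float multi-hot vector):

```
import tensorflow as tf

# 1.0 for every class the sample belongs to -- multi-hot, not one-hot
labels = tf.constant([[1.0, 0.0, 1.0, 0.0, 0.0]])
logits = tf.constant([[2.3, -1.1, 0.7, -0.4, -2.0]])  # raw, un-sigmoided outputs

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
total_loss = tf.reduce_mean(loss)  # scalar to feed to your optimizer
```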
69,262,618
So I just watched a tutorial that the author didn't need to `import sklearn` when using `predict` function of pickled model in anaconda environment (sklearn installed). I have tried to reproduce the minimal version of it in Google Colab. If you have a pickled-sklearn-model, the code below works in Colab (sklearn installed): ``` import pickle model = pickle.load(open("model.pkl", "rb"), encoding="bytes") out = model.predict([[20, 0, 1, 1, 0]]) print(out) ``` I realized that I still need the sklearn package installed. If I uninstall the sklearn, the `predict` function now is not working: ``` !pip uninstall scikit-learn import pickle model = pickle.load(open("model.pkl", "rb"), encoding="bytes") out = model.predict([[20, 0, 1, 1, 0]]) print(out) ``` the error: ``` WARNING: Skipping scikit-learn as it is not installed. --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-1-dec96951ae29> in <module>() 1 get_ipython().system('pip uninstall scikit-learn') 2 import pickle ----> 3 model = pickle.load(open("model.pkl", "rb"), encoding="bytes") 4 out = model.predict([[20, 0, 1, 1, 0]]) 5 print(out) ModuleNotFoundError: No module named 'sklearn' ``` So, how does it work? as far as I understand pickle doesn't depend on scikit-learn. Does the serialized model do `import sklearn`? **Why can I use `predict` function without import scikit learn in the first code?**
2021/09/21
[ "https://Stackoverflow.com/questions/69262618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147347/" ]
There are a few questions being asked here, so let's go through them one by one:

> So, how does it work? as far as I understand pickle doesn't depend on scikit-learn.

There is nothing particular to scikit-learn going on here. Pickle will exhibit this behaviour for any module. Here's an example with Numpy:

```
will@will-desktop ~ $ python
Python 3.9.6 (default, Aug 24 2021, 18:12:51)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> import sys
>>> 'numpy' in sys.modules
False
>>> import numpy
>>> 'numpy' in sys.modules
True
>>> pickle.dumps(numpy.array([1, 2, 3]))
b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.'
>>> exit()
```

So far what I've done is show that in a fresh Python process `'numpy'` is not in `sys.modules` (the dict of imported modules). Then we import Numpy, and pickle a Numpy array. Then in a new Python process shown below, we see that before we unpickle the array Numpy has not been imported, but after we do, Numpy has been imported.

```
will@will-desktop ~ $ python
Python 3.9.6 (default, Aug 24 2021, 18:12:51)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> import sys
>>> 'numpy' in sys.modules
False
>>> pickle.loads(b'\x80\x04\x95\xa0\x00\x00\x00\x00\x00\x00\x00\x8c\x15numpy.core.multiarray\x94\x8c\x0c_reconstruct\x94\x93\x94\x8c\x05numpy\x94\x8c\x07ndarray\x94\x93\x94K\x00\x85\x94C\x01b\x94\x87\x94R\x94(K\x01K\x03\x85\x94h\x03\x8c\x05dtype\x94\x93\x94\x8c\x02i8\x94\x89\x88\x87\x94R\x94(K\x03\x8c\x01<\x94NNNJ\xff\xff\xff\xffJ\xff\xff\xff\xffK\x00t\x94b\x89C\x18\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x94t\x94b.')
array([1, 2, 3])
>>> 'numpy' in sys.modules
True
>>> numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'numpy' is not defined
```

Despite being imported, however, `numpy` is still not a defined variable name. Imports in Python are global, but an import will only update the namespace of the module that actually did the import. If we want to access `numpy` we still need to write `import numpy`, but since Numpy was already imported elsewhere in the process this will not re-run Numpy's module initialization code. Instead it will create a `numpy` variable in our module's globals dictionary, and make it a reference to the Numpy module object that existed beforehand, and could be accessed through `sys.modules['numpy']`.

So what is Pickle doing here? It embeds the information about what module was used to define whatever it is pickling within the pickle. Then when it unpickles something, it uses that information to import the module such that it can use the unpickle method of the class.
Looking at the source code for the Pickle module, we can see that's what's happening:

In the [`_Pickler`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L407) we see the [`save`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L535) method uses the [`save_global`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1056) method. This in turn uses the [`whichmodule`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L335) function to obtain the module name (`'sklearn'`, in your case), which is then saved in the pickle.

In the [`_UnPickler`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1137) we see the [`find_class`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1572) method uses [`__import__`](https://docs.python.org/3/library/functions.html#__import__) to import the module using the stored module name. The `find_class` method is used in a few of the `load_*` methods, such as [`load_inst`](https://github.com/python/cpython/blob/7b88f63e1dd4006b1a08b9c9f087dd13449ecc76/Lib/pickle.py#L1497), which is what would be used to load an instance of a class, such as your model instance:

```py
def load_inst(self):
    module = self.readline()[:-1].decode("ascii")
    name = self.readline()[:-1].decode("ascii")
    klass = self.find_class(module, name)
    self._instantiate(klass, self.pop_mark())
```

[The documentation for `Unpickler.find_class` explains](https://docs.python.org/3/library/pickle.html#pickle.Unpickler.find_class):

> Import module if necessary and return the object called name from it, where the module and name arguments are str objects.

[The docs also explain how you can restrict this behaviour](https://docs.python.org/3/library/pickle.html#restricting-globals):

> [You] may want to control what gets unpickled by customizing Unpickler.find\_class(). Unlike its name suggests, Unpickler.find\_class() is called whenever a global (i.e., a class or a function) is requested. Thus it is possible to either completely forbid globals or restrict them to a safe subset.

Though this is generally only relevant when unpickling untrusted data, which doesn't appear to be the case here.

---

> Does the serialized model do import sklearn?

The serialized model itself doesn't *do* anything, strictly speaking. It's all handled by the Pickle module as described above.

---

> Why can I use predict function without import scikit learn in the first code?

Because sklearn is imported by the Pickle module when it unpickles the data, thereby providing you with a fully realized model object. It's just like if some other module imported sklearn, created the model object, and then passed it into your code as a parameter to a function.

---

As a consequence of all this, in order to unpickle your model you'll need to have sklearn installed - ideally the same version that was used to create the pickle. In general the Pickle module stores the fully qualified path of any required module, so the Python process that pickles the object and the one that unpickles the object must have all [1] required modules exist with the same fully qualified names.
--- [1] A caveat to that is that the Pickle module can automatically adjust/fix certain imports for particular modules/classes that have different fully qualified names between Python 2 and 3. From [the docs](https://docs.python.org/3/library/pickle.html#pickle.Unpickler): > > If fix\_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. > > >
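As an aside, a minimal sketch of the `find_class` restriction quoted above (the whitelist here is arbitrary, following the pattern in the Python docs):

```
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Resolve only a small whitelist of safe builtins; refuse everything else.
        if module == "builtins" and name in {"list", "dict", "set"}:
            return super().find_class(module, name)
        raise pickle.UnpicklingError("global '%s.%s' is forbidden" % (module, name))

data = pickle.dumps([1, 2, 3])
print(RestrictedUnpickler(io.BytesIO(data)).load())  # [1, 2, 3]
```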
*When the model was first pickled*, you had sklearn installed. The pickle file depends on sklearn for its structure, as the class of the object it represents is a sklearn class, and `pickle` needs to know the details of that class’s structure in order to unpickle the object. When you try to unpickle the file without sklearn installed, `pickle` determines from the file that the class the object is an instance of is `sklearn.x.y.z` or what have you, and then the unpickling fails because the module `sklearn` cannot be found when `pickle` tries to resolve that name. Notice that the exception occurs on the unpickling line, not on the line where `predict` is called. You don’t need to import sklearn in your code when it does work because once the object is unpickled, it knows what its class is and what all its method names are, so you can just call them from the object.
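You can see the stored module path for yourself with `pickletools` — a quick sketch:

```
import pickle
import pickletools

class Model:
    pass

# The disassembly shows the defining module and class name stored as text
# (here '__main__' and 'Model'); that is what drives the import at load time.
pickletools.dis(pickle.dumps(Model()))
```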
42,913,788
I'm trying to ask a question on python, so that if the person gets it right, they can move onto the next question. If they get it wrong, they have 3 or so attempts at getting it right, before the quiz moves onto the next question.

I thought I solved it with the below program, however this just asks the user to make another choice even if they get it correct. How do I move onto the next question if the user gets it correct, but also give another chance to those that get it wrong?

```
score = 0
counter = 0
while counter<3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print("Correct!")
        score += 1
    else:
        print("That is incorrect. Try again.")
        counter = counter +1
print("The correct answer is C!")
print("Your current score is {0}".format(score))
```
2017/03/20
[ "https://Stackoverflow.com/questions/42913788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7735015/" ]
You're stuck in the loop. So put

```
counter = 3
```

after

```
score += 1
```

to get out of the loop.

```
score = 0
counter = 0
while counter<3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print("Correct!")
        score += 1
        counter = 3
    else:
        print("That is incorrect. Try again.")
        counter = counter +1
print("The correct answer is C!")
print("Your current score is {0}".format(score))
```
You're stuck in the loop; a cleaner way of solving this is using the `break` statement, as in:

```
score = 0
counter = 0
while counter < 3:
    answer = input("Make your choice >>>> ")
    if answer == "c":
        print ("Correct!")
        score += 1
        break
    else:
        print("That is incorrect. Try Again")
        counter += 1
print("The correct answer is C!")
print("Your current score is {" + str(score) + "}")
```

I would like to highlight a few things about your original code. 1- Python is case sensitive; the code that you gave us will work as long as you type 'c' in lowercase. 2- I edited the last line so it would correctly print the score.

For further reading about control flow and the `break` statement, try the python docs here: <https://docs.python.org/2/tutorial/controlflow.html>
66,157,729
I have some info store in a MySQL database, something like: `AHmmgZq\n/+AH+G4` We get that using an API, so when I read it in my python I get: `AHmmgZq\\n/+AH+G4` The backslash is doubled! Now I need to put that into a JSON file, how can I remove the extra backslash? **EDIT:** let me show my full code: ``` json_dict = { "private_key": "AHmmgZq\\n/+AH+G4" } print(json_dict) print(json_dict['private_key']) with open(file_name, "w", encoding="utf-8") as f: json.dump(json_dict, f, ensure_ascii=False, indent=2) ``` In the first print I have the doubled backslash, but in the second one there's only one. When I dump it to the json file it gives me doubled.
2021/02/11
[ "https://Stackoverflow.com/questions/66157729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4663446/" ]
Turns out that the badge appears once you open a TeX file. I thought you'd first create a TeX project, then the file.
As you already figured out, the badge appears once you open a TeX file. Keep also in mind that you have to install LaTeX, or update LaTeX. I say so because personally I was trying to use `\tableofcontents` but the table wouldn't be generated until the moment I installed texlive using homebrew (`brew install texlive`)
39,305,286
According to [documentation](https://docs.python.org/3.4/c-api/capsule.html?highlight=capsule), the third argument to `PyCapsule_New()` can specify a destructor, which I assume should be called when the capsule is destroyed. ``` void mapDestroy(PyObject *capsule) { lash_map_simple_t *map; fprintf(stderr, "Entered destructor\n"); map = (lash_map_simple_t*)PyCapsule_GetPointer(capsule, "MAP_C_API"); if (map == NULL) return; fprintf(stderr, "Destroying map %p\n", map); lashMapSimpleFree(map); free(map); } static PyObject * mapSimpleInit_func(PyObject *self, PyObject *args) { unsigned int w; unsigned int h; PyObject *pymap; lash_map_simple_t *map = (lash_map_simple_t*)malloc(sizeof(lash_map_simple_t)); pymap = PyCapsule_New((void *)map, "MAP_C_API", mapDestroy); if (!PyArg_ParseTuple(args, "II", &w, &h)) return NULL; lashMapSimpleInit(map, &w, &h); return Py_BuildValue("O", pymap); } ``` However, when I instantiate the object and delete it or exit from Python console, the destructor doesn't seem to be called: ``` >>> a = mapSimpleInit(10,20) >>> a <capsule object "MAP_C_API" at 0x7fcf4959f930> >>> del(a) >>> a = mapSimpleInit(10,20) >>> a <capsule object "MAP_C_API" at 0x7fcf495186f0> >>> quit() lash@CANTANDO ~/programming/src/liblashgame $ ``` My guess is that it has something to do with the `Py_BuildValue()` returning a new reference to the "capsule", which upon deletion doesn't affect the original. Anyway, how would I go about ensuring that the object is properly destroyed? Using Python 3.4.3 [GCC 4.8.4] (on linux)
2016/09/03
[ "https://Stackoverflow.com/questions/39305286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3333488/" ]
The code above has a reference leak: `pymap = PyCapsule_New()` returns a new object (its refcount is 1), but `Py_BuildValue("O", pymap)` creates a new reference to the same object, and its refcount is now 2. Just `return pymap;`.
`Py_BuildValue("O", thingy)` will just increment the refcount for `thingy` and return it – the docs say that it returns a “new reference” but that is not quite true when you pass it an existing `PyObject*`. If these functions of yours – the ones in your question, that is – are all defined in the same translation unit, the destructor function will likely have to be declared `static` (so its full signature would be `static void mapDestroy(PyObject* capsule);`) to ensure that the Python API can look up the functions’ address properly when it comes time to call the destructor. … You don’t have to use a `static` function, as long as the destructor’s address in memory is valid. For example, I’ve successfully used [a C++ non-capturing lambda as a destructor](https://github.com/fish2000/libimread/blob/093a4e6d6556b543dbf750a2d61f746b6cf12e77/python/im/include/pycapsule.hpp#L18-L30), as non-capturing C++ lambdas can be converted to function pointers; if you want to use another way of obtaining and handing off a function pointer for your capsule destructor that works better for you, by all means go for it.
48,775,587
I am trying to learn python through some basic exercises with my own online store. I have a list of parts that are in-transit to us that we have already ordered, and I have a list of parts that we are currently out of stock of. I want to be able to send a list to the supplier of what we need - but I do not want to create duplicate orders as a result of the fact that the parts on order, are listed as out of stock. I put together this basic program that looks through the list of items that are out of stock and only prints the item if it is present in the outofstock list but *not* present in the onorder list, so that if it is on order we do not order it again. However, it outputs nothing. ``` onorder = ["A1417", "A1322", "ISL6259", "LP8545B1SQ", "PM6640", "SLG3NB148V", "PD4HDMIREG", "338S1201", "SN2400B0", "AD7149", "J3801", "J4502", "IPRO97B"] outofstock = ["ISL6259", "LY-UVH900", "triwing", "banana-to-alligator", "LP8548B1SQ", "EDP-J9000-30-PIN-IPEX", "J3801", "LT3470", "PM6640", "SN2400B0", "IPRO97B", "SLG3NB148V", "SN2400AB0", "usbammeter", "821-00814-A", "J5713", "343S0645", "PMCM4401VPE", "J4502", "PMD9645", "J9600", "J2401", "AD7149", "593-1604", "821-1722", "LM3534TMX", "U4001"] for part in onorder: if (part in onorder) == False and (part in outofstock) == True: print (part) ``` It doesn't print anything, even though there are entries in outofstock that are not in onorder. If I try this outside of a loop, it works and prints every part in the onorder list. ``` for part in onorder: print (part) ``` If I try this outside a loop, it also works and prints triwing, since it is true that triwing is in the outofstock list. ``` if ('triwing' in outofstock) == True: print ("triwing") ``` However, the program in the for loop returns nothing. What am I missing?
2018/02/13
[ "https://Stackoverflow.com/questions/48775587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6713690/" ]
``` for part in onorder: if (part in onorder) == False ... ``` This does not make sense. Since you are iterating over exactly every element of `onorder`, you will never get a `part` not in `onorder`. Therefore, it is not a miracle that the print statement is not being executed.
Doh! The appropriate code was:

```
for part in outofstock:
    if (part not in onorder):
        print (part)
```

This way it prints my out-of-stock items which I need to order, unless they were already on order. I can't believe I overly complicated this for no good reason. Thank you so much for pointing out where I had gone wrong. This was such a dumb question in hindsight.
48,775,587
I am trying to learn python through some basic exercises with my own online store. I have a list of parts that are in-transit to us that we have already ordered, and I have a list of parts that we are currently out of stock of. I want to be able to send a list to the supplier of what we need - but I do not want to create duplicate orders as a result of the fact that the parts on order, are listed as out of stock. I put together this basic program that looks through the list of items that are out of stock and only prints the item if it is present in the outofstock list but *not* present in the onorder list, so that if it is on order we do not order it again. However, it outputs nothing. ``` onorder = ["A1417", "A1322", "ISL6259", "LP8545B1SQ", "PM6640", "SLG3NB148V", "PD4HDMIREG", "338S1201", "SN2400B0", "AD7149", "J3801", "J4502", "IPRO97B"] outofstock = ["ISL6259", "LY-UVH900", "triwing", "banana-to-alligator", "LP8548B1SQ", "EDP-J9000-30-PIN-IPEX", "J3801", "LT3470", "PM6640", "SN2400B0", "IPRO97B", "SLG3NB148V", "SN2400AB0", "usbammeter", "821-00814-A", "J5713", "343S0645", "PMCM4401VPE", "J4502", "PMD9645", "J9600", "J2401", "AD7149", "593-1604", "821-1722", "LM3534TMX", "U4001"] for part in onorder: if (part in onorder) == False and (part in outofstock) == True: print (part) ``` It doesn't print anything, even though there are entries in outofstock that are not in onorder. If I try this outside of a loop, it works and prints every part in the onorder list. ``` for part in onorder: print (part) ``` If I try this outside a loop, it also works and prints triwing, since it is true that triwing is in the outofstock list. ``` if ('triwing' in outofstock) == True: print ("triwing") ``` However, the program in the for loop returns nothing. What am I missing?
2018/02/13
[ "https://Stackoverflow.com/questions/48775587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6713690/" ]
You're looping over the wrong list. To find items in `outofstock` but not in `onorder`, loop over `outofstock`: ```
for part in outofstock:
    if part not in onorder:
        print(part)
``` Simpler would be to convert both lists to sets, and compute the difference: ```
print(set(outofstock) - set(onorder))
```
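One caveat worth noting: sets are unordered, so the set-difference version may not preserve the order of `outofstock`. A sketch that keeps the original order while still using a set for fast membership tests:

```
onorder_set = set(onorder)  # build the lookup set once, for O(1) membership tests
needed = [part for part in outofstock if part not in onorder_set]
print(needed)
```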
Doh! The appropriate code was ```
for part in outofstock:
    if (part not in onorder):
        print (part)
``` This way it prints the out-of-stock items I need to order, unless they were already on order. I can't believe I overcomplicated this for no good reason. Thank you so much for pointing out where I had gone wrong. This was such a dumb question in hindsight.
2,286,633
I have a basic grasp of XML and Python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ```
<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>
``` There are about 15 different `<b>` tags with dozens of children. What I'd like to do, given a level number (e.g. 1), is find the `<v>` node for the corresponding level. I just have no idea how to go about this.
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
```
#!/usr/bin/python
from xml.dom.minidom import parseString

xml = parseString("""<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>""")

level = 1

# Walk the <b> blocks until we find the "Levels" one, then pick an <l> by position.
blist = xml.getElementsByTagName('b')
for b in blist:
    if b.getAttribute('n') == 'Levels':
        llist = b.getElementsByTagName('l')
        # NodeList.item() is zero-based, so item(1) is the *second* <l> ("Level2").
        l = llist.item(level)
        v = l.getElementsByTagName('v')
        print v.item(0).firstChild.nodeValue  # prints Intermediate Level
```
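Because `item(level)` selects by position, a level number of 1 here yields "Level2". A rough alternative sketch that matches on the `k` attribute instead (reusing the `xml` document parsed above):

```
def value_for_level(doc, n):
    # Look under <b n="Levels"> for <l k="LevelN"> and return its <v> text.
    key = "Level%d" % n
    for b in doc.getElementsByTagName('b'):
        if b.getAttribute('n') != 'Levels':
            continue
        for l in b.getElementsByTagName('l'):
            if l.getAttribute('k') == key:
                return l.getElementsByTagName('v')[0].firstChild.nodeValue
    return None  # no such level

print(value_for_level(xml, 1))  # -> Beginner Level
```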
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): if "<l" in c and level in c: for v in c.split("</v>"): if "<v>" in v: print v[v.index("<v>")+3:] ```
2,286,633
I have a basic grasp of XML and Python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ```
<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>
``` There are about 15 different `<b>` tags with dozens of children. What I'd like to do, given a level number (e.g. 1), is find the `<v>` node for the corresponding level. I just have no idea how to go about this.
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
If you really only care about searching for an `<l>` tag with a specific "k" attribute and then getting its `<v>` tag (that's how I understood your question), you could do it with DOM: ```
from xml.dom.minidom import parseString

xmlDoc = parseString("""<document goes here>""")

lNodesWithLevel2 = [lNode for lNode in xmlDoc.getElementsByTagName("l")
                    if lNode.getAttribute("k") == "Level2"]
# Take the first <v> element under each matching <l> node.
matchingVNodes = map(lambda lNode: lNode.getElementsByTagName("v")[0], lNodesWithLevel2)

print map(lambda vNode: vNode.firstChild.nodeValue, matchingVNodes)
# Prints [u'Intermediate Level']
``` Hope that is what you meant.
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): if "<l" in c and level in c: for v in c.split("</v>"): if "<v>" in v: print v[v.index("<v>")+3:] ```
2,286,633
I have a basic grasp of XML and Python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ```
<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>
``` There are about 15 different `<b>` tags with dozens of children. What I'd like to do, given a level number (e.g. 1), is find the `<v>` node for the corresponding level. I just have no idea how to go about this.
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
You might consider using XPath, a language for addressing parts of an XML document. Here's the answer using `lxml.etree` and its support for XPath. ```
>>> data = """
... <localization>
...   <b n="Stats">
...     <l k="SomeStat1">
...       <v>10</v>
...     </l>
...     <l k="SomeStat2">
...       <v>6</v>
...     </l>
...   </b>
...   <b n="Levels">
...     <l k="Level1">
...       <v>Beginner Level</v>
...     </l>
...     <l k="Level2">
...       <v>Intermediate Level</v>
...     </l>
...   </b>
... </localization>
... """
>>>
>>> from lxml import etree
>>>
>>> xmldata = etree.XML(data)
>>> xmldata.xpath('/localization/b[@n="Levels"]/l[@k=$level]/v/text()', level='Level1')
['Beginner Level']
```
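A small follow-up sketch (assuming the `xmldata` tree from above): because the key is passed as an XPath variable (`$level`) rather than spliced into the expression, it can safely be built from user input:

```
def level_text(tree, n):
    # $level is bound through the keyword argument, so no string splicing occurs
    return tree.xpath('/localization/b[@n="Levels"]/l[@k=$level]/v/text()',
                      level='Level%d' % n)

print(level_text(xmldata, 2))  # -> ['Intermediate Level']
```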
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): if "<l" in c and level in c: for v in c.split("</v>"): if "<v>" in v: print v[v.index("<v>")+3:] ```
2,286,633
I have a basic grasp of XML and Python and have been using minidom with some success. I have run into a situation where I am unable to get the values I want from an XML file. Here is the basic structure of the pre-existing file. ```
<localization>
  <b n="Stats">
    <l k="SomeStat1">
      <v>10</v>
    </l>
    <l k="SomeStat2">
      <v>6</v>
    </l>
  </b>
  <b n="Levels">
    <l k="Level1">
      <v>Beginner Level</v>
    </l>
    <l k="Level2">
      <v>Intermediate Level</v>
    </l>
  </b>
</localization>
``` There are about 15 different `<b>` tags with dozens of children. What I'd like to do, given a level number (e.g. 1), is find the `<v>` node for the corresponding level. I just have no idea how to go about this.
2010/02/18
[ "https://Stackoverflow.com/questions/2286633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224476/" ]
If you could use the [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/documentation.html) library (couldn't you?) you could end up with this dead-simple code: ```
from BeautifulSoup import BeautifulStoneSoup

def get_it(xml, level_n):
    soup = BeautifulStoneSoup(xml)
    l = soup.find('l', k="Level%d" % level_n)
    return l.v.string

if __name__ == '__main__':
    # Load the XML shown in the question (the path is illustrative).
    xml = open('localization.xml').read()
    print get_it(xml, 1)
``` It prints `Beginner Level` for the example XML you provided.
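For what it's worth, `BeautifulStoneSoup` only exists in BeautifulSoup 3; a rough equivalent under BeautifulSoup 4 (assuming `lxml` is installed, which its "xml" mode requires) might look like:

```
from bs4 import BeautifulSoup  # BeautifulSoup 4

def get_it(xml, level_n):
    soup = BeautifulSoup(xml, "xml")  # the "xml" parser is backed by lxml
    l = soup.find('l', k="Level%d" % level_n)
    return l.v.string
```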
``` level = "Level"+raw_input("Enter level number: ") content= open("xmlfile").read() data= content.split("</localization>") for item in data: if "localization" in item: s = item.split("</b>") for i in s: if """<b n="Levels">""" in i: for c in i.split("</l>"): if "<l" in c and level in c: for v in c.split("</v>"): if "<v>" in v: print v[v.index("<v>")+3:] ```