29000392
When I try to use the sorted() function in Python, it only sorts the elements within each array alphabetically; the first 3 outputs are: ``` [u'A', u'a', u'a', u'f', u'g', u'h', u'i', u'n', u'n', u's', u't'] [u'N', u'a', u'e', u'g', u'i', u'i', u'r'] [u'C', u'a', u'e', u'm', u'n', u'o', u'o', u'r'] ``` These should be Afghanistan, Nigeria and Cameroon respectively, but instead they are only sorted within their own array. Where have I gone wrong in my code? ``` import urllib2 import csv from bs4 import BeautifulSoup url = "http://en.wikipedia.org/wiki/List_of_ongoing_armed_conflicts" soup = BeautifulSoup(urllib2.urlopen(url)) #f= csv.writer(open("test.csv","w")) #f.writerow(["location"]) def unique(countries): seen = set() for country in countries: l = country.lower() if l in seen: continue seen.add(l) yield country for row in soup.select('table.wikitable tr'): cells = row.find_all('td') if cells: for location in cells[3].find_all(text=True): location = location.split() for locations in unique(location): print sorted(locations) #f.writerow([location]) ```
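For reference, a minimal sketch (not part of the original post) of why this happens: `sorted()` iterates over whatever it is given, so a single string is broken into a sorted list of its characters, while a list of strings is sorted as whole words.

```python
# sorted() on a single string iterates over its characters
print(sorted(u"Nigeria"))   # ['N', 'a', 'e', 'g', 'i', 'i', 'r']

# sorted() on a list of strings sorts the whole strings
countries = [u"Nigeria", u"Afghanistan", u"Cameroon"]
print(sorted(countries))    # ['Afghanistan', 'Cameroon', 'Nigeria']
```

So the fix would be to collect the country names into one list and sort that list once, rather than calling `sorted()` on each individual name.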
2015/03/12
[ "https://Stackoverflow.com/questions/29000392", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3092868/" ]
As far as I'm concerned, the built-in VBE is the best way to go. It has not changed in the last 10 years or even longer, but it is tightly integrated into Excel and does everything you need, especially as a beginner.
Some time has passed since you asked, but maybe someone else will find it useful. I looked for the same thing and couldn't find anything good until I found Notepad++. It makes analyzing the code much easier.
31388514
I have to split a list of characters such that it gets cut when it encounters a vowel. For example, a string like ``` toy = ['b', 'a', 'm', 'b', 'i', 'n', 'o'] ``` the output should be ``` [('b', 'a'), ('m', 'b', 'i'), ('n', 'o')] ``` I tried to run 2 loops, one behind the other. ``` # usr/bin/env/python apple = [] consonants = ('k', 'h', 'b', 'n') vowels = ('i', 'a', 'u') toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i') for i in range(len(toy)): if i == 0: pass else: if toy[i] in vowels: for k in range(i - 1, len(toy)): if toy[k - 1] in consonants: n = toy[i - k:i - 1] apple.append(n) break print apple ``` But this does not let me come out of the loop once the vowel is reached. Using the "bambino" example, it gives me something like `[('b', 'a'), ('b', 'a', 'm', 'b', 'i'), ('b', 'a', 'm', 'b', 'i', 'n', 'o')]`. Can someone please help?
2015/07/13
[ "https://Stackoverflow.com/questions/31388514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3116297/" ]
You seem to be overcomplicating things. A simple solution for this would be: ``` >>> toy = ['b', 'a', 'm', 'b', 'i', 'n', 'o'] >>> vowels = ['a','i','e','o','u'] >>> apples = [] >>> k = 0 >>> for i, x in enumerate(toy): ... if x in vowels: ... apples.append(tuple(toy[k:i+1])) ... k = i+1 ... >>> apples [('b', 'a'), ('m', 'b', 'i'), ('n', 'o')] ``` The [`enumerate()`](https://docs.python.org/2/library/functions.html#enumerate) function returns the index as well as the current element of the list.
You could also use this: ``` #usr/bin/env/python apple = [] vowels = ('i', 'a', 'u') toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i') collector = [] for i in toy: collector.append(i) if i in vowels: apple.append(collector) collector = [] print apple ``` Result: ``` [['k', 'h', 'u'], ['b', 'a'], ['n', 'i']] ```
31388514
I have to split a list of characters such that it gets cut when it encounters a vowel. For example, a string like ``` toy = ['b', 'a', 'm', 'b', 'i', 'n', 'o'] ``` the output should be ``` [('b', 'a'), ('m', 'b', 'i'), ('n', 'o')] ``` I tried to run 2 loops, one behind the other. ``` # usr/bin/env/python apple = [] consonants = ('k', 'h', 'b', 'n') vowels = ('i', 'a', 'u') toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i') for i in range(len(toy)): if i == 0: pass else: if toy[i] in vowels: for k in range(i - 1, len(toy)): if toy[k - 1] in consonants: n = toy[i - k:i - 1] apple.append(n) break print apple ``` But this does not let me come out of the loop once the vowel is reached. Using the "bambino" example, it gives me something like `[('b', 'a'), ('b', 'a', 'm', 'b', 'i'), ('b', 'a', 'm', 'b', 'i', 'n', 'o')]`. Can someone please help?
2015/07/13
[ "https://Stackoverflow.com/questions/31388514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3116297/" ]
1. Have a temporary list and a result list 2. Loop through the elements until the end is reached 3. Add the current element to the temporary list 4. If the current element is a vowel, then add the temporary list to the result list and empty the temporary list 5. Go to step 2 --- ``` >>> result = [] >>> temp = [] >>> for char in toy: ... temp.append(char) ... if char.lower() in "aeiou": ... result.append(temp) ... temp = [] ... if temp: ... result.append(temp) ... >>> result [['b', 'a'], ['m', 'b', 'i'], ['n', 'o']] ```
You could also use this: ``` #usr/bin/env/python apple = [] vowels = ('i', 'a', 'u') toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i') collector = [] for i in toy: collector.append(i) if i in vowels: apple.append(collector) collector = [] print apple ``` Result: ``` [['k', 'h', 'u'], ['b', 'a'], ['n', 'i']] ```
13608029
I have 365 2d `numpy` arrays, one for every day of the year, each displaying an image like this: ![http://i50.tinypic.com/34i62gw.jpg](https://i.stack.imgur.com/yJR7N.jpg) I have them all stacked in a 3d numpy array. I want to get rid of pixels whose value represents cloud: I want to search through the previous 7 days or the next 7 days (previous 7 layers, next 7 layers) to find a value other than cloud, and then replace the cloud value with one of the possible values for that pixel (values experienced in the other days/layers for the corresponding pixel). I am new to Python, and am a bit lost. Any ideas? Thanks
2012/11/28
[ "https://Stackoverflow.com/questions/13608029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1860229/" ]
You are essentially trying to write a filter for your array. First you need to write a function that, when given an array of values with the middle one being the element currently examined, will return some computation of those values. In your case the function will expect to take a 1-d array and return the element nearest to the middle index that is not cloud: ``` import numpy as np from scipy.ndimage.filters import generic_filter _cloud = -1 def findNearestNonCloud(elements): middleIndex = len(elements) / 2 if elements[middleIndex] != _cloud: return elements[middleIndex] # middle value is not cloud nonCloudIndices, = np.where(elements != _cloud) if len(nonCloudIndices) == 0: return elements[middleIndex] # all values were cloud prevNonCloudIndex = np.where(nonCloudIndices < middleIndex, nonCloudIndices, -1).max() nextNonCloudIndex = -np.where(nonCloudIndices > middleIndex, -nonCloudIndices, 1).min() # -1 means no non-cloud index # pick index closest to middle index if (abs(prevNonCloudIndex - middleIndex) <= abs(nextNonCloudIndex - middleIndex)): return elements[prevNonCloudIndex] else: return elements[nextNonCloudIndex] ``` Now you need to apply this function to the elements you're interested in. To do this you'll need a mask that denotes which other elements you'll be interested in with regard to a specific element. ``` from scipy.ndimage.filters import generic_filter # creates 5 days worth of a 3x3 plot of land input = np.ones((5, 3, 3)) * _cloud input[0,:,:] = 10 # set first "image" to all be 10s input[4,0,0] = 12 # upper left corner of fourth image is set to 12 print "input data\n", input, "\n" mask = (5, 1, 1) # mask represents looking at the present day, 2 days in the future and 2 days in # the past for 5 days in total. print "result\n", generic_filter(input, findNearestNonCloud, size=mask) # second and third images should mirror first image, # except upper left corner of third image should be 12 ```
I would think that you could do something like: ``` data = somehow_get_your_3d_data() #indexed as [day_of_year,y,x] for i,dat in enumerate(data): weeks2 = data[max(i-7,0):min(i+8,len(data)), ... ] new_value = get_new_value(weeks2) #get value from weeks2 here somehow dat[dat == cloud_value] = new_value ```
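A concrete version of that idea, for illustration only (assumptions: the cloud sentinel value, and filling with the per-pixel mean of the non-cloud days in the ±7-day window; the original sketch leaves `get_new_value` unspecified):

```python
import numpy as np

CLOUD = -1  # assumed sentinel for cloudy pixels (not specified in the post)

def fill_clouds(data, radius=7):
    """Replace cloud pixels in each day's image with the per-pixel mean of
    non-cloud values from up to `radius` days before and after."""
    out = data.copy()
    for i in range(len(data)):
        window = data[max(i - radius, 0):min(i + radius + 1, len(data))]
        valid = np.ma.masked_equal(window, CLOUD)   # hide cloud values
        fill = valid.mean(axis=0).filled(CLOUD)     # per-pixel mean; CLOUD if no data
        cloudy = data[i] == CLOUD
        out[i][cloudy] = fill[cloudy]
    return out
```

Other fill choices (nearest non-cloud day, median, interpolation) drop in by replacing the `mean` line.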
13608029
I have 365 2d `numpy` arrays, one for every day of the year, each displaying an image like this: ![http://i50.tinypic.com/34i62gw.jpg](https://i.stack.imgur.com/yJR7N.jpg) I have them all stacked in a 3d numpy array. I want to get rid of pixels whose value represents cloud: I want to search through the previous 7 days or the next 7 days (previous 7 layers, next 7 layers) to find a value other than cloud, and then replace the cloud value with one of the possible values for that pixel (values experienced in the other days/layers for the corresponding pixel). I am new to Python, and am a bit lost. Any ideas? Thanks
2012/11/28
[ "https://Stackoverflow.com/questions/13608029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1860229/" ]
I solved it by this: ``` interpdata = np.zeros_like(stack) for j in range(len(stack)): try: temp = np.where( stack[j] == 50, stack[j-1], stack[j] ) temp = np.where( temp == 50, stack[j+1], temp ) temp = np.where( temp == 50, stack[j-2], temp ) temp = np.where( temp == 50, stack[j+2], temp ) temp = np.where( temp == 50, stack[j-3], temp ) temp = np.where( temp == 50, stack[j+3], temp ) temp = np.where( temp == 50, stack[j-4], temp ) temp = np.where( temp == 50, stack[j+4], temp ) except IndexError: print 'IndexError Passed' else: interpdata[j, :, :] = temp ```
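The repeated `np.where` lines follow a fixed pattern, so the same idea can be written as a loop over increasing offsets (a sketch, keeping `50` as the cloud value used above, and using bounds checks instead of catching `IndexError`):

```python
import numpy as np

CLOUD = 50  # cloud marker value, as in the np.where chain above

def fill_layer(stack, j, max_offset=4):
    """Fill cloud pixels in layer j from the nearest non-cloud neighbour
    layer, trying offsets -1, +1, -2, +2, ... up to max_offset."""
    temp = stack[j].copy()
    for d in range(1, max_offset + 1):
        for k in (j - d, j + d):
            if 0 <= k < len(stack):
                # only pixels still marked as cloud get replaced
                temp = np.where(temp == CLOUD, stack[k], temp)
    return temp
```

The whole stack is then processed with something like `interpdata = np.array([fill_layer(stack, j) for j in range(len(stack))])`.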
I would think that you could do something like: ``` data = somehow_get_your_3d_data() #indexed as [day_of_year,y,x] for i,dat in enumerate(data): weeks2 = data[max(i-7,0):min(i+8,len(data)), ... ] new_value = get_new_value(weeks2) #get value from weeks2 here somehow dat[dat == cloud_value] = new_value ```
71401616
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly. The Chinese characters are loaded in from a .JSON file, and are then printed using a Python program. The JSON entries look like this. ``` { "symbol": "我", "reading": "wo", "meaning": "I", "streak": 0 }, ``` The output in the console looks like this [VS Code console output](https://i.stack.imgur.com/UwPRi.png) And once the program has finished and dumps the info back into the JSON, it looks like this. ``` { "symbol": "\u00e6\u02c6\u2018", "reading": "wo", "meaning": "I", "streak": 0 } ``` Changing Language for non-Unicode programs to Chinese (simplified) didn't fix it. Using chcp 936 didn't fix the issue. The program is a .py file that is not being hosted online. The IDE is Visual Studio Code. The Python program is ``` import json #Import JSON file as an object with open('cards.json') as f: data = json.load(f) def main(): for card in data['cards']: #Store current meaning, reading and kanji in a local variable currentKanji = card['symbol'] currentReading = card['reading'] currentMeaning = card['meaning'] #Ask the user the meaning of the kanji inputMeaning = input(f'What is the meaning of {currentKanji}\n') #Check if the user's answer is correct if inputMeaning == currentMeaning: print("Was correct") else: print("Was incorrect") #Ask the user the reading of the kanji inputReading = input(f'What is the reading of {currentKanji}\n') #Check if the user's input is correct if inputReading == currentReading: print("Was Correct") else: print("Was incorrect") #If both answers correct, update the streak by one if (inputMeaning == currentMeaning) and (inputReading == currentReading): card['streak'] = card['streak'] + 1 print(card['streak']) #If one of the answers is incorrect, decrease the streak by one if not (inputMeaning == currentMeaning) or not (inputReading == currentReading): card['streak'] = card['streak'] - 1 main() #Reopen the JSON file and write new info into it. with open('cards.json', 'w') as f: json.dump(data,f,indent=2) ```
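For what it's worth, `\u00e6\u02c6\u2018` is exactly the UTF-8 byte sequence for 我 (E6 88 91) re-decoded in a legacy code page, which points to an encoding mismatch when the file is opened. A sketch of the likely fix (assumption: the file should be UTF-8 throughout; `ensure_ascii=False` keeps the characters literal in the output):

```python
import json

cards = {"cards": [{"symbol": "我", "reading": "wo", "meaning": "I", "streak": 0}]}

# Write the file explicitly as UTF-8, without escaping non-ASCII characters
with open("cards.json", "w", encoding="utf-8") as f:
    json.dump(cards, f, ensure_ascii=False, indent=2)

# Read it back explicitly as UTF-8
with open("cards.json", encoding="utf-8") as f:
    data = json.load(f)
print(data["cards"][0]["symbol"])  # 我
```

On Windows the console may additionally need a UTF-8 code page (`chcp 65001`) before the characters display correctly.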
2022/03/08
[ "https://Stackoverflow.com/questions/71401616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15465144/" ]
An example creating a precompiled header: ```sh mkdir -p pch/bits && g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/stdc++.h.gch \ /usr/include/c++/11/x86_64-redhat-linux/bits/stdc++.h ``` Check what you got: ```sh $ file pch/bits/stdc++.h.gch pch/bits/stdc++.h.gch: GCC precompiled header (version 014) for C++ $ ls -l pch/bits/stdc++.h.gch -rw-r--r-- 1 ted users 118741428 8 mar 23.27 pch/bits/stdc++.h.gch ``` A program using it: ```cpp #include <bits/stdc++.h> int main() { std::vector<int> foo{1, 2, 3}; for(auto v : foo) std::cout << v << '\n'; } ``` Example compilation (put `-Ipch` first of the `-I` directives): ```sh $ strace -f g++ -Ipch -O3 -std=c++20 -pedantic-errors -o program program.cpp 2>&1 | grep 'stdc++.h' [pid 13964] read(3, "#include <bits/stdc++.h>\n\nint ma"..., 122) = 122 [pid 13964] newfstatat(AT_FDCWD, "pch/bits/stdc++.h.gch", {st_mode=S_IFREG|0644, st_size=118741428, ...}, 0) = 0 [pid 13964] openat(AT_FDCWD, "pch/bits/stdc++.h.gch", O_RDONLY|O_NOCTTY) = 4 ```
Building on Ted's answer, I would actually do something like this (untested): my\_pch.h: ``` #include <bits/stdc++.h> // might need to specify the full path here ``` And then: ``` g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/my_pch.h.gch my_pch.h ``` And finally, your program would look like this: ``` #include "my_pch.h" int main() { // ... } ``` This means you don't need to put `#include <bits/stdc++.h>` directly in your source files, since that is a bit naughty. It also means you can add other include files to `my_pch.h` if you want them. I think, also, it wouldn't cost you anything to put, say, `#include <string>` after including `my_pch.h`, and that doing that sort of thing might be wise. If you're ever going to move the code into production you could then recompile it with `my_pch.h` empty. --- **Edit:** Another thing to consider (which I also can't test) is just to include the things you actually use (`string`, `vector`, whatever) in `my_pch.h`. That will probably pull in `bits/stdc++.h` anyway, when building the precompiled header. Make it a comprehensive list so that you don't need to have to keep adding to it. Then you have portable code and people won't keep beating you up about it.
71401616
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly. The Chinese characters are loaded in from a .JSON file, and are then printed using a Python program. The JSON entries look like this. ``` { "symbol": "我", "reading": "wo", "meaning": "I", "streak": 0 }, ``` The output in the console looks like this [VS Code console output](https://i.stack.imgur.com/UwPRi.png) And once the program has finished and dumps the info back into the JSON, it looks like this. ``` { "symbol": "\u00e6\u02c6\u2018", "reading": "wo", "meaning": "I", "streak": 0 } ``` Changing Language for non-Unicode programs to Chinese (simplified) didn't fix it. Using chcp 936 didn't fix the issue. The program is a .py file that is not being hosted online. The IDE is Visual Studio Code. The Python program is ``` import json #Import JSON file as an object with open('cards.json') as f: data = json.load(f) def main(): for card in data['cards']: #Store current meaning, reading and kanji in a local variable currentKanji = card['symbol'] currentReading = card['reading'] currentMeaning = card['meaning'] #Ask the user the meaning of the kanji inputMeaning = input(f'What is the meaning of {currentKanji}\n') #Check if the user's answer is correct if inputMeaning == currentMeaning: print("Was correct") else: print("Was incorrect") #Ask the user the reading of the kanji inputReading = input(f'What is the reading of {currentKanji}\n') #Check if the user's input is correct if inputReading == currentReading: print("Was Correct") else: print("Was incorrect") #If both answers correct, update the streak by one if (inputMeaning == currentMeaning) and (inputReading == currentReading): card['streak'] = card['streak'] + 1 print(card['streak']) #If one of the answers is incorrect, decrease the streak by one if not (inputMeaning == currentMeaning) or not (inputReading == currentReading): card['streak'] = card['streak'] - 1 main() #Reopen the JSON file and write new info into it. with open('cards.json', 'w') as f: json.dump(data,f,indent=2) ```
2022/03/08
[ "https://Stackoverflow.com/questions/71401616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15465144/" ]
An example creating a precompiled header: ```sh mkdir -p pch/bits && g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/stdc++.h.gch \ /usr/include/c++/11/x86_64-redhat-linux/bits/stdc++.h ``` Check what you got: ```sh $ file pch/bits/stdc++.h.gch pch/bits/stdc++.h.gch: GCC precompiled header (version 014) for C++ $ ls -l pch/bits/stdc++.h.gch -rw-r--r-- 1 ted users 118741428 8 mar 23.27 pch/bits/stdc++.h.gch ``` A program using it: ```cpp #include <bits/stdc++.h> int main() { std::vector<int> foo{1, 2, 3}; for(auto v : foo) std::cout << v << '\n'; } ``` Example compilation (put `-Ipch` first of the `-I` directives): ```sh $ strace -f g++ -Ipch -O3 -std=c++20 -pedantic-errors -o program program.cpp 2>&1 | grep 'stdc++.h' [pid 13964] read(3, "#include <bits/stdc++.h>\n\nint ma"..., 122) = 122 [pid 13964] newfstatat(AT_FDCWD, "pch/bits/stdc++.h.gch", {st_mode=S_IFREG|0644, st_size=118741428, ...}, 0) = 0 [pid 13964] openat(AT_FDCWD, "pch/bits/stdc++.h.gch", O_RDONLY|O_NOCTTY) = 4 ```
So, after getting help from @TedLyngmo's answer and doing a little more research, I decided to answer the question myself with clearer steps. **PS**: *This answer is most relevant to those who are using Sublime with a custom build file on Linux (Ubuntu).* > > 1. You need to find where the **stdc++.h** header file is, so open the terminal and use the command: > > > `find /usr/include -name 'stdc++.h'` > > > Output: > > > `/usr/include/x86_64-linux-gnu/c++/11/bits/stdc++.h` > 2. Go to the above location, open a terminal there, and now we are ready to precompile the **bits/stdc++.h** header file. > 3. Use the following command: > > > `sudo g++ -std=c++17 stdc++.h` > 4. You'll observe that a **stdc++.h.gch** file is now created, implying that precompiling is done. > > > **PS:** You need to use **sudo** as we need root privileges when **g++** writes the stdc++.h.gch file. **NOTE:** As I was using the **c++17** version in my custom build file, I specified c++17 here; you can use whatever version you are using. It worked perfectly for me, so I hope it helps you too!
71401616
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly. The Chinese characters are loaded in from a .JSON file, and are then printed using a Python program. The JSON entries look like this. ``` { "symbol": "我", "reading": "wo", "meaning": "I", "streak": 0 }, ``` The output in the console looks like this [VS Code console output](https://i.stack.imgur.com/UwPRi.png) And once the program has finished and dumps the info back into the JSON, it looks like this. ``` { "symbol": "\u00e6\u02c6\u2018", "reading": "wo", "meaning": "I", "streak": 0 } ``` Changing Language for non-Unicode programs to Chinese (simplified) didn't fix it. Using chcp 936 didn't fix the issue. The program is a .py file that is not being hosted online. The IDE is Visual Studio Code. The Python program is ``` import json #Import JSON file as an object with open('cards.json') as f: data = json.load(f) def main(): for card in data['cards']: #Store current meaning, reading and kanji in a local variable currentKanji = card['symbol'] currentReading = card['reading'] currentMeaning = card['meaning'] #Ask the user the meaning of the kanji inputMeaning = input(f'What is the meaning of {currentKanji}\n') #Check if the user's answer is correct if inputMeaning == currentMeaning: print("Was correct") else: print("Was incorrect") #Ask the user the reading of the kanji inputReading = input(f'What is the reading of {currentKanji}\n') #Check if the user's input is correct if inputReading == currentReading: print("Was Correct") else: print("Was incorrect") #If both answers correct, update the streak by one if (inputMeaning == currentMeaning) and (inputReading == currentReading): card['streak'] = card['streak'] + 1 print(card['streak']) #If one of the answers is incorrect, decrease the streak by one if not (inputMeaning == currentMeaning) or not (inputReading == currentReading): card['streak'] = card['streak'] - 1 main() #Reopen the JSON file and write new info into it. with open('cards.json', 'w') as f: json.dump(data,f,indent=2) ```
2022/03/08
[ "https://Stackoverflow.com/questions/71401616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15465144/" ]
Building on Ted's answer, I would actually do something like this (untested): my\_pch.h: ``` #include <bits/stdc++.h> // might need to specify the full path here ``` And then: ``` g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/my_pch.h.gch my_pch.h ``` And finally, your program would look like this: ``` #include "my_pch.h" int main() { // ... } ``` This means you don't need to put `#include <bits/stdc++.h>` directly in your source files, since that is a bit naughty. It also means you can add other include files to `my_pch.h` if you want them. I think, also, it wouldn't cost you anything to put, say, `#include <string>` after including `my_pch.h`, and that doing that sort of thing might be wise. If you're ever going to move the code into production you could then recompile it with `my_pch.h` empty. --- **Edit:** Another thing to consider (which I also can't test) is just to include the things you actually use (`string`, `vector`, whatever) in `my_pch.h`. That will probably pull in `bits/stdc++.h` anyway, when building the precompiled header. Make it a comprehensive list so that you don't need to have to keep adding to it. Then you have portable code and people won't keep beating you up about it.
So, after getting help from @TedLyngmo's answer and doing a little more research, I decided to answer the question myself with clearer steps. **PS**: *This answer is most relevant to those who are using Sublime with a custom build file on Linux (Ubuntu).* > > 1. You need to find where the **stdc++.h** header file is, so open the terminal and use the command: > > > `find /usr/include -name 'stdc++.h'` > > > Output: > > > `/usr/include/x86_64-linux-gnu/c++/11/bits/stdc++.h` > 2. Go to the above location, open a terminal there, and now we are ready to precompile the **bits/stdc++.h** header file. > 3. Use the following command: > > > `sudo g++ -std=c++17 stdc++.h` > 4. You'll observe that a **stdc++.h.gch** file is now created, implying that precompiling is done. > > > **PS:** You need to use **sudo** as we need root privileges when **g++** writes the stdc++.h.gch file. **NOTE:** As I was using the **c++17** version in my custom build file, I specified c++17 here; you can use whatever version you are using. It worked perfectly for me, so I hope it helps you too!
71401616
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly. The Chinese characters are loaded in from a .JSON file, and are then printed using a Python program. The JSON entries look like this. ``` { "symbol": "我", "reading": "wo", "meaning": "I", "streak": 0 }, ``` The output in the console looks like this [VS Code console output](https://i.stack.imgur.com/UwPRi.png) And once the program has finished and dumps the info back into the JSON, it looks like this. ``` { "symbol": "\u00e6\u02c6\u2018", "reading": "wo", "meaning": "I", "streak": 0 } ``` Changing Language for non-Unicode programs to Chinese (simplified) didn't fix it. Using chcp 936 didn't fix the issue. The program is a .py file that is not being hosted online. The IDE is Visual Studio Code. The Python program is ``` import json #Import JSON file as an object with open('cards.json') as f: data = json.load(f) def main(): for card in data['cards']: #Store current meaning, reading and kanji in a local variable currentKanji = card['symbol'] currentReading = card['reading'] currentMeaning = card['meaning'] #Ask the user the meaning of the kanji inputMeaning = input(f'What is the meaning of {currentKanji}\n') #Check if the user's answer is correct if inputMeaning == currentMeaning: print("Was correct") else: print("Was incorrect") #Ask the user the reading of the kanji inputReading = input(f'What is the reading of {currentKanji}\n') #Check if the user's input is correct if inputReading == currentReading: print("Was Correct") else: print("Was incorrect") #If both answers correct, update the streak by one if (inputMeaning == currentMeaning) and (inputReading == currentReading): card['streak'] = card['streak'] + 1 print(card['streak']) #If one of the answers is incorrect, decrease the streak by one if not (inputMeaning == currentMeaning) or not (inputReading == currentReading): card['streak'] = card['streak'] - 1 main() #Reopen the JSON file and write new info into it. with open('cards.json', 'w') as f: json.dump(data,f,indent=2) ```
2022/03/08
[ "https://Stackoverflow.com/questions/71401616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15465144/" ]
You **should not** `#include <bits/stdc++.h>` from the source code, for at least two reasons: * It's not portable, so it would need to be guarded by ugly `#ifdef`s. * Unlike GCC, Clang never uses PCHs for `#include`s. Instead you should include the PCH using the `-include` flag. This is the only option accepted by both GCC and Clang. 1. Precompile the header using `g++ -xc++-header /path/to/bits/stdc++.h -o stdc++.h.gch`. 2. Include it with `-include`: `g++ foo.cpp -include stdc++.h`. Notice that `-include` automatically appends `.gch` to the filename; the current directory doesn't have to contain `stdc++.h`. If there's no `.gch` file, `-include` falls back to including the regular non-PCH header, if it exists. If both exist in the include path, the PCH gets precedence.
Building on Ted's answer, I would actually do something like this (untested): my\_pch.h: ``` #include <bits/stdc++.h> // might need to specify the full path here ``` And then: ``` g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/my_pch.h.gch my_pch.h ``` And finally, your program would look like this: ``` #include "my_pch.h" int main() { // ... } ``` This means you don't need to put `#include <bits/stdc++.h>` directly in your source files, since that is a bit naughty. It also means you can add other include files to `my_pch.h` if you want them. I think, also, it wouldn't cost you anything to put, say, `#include <string>` after including `my_pch.h`, and that doing that sort of thing might be wise. If you're ever going to move the code into production you could then recompile it with `my_pch.h` empty. --- **Edit:** Another thing to consider (which I also can't test) is just to include the things you actually use (`string`, `vector`, whatever) in `my_pch.h`. That will probably pull in `bits/stdc++.h` anyway, when building the precompiled header. Make it a comprehensive list so that you don't need to have to keep adding to it. Then you have portable code and people won't keep beating you up about it.
71401616
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly. The Chinese characters are loaded in from a .JSON file, and are then printed using a Python program. The JSON entries look like this. ``` { "symbol": "我", "reading": "wo", "meaning": "I", "streak": 0 }, ``` The output in the console looks like this [VS Code console output](https://i.stack.imgur.com/UwPRi.png) And once the program has finished and dumps the info back into the JSON, it looks like this. ``` { "symbol": "\u00e6\u02c6\u2018", "reading": "wo", "meaning": "I", "streak": 0 } ``` Changing Language for non-Unicode programs to Chinese (simplified) didn't fix it. Using chcp 936 didn't fix the issue. The program is a .py file that is not being hosted online. The IDE is Visual Studio Code. The Python program is ``` import json #Import JSON file as an object with open('cards.json') as f: data = json.load(f) def main(): for card in data['cards']: #Store current meaning, reading and kanji in a local variable currentKanji = card['symbol'] currentReading = card['reading'] currentMeaning = card['meaning'] #Ask the user the meaning of the kanji inputMeaning = input(f'What is the meaning of {currentKanji}\n') #Check if the user's answer is correct if inputMeaning == currentMeaning: print("Was correct") else: print("Was incorrect") #Ask the user the reading of the kanji inputReading = input(f'What is the reading of {currentKanji}\n') #Check if the user's input is correct if inputReading == currentReading: print("Was Correct") else: print("Was incorrect") #If both answers correct, update the streak by one if (inputMeaning == currentMeaning) and (inputReading == currentReading): card['streak'] = card['streak'] + 1 print(card['streak']) #If one of the answers is incorrect, decrease the streak by one if not (inputMeaning == currentMeaning) or not (inputReading == currentReading): card['streak'] = card['streak'] - 1 main() #Reopen the JSON file and write new info into it. with open('cards.json', 'w') as f: json.dump(data,f,indent=2) ```
2022/03/08
[ "https://Stackoverflow.com/questions/71401616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15465144/" ]
You **should not** `#include <bits/stdc++.h>` from the source code, for at least two reasons: * It's not portable, so it would need to be guarded by ugly `#ifdef`s. * Unlike GCC, Clang never uses PCHs for `#include`s. Instead you should include the PCH using the `-include` flag. This is the only option accepted by both GCC and Clang. 1. Precompile the header using `g++ -xc++-header /path/to/bits/stdc++.h -o stdc++.h.gch`. 2. Include it with `-include`: `g++ foo.cpp -include stdc++.h`. Notice that `-include` automatically appends `.gch` to the filename; the current directory doesn't have to contain `stdc++.h`. If there's no `.gch` file, `-include` falls back to including the regular non-PCH header, if it exists. If both exist in the include path, the PCH gets precedence.
So, after getting help from @TedLyngmo's answer and doing a little more research, I decided to answer the question myself with clearer steps. **PS**: *This answer is most relevant to those who are using Sublime with a custom build file on Linux (Ubuntu).* > > 1. You need to find where the **stdc++.h** header file is, so open the terminal and use the command: > > > `find /usr/include -name 'stdc++.h'` > > > Output: > > > `/usr/include/x86_64-linux-gnu/c++/11/bits/stdc++.h` > 2. Go to the above location, open a terminal there, and now we are ready to precompile the **bits/stdc++.h** header file. > 3. Use the following command: > > > `sudo g++ -std=c++17 stdc++.h` > 4. You'll observe that a **stdc++.h.gch** file is now created, implying that precompiling is done. > > > **PS:** You need to use **sudo** as we need root privileges when **g++** writes the stdc++.h.gch file. **NOTE:** As I was using the **c++17** version in my custom build file, I specified c++17 here; you can use whatever version you are using. It worked perfectly for me, so I hope it helps you too!
17504570
So, I'm working on a simple project for an online course: making an image gallery using Python. The task is to create 3 buttons: Next, Previous and Quit. So far the Quit button works and Next loads a new image, but in a different window. I'm quite new to Python and GUI programming with Tkinter, and this is a big part of the beginner's course. So far my code looks like this and everything works, but I need help with HOW to make a Previous and a Next button. I've used the new() function so far, but it opens the image in a different window. I simply want to display 1 image and then click to the next image, with some simple text. ``` import Image import ImageTk import Tkinter root = Tkinter.Tk(); text = Tkinter.Text(root, width=50, height=15); myImage = ImageTk.PhotoImage(file='nesta.png'); def new(): wind = Tkinter.Toplevel() wind.geometry('600x600') imageFile2 = Image.open("signori.png") image2 = ImageTk.PhotoImage(imageFile2) panel2 = Tkinter.Label(wind , image=image2) panel2.place(relx=0.0, rely=0.0) wind.mainloop() master = Tkinter.Tk() master.geometry('600x600') B = Tkinter.Button(master, text = 'Previous picture', command = new).pack() B = Tkinter.Button(master, text = 'Quit', command = quit).pack() B = Tkinter.Button(master, text = 'Next picture', command = new).pack() master.mainloop() ```
2013/07/06
[ "https://Stackoverflow.com/questions/17504570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2475520/" ]
Change the image by setting the label's `image` item: `label['image'] = photoimage_obj`

```
import Image
import ImageTk
import Tkinter
import tkMessageBox  # needed for the "No more image" dialog

image_list = ['1.jpg', '2.jpg', '5.jpg']
text_list = ['apple', 'bird', 'cat']
current = 0

def move(delta):
    global current, image_list
    if not (0 <= current + delta < len(image_list)):
        tkMessageBox.showinfo('End', 'No more image.')
        return
    current += delta
    image = Image.open(image_list[current])
    photo = ImageTk.PhotoImage(image)
    label['text'] = text_list[current]
    label['image'] = photo
    label.photo = photo  # keep a reference so the image is not garbage-collected

root = Tkinter.Tk()

label = Tkinter.Label(root, compound=Tkinter.TOP)
label.pack()

frame = Tkinter.Frame(root)
frame.pack()

Tkinter.Button(frame, text='Previous picture', command=lambda: move(-1)).pack(side=Tkinter.LEFT)
Tkinter.Button(frame, text='Next picture', command=lambda: move(+1)).pack(side=Tkinter.LEFT)
Tkinter.Button(frame, text='Quit', command=root.quit).pack(side=Tkinter.LEFT)

move(0)

root.mainloop()
```
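The bounds check inside `move()` can be exercised without opening a Tk window. Below is a minimal sketch of the same index logic; the helper name `move_index` is mine, not part of the answer's code:

```python
image_list = ['1.jpg', '2.jpg', '5.jpg']

def move_index(current, delta, items):
    """Return the new index, or None when the move would fall off either end."""
    new = current + delta
    if 0 <= new < len(items):
        return new
    return None  # the GUI shows the "No more image." dialog in this case

# Clicking "Next" twice walks to the last image; a third click is rejected,
# and "Previous" from the first image is rejected too.
assert move_index(0, +1, image_list) == 1
assert move_index(1, +1, image_list) == 2
assert move_index(2, +1, image_list) is None
assert move_index(0, -1, image_list) is None
```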
My UI is not great, but the logic works well; I tested it. You can change the UI. How it works: first browse for a file; when you click Open, it displays the image and also builds a list of the images that are in the selected image folder. I handle only '.png' and '.jpg' files; if you want to add more extensions, you can add them in `path_func()`.

```
from tkinter import *
from PIL import Image, ImageTk
from tkinter import filedialog
import os

class ImageViewer:
    def __init__(self, root):
        self.root = root
        self.root.title("Photo Viewer")
        self.root.geometry("1360x750")
        self.root.config(bg = "lightblue")

        menus = Menu(self.root)
        self.root.config(menu = menus)
        file_menu = Menu(menus)
        menus.add_cascade(label = "File", menu = file_menu)
        file_menu.add_command(label = "Open", command = self.open_dialog)
        file_menu.add_separator()
        file_menu.add_command(label = "Previous", command = self.previous_img)
        file_menu.add_command(label = "Next", command = self.next_img)
        file_menu.add_separator()
        file_menu.add_command(label = "Exit", command = self.root.destroy)

        self.label = Label(self.root, text = "Open a image using open menu",
                           font = ("Helvetica", 15), foreground = "#0000FF",
                           background = "lightblue")
        self.label.grid(row = 0, column = 0, columnspan = 4)
        self.buttons()

    def path_func(self, path):
        l = []
        self.path = path.split('/')
        self.path.pop()
        self.path = '/'.join([str(x) for x in self.path])
        #print(self.path)
        for file in os.listdir(self.path):
            if file.endswith('.jpg') or file.endswith('.png'):
                l.append(file)
        #print(l)

        def join(file):
            os.chdir(self.path)
            #print(os.getcwd())
            cwd = os.getcwd().replace('\\', '/')
            #print(cwd)
            f = cwd + '/' + file
            #print(f)
            return f

        global file_list
        file_list = list(map(join, l))
        #print(file_list)

    def open_dialog(self):
        global file_name
        file_name = filedialog.askopenfilename(initialdir = "C:/Users/elcot/Pictures",
                                               title = "Open file")
        #print(file_name)
        self.view_image(file_name)
        self.path_func(file_name)
        '''except:
            label = Label(self.root, text = "Select a file to open")
            label.grid(row = 4, column = 1)'''

    def view_image(self, filename):
        try:
            self.label.destroy()
            global img
            img = Image.open(filename)
            img = img.resize((1360, 650))
            img = ImageTk.PhotoImage(img)
            #print(img)
            show_pic = Label(self.root, image = img)
            show_pic.grid(row = 1, column = 0, columnspan = 3)
        except:
            pass

    def buttons(self):
        open_button = Button(self.root, text = "Browse", command = self.open_dialog,
                             background = "lightblue")
        open_button.grid(row = 1, column = 1)
        previous_button = Button(self.root, text = "Previous", command = self.previous_img,
                                 background = "lightblue", width = 25)
        previous_button.grid(row = 3, column = 0, pady = 10)
        empty = Label(self.root, text = " ", background = "lightblue")
        empty.grid(row = 3, column = 1)
        next_button = Button(self.root, text = "Next", command = self.next_img,
                             background = "lightblue", width = 25)
        next_button.grid(row = 3, column = 2)

    def previous_img(self):
        global file_name
        #print(file_list)
        index = file_list.index(file_name)
        #print(index)
        curr = file_list[index - 1]
        #print(curr)
        self.view_image(curr)
        file_name = curr

    def next_img(self):
        global file_name
        index = file_list.index(file_name)
        #print(index)
        if index == len(file_list) - 1:
            index = -1
            curr = file_list[index + 1]
            #print(curr)
            self.view_image(curr)
            file_name = curr
        else:
            curr = file_list[index + 1]
            #print(curr)
            self.view_image(curr)
            file_name = curr

if __name__ == "__main__":
    root = Tk()
    gallery = ImageViewer(root)
    root.mainloop()
```
64,774,439
I'm trying to install some packages, and for some reason I can't for the life of me make it happen. My setup is that I'm using PyCharm on Windows with Conda. I'm having these problems with all the non-standard packages I'd like to install (things like numpy install just fine), but for reference I'll use [this](https://github.com/wmayner/pyphi) package. I'll detail all the methods I've tried:

1. Going `File>Settings>Python Interpreter> Install` and then searching for pyphy - it's not found.
2. Adding the URL in the link above as a repository in PyCharm, and searching again for packages; again not found.
3. Downloading the .tar.gz from the above GitHub, and adding the folder containing that as a repository in PyCharm, then searching for `pyphi` -- again, nothing found.
4. From the terminal in PyCharm, `pip install pyphi`, which gives `ERROR: Could not find a version that satisfies the requirement pyphy (from versions: none)`
5. From the terminal in PyCharm, `conda install -c wmayner pyphi`, which gives a very long error report.

   EDIT: the long error message is [1](https://i.imgur.com/0JpWdRy.png), [2](https://i.imgur.com/ZCMckBv.png), [3](https://i.imgur.com/nOL9EYe.png), [4](https://i.imgur.com/W0QvuaA.png), [5](https://i.imgur.com/P4CTwmY.png).

   EDIT 2: The error message is now included at the end of this post, as text.
6. With the `.tar.gz` file in the working directory, trying `conda install -c local pyphi-1.2.0.tar.gz`, which gives

   ```
   Solving environment: failed with initial frozen solve. Retrying with flexible solve.

   PackagesNotFoundError: The following packages are not available from current channels:

     - pyphi-1.2.0.tar.gz
   ```
7. Putting the `.tar.gz` file in the directory that the conda virtual environment is installed in, then directing the terminal there, and trying `python pip install pyphi-1.2.0.tar.gz`.

What am I doing wrong?
ERROR MESSAGE FROM APPROACH 5: ``` (MyPy) C:\Users\Dan Goldwater\Dropbox\Nottingham\GPT NNs\MyPy>conda install -c wmayner pyphi Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: - Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. Examining @/win-64::__cuda==11.0=0: 33%|█████████████████████████████████████████████████████▎ | 1/3 [00:00<00:00, 3.42it/s]/ failed # >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): File "C:\Anaconda\lib\site-packages\conda\cli\install.py", line 265, in install should_retry_solve=(_should_retry_unfrozen or repodata_fn != repodata_fns[-1]), File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 117, in solve_for_transaction should_retry_solve) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 158, in solve_for_diff force_remove, should_retry_solve) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 696, in _add_specs raise UnsatisfiableError({}) conda.exceptions.UnsatisfiableError: Did not find conflicting dependencies. If you would like to know which packages conflict ensure that you have enabled unsatisfiable hints. 
conda config --set unsatisfiable_hints True During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Anaconda\lib\site-packages\conda\exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "C:\Anaconda\lib\site-packages\conda\cli\main.py", line 84, in _main exit_code = do_call(args, p) File "C:\Anaconda\lib\site-packages\conda\cli\conda_argparse.py", line 82, in do_call return getattr(module, func_name)(args, parser) File "C:\Anaconda\lib\site-packages\conda\cli\main_install.py", line 20, in execute install(args, parser, 'install') File "C:\Anaconda\lib\site-packages\conda\cli\install.py", line 299, in install should_retry_solve=(repodata_fn != repodata_fns[-1]), File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 117, in solve_for_transaction should_retry_solve) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 158, in solve_for_diff force_remove, should_retry_solve) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 694, in _add_specs ssc.r.find_conflicts(spec_set) File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 347, in find_conflicts bad_deps = self.build_conflict_map(specs, specs_to_add, history_specs) File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 507, in build_conflict_map root, search_node, dep_graph, num_occurances) File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 369, in breadth_first_search_for_dep_graph last_spec = MatchSpec.union((path[-1], target_paths[-1][-1]))[0] File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 481, in union return cls.merge(match_specs, union=True) File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 475, in merge reduce(lambda x, y: x._merge(y, union), group) if len(group) > 1 else group[0] File 
"C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 475, in <lambda> reduce(lambda x, y: x._merge(y, union), group) if len(group) > 1 else group[0] File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 502, in _merge final = this_component.union(that_component) File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 764, in union return '|'.join(options) TypeError: sequence item 0: expected str instance, Channel found `$ C:\Anaconda\Scripts\conda-script.py install -c wmayner pyphi` environment variables: CIO_TEST=<not set> CONDA_DEFAULT_ENV=MyPy CONDA_EXE=C:\Anaconda\condabin\..\Scripts\conda.exe CONDA_EXES="C:\Anaconda\condabin\..\Scripts\conda.exe" CONDA_PREFIX=C:\Anaconda\envs\MyPy CONDA_PROMPT_MODIFIER=(MyPy) CONDA_ROOT=C:\Anaconda CONDA_SHLVL=1 CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0 DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox HOMEPATH=\Users\Dan Goldwater MOZ_PLUGIN_PATH=C:\Program Files (x86)\Foxit Software\Foxit Reader\plugins\ NVTOOLSEXT_PATH=C:\Program Files\NVIDIA Corporation\NvToolsExt\ PATH=C:\Anaconda;C:\Anaconda\Library\mingw-w64\bin;C:\Anaconda\Library\usr\ bin;C:\Anaconda\Library\bin;C:\Anaconda\Scripts;C:\Anaconda\bin;C:\Ana conda\envs\MyPy;C:\Anaconda\envs\MyPy\Library\mingw-w64\bin;C:\Anacond a\envs\MyPy\Library\usr\bin;C:\Anaconda\envs\MyPy\Library\bin;C:\Anaco nda\envs\MyPy\Scripts;C:\Anaconda\envs\MyPy\bin;C:\Anaconda\condabin;C :\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\libnvvp;.;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\WI NDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32 \WindowsPowerShell\v1.0;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program 
Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\MiKTeX 2.9\miktex\bin\x64;C:\Program Files\MATLAB\R2017a\runtime\win64;C:\Program Files\MATLAB\R2017a\bin;C:\Program Files\Calibre2;C:\Program Files\PuTTY;C:\Program Files (x86)\OpenSSH\bin;C:\Program Files\Git\cmd;C:\Program Files\Golem;C:\WINDOWS\System32\OpenSSH;C:\Program Files\Microsoft VS Code\bin;C:\Program Files (x86)\Wolfram Research\WolframScript;C:\Program Files\NVIDIA Corporation\Nsight Compute 2020.1.2;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA Nv DLISR;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDO WS\System32\WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Prog ram Files\dotnet;C:\Users\Dan Goldwater\AppData\Local\Programs\Python\ Python38-32\Scripts;C:\Users\Dan Goldwater\AppData\Local\Programs\Python\Python38-32;C:\Users\Dan Goldwater\AppData\Local\Microsoft\WindowsApps;.;C:\Program Files\Microsoft VS Code\bin;C:\Program Files\Docker Toolbox;C:\texlive\2018\bin\win32;C:\Users\Dan Goldwater\AppData\Local\Pandoc;C:\Program Files\JetBrains\PyCharm Community Edition 2020.1.2\bin;. 
PSMODULEPATH=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\ REQUESTS_CA_BUNDLE=<not set> SSL_CERT_FILE=<not set> VBOX_MSI_INSTALL_PATH=C:\Program Files\Oracle\VirtualBox\ active environment : MyPy active env location : C:\Anaconda\envs\MyPy shell level : 1 user config file : C:\Users\Dan Goldwater\.condarc populated config files : C:\Users\Dan Goldwater\.condarc conda version : 4.8.2 conda-build version : 3.18.11 python version : 3.7.6.final.0 virtual packages : __cuda=11.0 base environment : C:\Anaconda (writable) channel URLs : https://conda.anaconda.org/wmayner/win-64 https://conda.anaconda.org/wmayner/noarch https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : C:\Anaconda\pkgs C:\Users\Dan Goldwater\.conda\pkgs C:\Users\Dan Goldwater\AppData\Local\conda\conda\pkgs envs directories : C:\Anaconda\envs C:\Users\Dan Goldwater\.conda\envs C:\Users\Dan Goldwater\AppData\Local\conda\conda\envs platform : win-64 user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Windows/10 Windows/10.0.19041 administrator : False netrc file : None offline mode : False An unexpected error has occurred. Conda has prepared the above report. ```
2020/11/10
[ "https://Stackoverflow.com/questions/64774439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5559681/" ]
You should do something like [installing a package from a git repo](https://stackoverflow.com/questions/20101834/pip-install-from-git-repo-branch), but for conda (replace pip with conda, and provide a valid URL). This is not a PyPI package, so it is not known to pip by default.
In response to @buran's comments above, I updated conda using `conda update --name base conda`

Then used the recommended install for Windows, with

```
conda install -c wmayner pyphi
```

which now gives the error:

```
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:

Specifications:

  - pyphi -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']

Your python: python=3.8

If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
```

So at least it's clear why it won't install, and I can create another environment with another version of Python to house it. Kudos to @buran!
41,850,809
```
import datetime
from nltk_contrib import timex

now = datetime.date.today()
basedate = timex.Date(now.year, now.month, now.day)
print timex.ground(timex.tag("Hai i would like to go to mumbai 22nd of next month"), basedate)
print str(datetime.date.day)
```

When I try to run the above code, I get the following error:

```
File "/usr/local/lib/python2.7/dist-packages/nltk_contrib/timex.py", line 250, in ground
    elif re.match(r'last ' + month, timex, re.IGNORECASE):
UnboundLocalError: local variable 'month' referenced before assignment
```

What should I do to rectify this error?
2017/01/25
[ "https://Stackoverflow.com/questions/41850809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7334656/" ]
The `timex` module has a bug where a global variable is referenced without assignment in the `ground` function. To fix the bug, add the following code, which should start at line 171:

```
def ground(tagged_text, base_date):
    # Find all identified timex and put them into a list
    timex_regex = re.compile(r'<TIMEX2>.*?</TIMEX2>', re.DOTALL)
    timex_found = timex_regex.findall(tagged_text)
    timex_found = map(lambda timex: re.sub(r'</?TIMEX2.*?>', '', timex), \
                      timex_found)

    # Calculate the new date accordingly
    for timex in timex_found:
        global month  # <--- here is the global reference assignment
```
The solution above, adding `month` as a global variable, causes other problems when timex is called multiple times in a row, because the variables are not reset unless you import the module again. This happens for me in a deployed environment in AWS Lambda. A solution that isn't super pretty, but will not cause problems, is just to set the `month` value again in the `ground` function:

```
def ground(tagged_text, base_date):
    # Find all identified timex and put them into a list
    timex_regex = re.compile(r'<TIMEX2>.*?</TIMEX2>', re.DOTALL)
    timex_found = timex_regex.findall(tagged_text)
    timex_found = map(lambda timex: re.sub(r'</?TIMEX2.*?>', '', timex), \
                      timex_found)

    # Calculate the new date accordingly
    for timex in timex_found:
        month = "(january|february|march|april|may|june|july|august|september| \
                 october|november|december)"  # <--- reset month to the value it is set to upon import
```
8,399,341
I can't find any tutorial for jQuery + web.py, so I've got a basic question on the POST method. I've got this jQuery script:

```
<script>
jQuery('#continue').click(function() {
    var command = jQuery('#continue').attr('value');
    jQuery.ajax({
        type: "POST",
        data: {signal : command},
    });
});
</script>
```

A simple form:

```
<form>
    <button type="submit">Cancel</button>
    <button type="submit" id="continue" value="next">Continue</button>
</form>
```

and a Python script:

```
def POST(self):
    s = signal
    print s
    return
```

I expect to see the string "next" in the console, but it doesn't happen. I have a strong feeling that the selectors somehow do not work. Is there any way to check it?
2011/12/06
[ "https://Stackoverflow.com/questions/8399341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1073589/" ]
you need to use web.input in web.py to access POST variables

look at the docs: <http://webpy.org/docs/0.3/api> (search for "function input")

```
def POST(self):
    s = web.input().signal
    print s
    return
```
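`web.input()` is web.py's wrapper around the posted form data. The same urlencoded body can be decoded with the standard library, which makes it easy to see what the jQuery call actually sends. This is only an illustration of the parsing step, not how web.py works internally, and it uses Python 3 syntax unlike the Python 2 answer above:

```python
from urllib.parse import parse_qs

# what jQuery serializes from data: {signal: command} when command is "next"
body = "signal=next"

fields = parse_qs(body)
print(fields["signal"][0])  # next
```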
```
<script>
jQuery('#continue').click(function() {
    var command = jQuery('#continue').attr('value');
    jQuery.ajax({
        type: "POST",
        data: {signal : command},
        url: "add the url here"
    });
});
</script>
```

**add the url of the server.**
42,237,103
I'm writing a python program for the purpose of studying HTML source code used in different countries. I'm testing in a UNIX shell. The code I have so far works fine, except that I'm getting [HTTP Error 403: Forbidden](https://i.stack.imgur.com/9gzqG.png). Through testing it line by line, I know it has something to do with line 27 (`url3response = urllib2.urlopen(url3)` and `url3Content = url3response.read()`). Every other URL response works fine except this one. Any ideas???

Here is the text file I'm reading from (top5_US.txt):

```
http://www.caltech.edu
http://www.stanford.edu
http://www.harvard.edu
http://www.mit.edu
http://www.princeton.edu
```

And here is my code:

```
import urllib2

#Open desired text file (In this case, "top5_US.txt")
text_file = open('top5_US.txt', 'r')

#Read each line of the text file
firstLine = text_file.readline().strip()
secondLine = text_file.readline().strip()
thirdLine = text_file.readline().strip()
fourthLine = text_file.readline().strip()
fifthLine = text_file.readline().strip()

#Turn each line into a URL variable
url1 = firstLine
url2 = secondLine
url3 = thirdLine
url4 = fourthLine
url5 = fifthLine

#Read URL 1, get content, and store it in a variable.
url1response = urllib2.urlopen(url1)
url1Content = url1response.read()

#Read URL 2, get content, and store it in a variable.
url2response = urllib2.urlopen(url2)
url2Content = url2response.read()

#Read URL 3, get content, and store it in a variable.
url3response = urllib2.urlopen(url3)
url3Content = url3response.read()

#Read URL 4, get content, and store it in a variable.
url4response = urllib2.urlopen(url4)
url4Content = url4response.read()

#Read URL 5, get content, and store it in a variable.
url5response = urllib2.urlopen(url5)
url5Content = url5response.read()

text_file.close()
```
2017/02/14
[ "https://Stackoverflow.com/questions/42237103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6115937/" ]
With jQuery you need to also provide the other transformation:

```
//Image Category size change effect
$('.cat-wrap div').hover(function() {
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    $(this).css('width', '19.2%');
    $(this).children().css('opacity', '0');
    $(this).siblings().css("width", "19.2%");
});
```
For it to revert back, you need to add the handler for the mouseout event. This is simply passing a second callback argument to the `hover()` method.

```
$('.cat-wrap div').hover(function() {
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    // mouse out function
});
```
42,237,103
I'm writing a python program for the purpose of studying HTML source code used in different countries. I'm testing in a UNIX shell. The code I have so far works fine, except that I'm getting [HTTP Error 403: Forbidden](https://i.stack.imgur.com/9gzqG.png). Through testing it line by line, I know it has something to do with line 27 (`url3response = urllib2.urlopen(url3)` and `url3Content = url3response.read()`). Every other URL response works fine except this one. Any ideas???

Here is the text file I'm reading from (top5_US.txt):

```
http://www.caltech.edu
http://www.stanford.edu
http://www.harvard.edu
http://www.mit.edu
http://www.princeton.edu
```

And here is my code:

```
import urllib2

#Open desired text file (In this case, "top5_US.txt")
text_file = open('top5_US.txt', 'r')

#Read each line of the text file
firstLine = text_file.readline().strip()
secondLine = text_file.readline().strip()
thirdLine = text_file.readline().strip()
fourthLine = text_file.readline().strip()
fifthLine = text_file.readline().strip()

#Turn each line into a URL variable
url1 = firstLine
url2 = secondLine
url3 = thirdLine
url4 = fourthLine
url5 = fifthLine

#Read URL 1, get content, and store it in a variable.
url1response = urllib2.urlopen(url1)
url1Content = url1response.read()

#Read URL 2, get content, and store it in a variable.
url2response = urllib2.urlopen(url2)
url2Content = url2response.read()

#Read URL 3, get content, and store it in a variable.
url3response = urllib2.urlopen(url3)
url3Content = url3response.read()

#Read URL 4, get content, and store it in a variable.
url4response = urllib2.urlopen(url4)
url4Content = url4response.read()

#Read URL 5, get content, and store it in a variable.
url5response = urllib2.urlopen(url5)
url5Content = url5response.read()

text_file.close()
```
2017/02/14
[ "https://Stackoverflow.com/questions/42237103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6115937/" ]
Here is an example of the javascript you are looking for:

```js
var origWidth;
var origOpacity;

//Image Category size change effect
$('.cat-wrap div').hover(function() {
    origWidth = $(this).css('width');
    origOpacity = $(this).children().css('opacity');
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    $(this).css('width', origWidth);
    $(this).children().css('opacity', origOpacity);
});
```
For it to revert back, you need to add the handler for the mouseout event. This is simply passing a second callback argument to the `hover()` method.

```
$('.cat-wrap div').hover(function() {
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    // mouse out function
});
```
42,237,103
I'm writing a python program for the purpose of studying HTML source code used in different countries. I'm testing in a UNIX shell. The code I have so far works fine, except that I'm getting [HTTP Error 403: Forbidden](https://i.stack.imgur.com/9gzqG.png). Through testing it line by line, I know it has something to do with line 27 (`url3response = urllib2.urlopen(url3)` and `url3Content = url3response.read()`). Every other URL response works fine except this one. Any ideas???

Here is the text file I'm reading from (top5_US.txt):

```
http://www.caltech.edu
http://www.stanford.edu
http://www.harvard.edu
http://www.mit.edu
http://www.princeton.edu
```

And here is my code:

```
import urllib2

#Open desired text file (In this case, "top5_US.txt")
text_file = open('top5_US.txt', 'r')

#Read each line of the text file
firstLine = text_file.readline().strip()
secondLine = text_file.readline().strip()
thirdLine = text_file.readline().strip()
fourthLine = text_file.readline().strip()
fifthLine = text_file.readline().strip()

#Turn each line into a URL variable
url1 = firstLine
url2 = secondLine
url3 = thirdLine
url4 = fourthLine
url5 = fifthLine

#Read URL 1, get content, and store it in a variable.
url1response = urllib2.urlopen(url1)
url1Content = url1response.read()

#Read URL 2, get content, and store it in a variable.
url2response = urllib2.urlopen(url2)
url2Content = url2response.read()

#Read URL 3, get content, and store it in a variable.
url3response = urllib2.urlopen(url3)
url3Content = url3response.read()

#Read URL 4, get content, and store it in a variable.
url4response = urllib2.urlopen(url4)
url4Content = url4response.read()

#Read URL 5, get content, and store it in a variable.
url5response = urllib2.urlopen(url5)
url5Content = url5response.read()

text_file.close()
```
2017/02/14
[ "https://Stackoverflow.com/questions/42237103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6115937/" ]
With jQuery you need to also provide the other transformation:

```
//Image Category size change effect
$('.cat-wrap div').hover(function() {
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    $(this).css('width', '19.2%');
    $(this).children().css('opacity', '0');
    $(this).siblings().css("width", "19.2%");
});
```
Here is an example of the javascript you are looking for:

```js
var origWidth;
var origOpacity;

//Image Category size change effect
$('.cat-wrap div').hover(function() {
    origWidth = $(this).css('width');
    origOpacity = $(this).children().css('opacity');
    $(this).css('width', '30%');
    $(this).children().css('opacity', '1');
    $(this).siblings().css("width", "16.5%");
}, function() {
    $(this).css('width', origWidth);
    $(this).children().css('opacity', origOpacity);
});
```
44,181,879
I believe that I have installed virtualenvwrapper incorrectly (the perils of following different tutorials for Python setup). I would like to remove the extension completely from my Mac OS X system, but there seems to be no documentation on how to do this. Does anyone know how to completely reverse the installation? It's wreaking havoc with my attempts to compile Python scripts.
2017/05/25
[ "https://Stackoverflow.com/questions/44181879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7442842/" ]
``` pip uninstall virtualenvwrapper ``` Or ``` sudo pip uninstall virtualenvwrapper ``` worked for me.
On Windows this works great:

```
pip uninstall virtualenvwrapper-win
```
38,129,357
In the python difflib library, is the SequenceMatcher class behaving unexpectedly, or am I misreading what the supposed behavior is? Why does the isjunk argument seem to not make any difference in this case?

```
difflib.SequenceMatcher(None, "AA", "A A").ratio()
returns 0.8
difflib.SequenceMatcher(lambda x: x in ' ', "AA", "A A").ratio()
returns 0.8
```

My understanding is that if space is omitted, the ratio should be 1.
2016/06/30
[ "https://Stackoverflow.com/questions/38129357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5839052/" ]
This is happening because the `ratio` function uses the total sequences' length while calculating the ratio, **but it doesn't filter elements using `isjunk`**. So, as long as the number of matches in the matching blocks results in the same value (with and without `isjunk`), the ratio measure will be the same. I assume that sequences are not filtered by `isjunk` for performance reasons.

```python
def ratio(self):
    """Return a measure of the sequences' similarity (float in [0,1]).

    Where T is the total number of elements in both sequences, and
    M is the number of matches, this is 2.0*M / T.
    """
    matches = sum(triple[-1] for triple in self.get_matching_blocks())
    return _calculate_ratio(matches, len(self.a) + len(self.b))
```

`self.a` and `self.b` are the strings (sequences) passed to the SequenceMatcher object ("AA" and "A A" in your example). The `isjunk` function `lambda x: x in ' '` is only used to determine the matching blocks.

Your example is quite simple, so the resulting ratio and matching blocks are the same for both calls.
```python
difflib.SequenceMatcher(None, "AA", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=2, b=3, size=0)]

difflib.SequenceMatcher(lambda x: x == ' ', "AA", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=2, b=3, size=0)]
```

*Same matching blocks, so the ratio is*: `M = 2, T = 5 => ratio = 2.0 * 2 / 5`

**Now consider the following example**:

```python
difflib.SequenceMatcher(None, "AA ", "A A").get_matching_blocks()
[Match(a=1, b=0, size=2), Match(a=3, b=3, size=0)]

difflib.SequenceMatcher(lambda x: x == ' ', "AA ", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=3, b=3, size=0)]
```

*Now the matching blocks are different, but the ratio will be the same because the number of matches is still equal*:

*When `isjunk` is None*: `M = 2, T = 6 => ratio = 2.0 * 2 / 6`

*When `isjunk` is* `lambda x: x == ' '`: `M = 1 + 1, T = 6 => ratio = 2.0 * 2 / 6`

**Finally, a different number of matches:**

```python
difflib.SequenceMatcher(None, "AA ", "A A ").get_matching_blocks()
[Match(a=1, b=0, size=2), Match(a=3, b=4, size=0)]

difflib.SequenceMatcher(lambda x: x == ' ', "AA ", "A A ").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=2), Match(a=3, b=4, size=0)]
```

*The number of matches is different*:

*When `isjunk` is None*: `M = 2, T = 7 => ratio = 2.0 * 2 / 7`

*When `isjunk` is* `lambda x: x == ' '`: `M = 1 + 2, T = 7 => ratio = 2.0 * 3 / 7`
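The `2.0*M / T` arithmetic can be checked directly against the standard library; for "AA" vs "A A", `M = 2` matched characters and `T = len("AA") + len("A A") = 5`:

```python
import difflib

a, b = "AA", "A A"
sm = difflib.SequenceMatcher(None, a, b)

# M: matched elements summed over the matching blocks; T: total length
m = sum(size for _, _, size in sm.get_matching_blocks())
t = len(a) + len(b)

print(m, t, sm.ratio())  # 2 5 0.8
assert sm.ratio() == 2.0 * m / t

# supplying isjunk changes neither M nor T here, so the ratio stays 0.8
sm_junk = difflib.SequenceMatcher(lambda x: x == ' ', a, b)
assert sm_junk.ratio() == sm.ratio()
```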
You can remove the characters from the string before sequencing it:

```
def withoutJunk(input, chars):
    return input.translate(str.maketrans('', '', chars))

a = withoutJunk('AA', ' ')
b = withoutJunk('A A', ' ')

difflib.SequenceMatcher(None, a, b).ratio() # -> 1.0
```
58,674,723
I have a new MacBook with fresh installs of everything, which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing:

```
thewizard@Special-MacBook-Pro ~ % pyenv versions
  system
* 3.8.0 (set by /Usersthewizard/.python-version)
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin
thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
if command -v pyenv 1>/dev/null 2>&1; then
  eval "$(pyenv init -)"
fi
```

BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong:

```
thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv
total 8
drwxr-xr-x    5 thewizard  staff  160 Nov  2 15:03 .
drwxr-xr-x+  22 thewizard  staff  704 Nov  2 15:36 ..
drwxr-xr-x   22 thewizard  staff  704 Nov  2 15:03 shims
-rw-r--r--    1 thewizard  staff    6 Nov  2 15:36 version
drwxr-xr-x    3 thewizard  staff   96 Nov  2 15:01 versions
```

It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
I added the following to my ~/.zprofile and got it working. ``` export PYENV_ROOT="$HOME/.pyenv/versions/3.7.3" export PATH="$PYENV_ROOT/bin:$PATH" ```
Catalina (and macOS in general) uses `/etc/zprofile` to set the `$PATH` ahead of what you're specifying in your local dotfiles. It uses the `path_helper` utility to build the `$PATH`, and I suspect this is overriding the shim injection in your local dotfiles.

You can comment out the following lines in `/etc/zprofile` (note that this change will be overwritten by subsequent OS updates):

```
# if [ -x /usr/libexec/path_helper ]; then
#     eval `/usr/libexec/path_helper -s`
# fi
```

Alternately, and less intrusively, you can unset the `GLOBAL_RCS` option (add `unsetopt GLOBAL_RCS` to your personal `.zshenv` file), which suppresses sourcing all of the system default RC files for zsh and allows the pyenv shims to operate as intended.
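Whichever fix you apply, you can confirm from Python itself which entry wins PATH resolution (pure stdlib, independent of the shell); for pyenv to take effect, a `.pyenv/shims` entry must appear before `/usr/bin`:

```python
import os
import shutil

def path_entries():
    """Return the PATH entries in the order the shell resolves them."""
    return os.environ.get("PATH", "").split(os.pathsep)

for entry in path_entries():
    print(entry)

# Which python binary actually wins on this PATH (None if not found):
print(shutil.which("python"))
```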
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
If you're using `pyenv` with `pipenv` and encountering the same issue, you can add the following lines to your `.zshrc` or `.zprofile` file: ```sh export PYENV_ROOT="$HOME/.pyenv/shims" export PATH="$PYENV_ROOT:$PATH" export PIPENV_PYTHON="$PYENV_ROOT/python" ``` Referencing `pyenv`'s `/shims` folder helps to keep it more general and to allow you to easily switch between different Python versions, should you have more than one installed. `pipenv` will then always reference the version of Python that is currently set as global by `pyenv`.
I added the following to my ~/.zprofile and got it working. ``` export PYENV_ROOT="$HOME/.pyenv/versions/3.7.3" export PATH="$PYENV_ROOT/bin:$PATH" ```
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
I added the following to my ~/.zprofile and got it working. ``` export PYENV_ROOT="$HOME/.pyenv/versions/3.7.3" export PATH="$PYENV_ROOT/bin:$PATH" ```
Check whether any symbolic links exist under your account root:

```
ls -al .pyenv/versions/x.x.x/bin
```

If you don't have the symlink files:

```
unset CLICOLOR
unset CLICOLOR_FORCE
unset LSCOLORS
unalias ls
```

and try installing Python again with pyenv.
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
I added the following to my ~/.zprofile and got it working. ``` export PYENV_ROOT="$HOME/.pyenv/versions/3.7.3" export PATH="$PYENV_ROOT/bin:$PATH" ```
I think the issue is due to the default HD partitions that might be causing confusion. "With macOS Catalina, you can no longer store files or data in the read-only system volume, nor can you write to the "root" directory ( / ) from the command line, such as with Terminal" (<https://support.apple.com/en-ca/HT210650>). I had the same issues on a MacBook Pro and an iMac, which forced me to perform factory resets. I've given up on pyenv and decided to go with Anaconda to manage Python versions.
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
If you're using `pyenv` with `pipenv` and encountering the same issue, you can add the following lines to your `.zshrc` or `.zprofile` file: ```sh export PYENV_ROOT="$HOME/.pyenv/shims" export PATH="$PYENV_ROOT:$PATH" export PIPENV_PYTHON="$PYENV_ROOT/python" ``` Referencing `pyenv`'s `/shims` folder helps to keep it more general and to allow you to easily switch between different Python versions, should you have more than one installed. `pipenv` will then always reference the version of Python that is currently set as global by `pyenv`.
Catalina (and macOS in general) uses `/etc/zprofile` to set the `$PATH` ahead of what you're specifying in your local dotfiles. It uses the `path_helper` utility to build the `$PATH`, and I suspect this is overriding the shim injection in your local dotfiles.

You can comment out the following lines in `/etc/zprofile` (note that this change will be overwritten by subsequent OS updates):

```
# if [ -x /usr/libexec/path_helper ]; then
#     eval `/usr/libexec/path_helper -s`
# fi
```

Alternately, and less intrusively, you can unset the `GLOBAL_RCS` option (add `unsetopt GLOBAL_RCS` to your personal `.zshenv` file), which suppresses sourcing all of the system default RC files for zsh and allows the pyenv shims to operate as intended.
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
If you're using `pyenv` with `pipenv` and encountering the same issue, you can add the following lines to your `.zshrc` or `.zprofile` file: ```sh export PYENV_ROOT="$HOME/.pyenv/shims" export PATH="$PYENV_ROOT:$PATH" export PIPENV_PYTHON="$PYENV_ROOT/python" ``` Referencing `pyenv`'s `/shims` folder helps to keep it more general and to allow you to easily switch between different Python versions, should you have more than one installed. `pipenv` will then always reference the version of Python that is currently set as global by `pyenv`.
Check whether any symbolic links exist under your account root:

```
ls -al .pyenv/versions/x.x.x/bin
```

If you don't have the symlink files:

```
unset CLICOLOR
unset CLICOLOR_FORCE
unset LSCOLORS
unalias ls
```

and try installing Python again with pyenv.
58,674,723
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly. However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing: ``` thewizard@Special-MacBook-Pro ~ % pyenv versions system * 3.8.0 (set by /Usersthewizard/.python-version) thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0 thewizard@Special-MacBook-Pro ~ % python --version Python 2.7.16 thewizard@Special-MacBook-Pro ~ % echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)" fi ``` BTW there is no `/bin` in my .pyenv, I only added those commands per some other instructions but I'm planning to remove it because I think it is wrong: ``` thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv total 8 drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 . drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 .. drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims -rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions ``` It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
2019/11/02
[ "https://Stackoverflow.com/questions/58674723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/783314/" ]
If you're using `pyenv` with `pipenv` and encountering the same issue, you can add the following lines to your `.zshrc` or `.zprofile` file: ```sh export PYENV_ROOT="$HOME/.pyenv/shims" export PATH="$PYENV_ROOT:$PATH" export PIPENV_PYTHON="$PYENV_ROOT/python" ``` Referencing `pyenv`'s `/shims` folder helps to keep it more general and to allow you to easily switch between different Python versions, should you have more than one installed. `pipenv` will then always reference the version of Python that is currently set as global by `pyenv`.
I think the issue is due to the default HD partitions that might be causing confusion. "With macOS Catalina, you can no longer store files or data in the read-only system volume, nor can you write to the "root" directory ( / ) from the command line, such as with Terminal" (<https://support.apple.com/en-ca/HT210650>). I had the same issues on a MacBook Pro and an iMac, which forced me to perform factory resets. I've given up on pyenv and decided to go with Anaconda to manage Python versions.
54,569,512
I need to parse json file size of 200MB, at the end I would like to write data from the file in sqlite3 database. I have a working python code, but it takes around 9 minutes to complete the task. ``` @transaction.atomic def create_database(): with open('file.json') as f: data = json.load(f) cve_items = data['CVE_Items'] for i in range(len(cve_items)): database_object = Data() for vendor_data in cve_items[i]['cve']['affects']['vendor']['vendor_data']: database_object.vendor_name = vendor_data['vendor_name'] for description_data in cve_items[i]['cve']['description']['description_data']: database_object.description = description_data['value'] for product_data in vendor_data['product']['product_data']: database_object.product_name = product_data['product_name'] database_object.save() for version_data in product_data['version']['version_data']: if version_data['version_value'] != '-': database_object.versions_set.create(version=version_data['version_value']) ``` Is it possible to speed up the process?
2019/02/07
[ "https://Stackoverflow.com/questions/54569512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9517224/" ]
```
'ODataManifestModel>EntitySetForBoolean>booleanProperty'
```

A few things:

* your screenshot is probably wrong because you always need the entitySet name, which can be found in the "folder" `Entity Sets`, not the `Entity Type`. Although your name looks correct.
* you have to bind one element of the entitySet (array) to the `mode` property, specifying it with its defined key in SEGW -> your entity type needs at least one key field. You **cannot** access OData entitySet elements in an ODataModel with an index
* you need an absolute path if you are referencing the entitySet, meaning after `model>` it must start with `/`. Alternatively, in your controller init method after metadata is loaded, bind one element to the whole view `var that = this; this.getOwnerComponent().getModel().metadataLoaded().then(function() { that.getView().bindElement({path:"/EntitySetForBoolean('1234')" }); })` to use relative binding in the view (not starting with `/`)
* the path within the structure uses `/` instead of `>`

Absolute Binding:

```
"ODataManifestModel>/EntitySetForBoolean('1234')/booleanProperty"
```

Or if the element is bound to the view or a parent container object in the view, you can use a relative path:

```
"ODataManifestModel>booleanProperty"
```
The **mode** property from ListBase can have one of the following values (**None, SingleSelect, MultiSelect, Delete**), and it is applied to all the list elements
54,569,512
I need to parse json file size of 200MB, at the end I would like to write data from the file in sqlite3 database. I have a working python code, but it takes around 9 minutes to complete the task. ``` @transaction.atomic def create_database(): with open('file.json') as f: data = json.load(f) cve_items = data['CVE_Items'] for i in range(len(cve_items)): database_object = Data() for vendor_data in cve_items[i]['cve']['affects']['vendor']['vendor_data']: database_object.vendor_name = vendor_data['vendor_name'] for description_data in cve_items[i]['cve']['description']['description_data']: database_object.description = description_data['value'] for product_data in vendor_data['product']['product_data']: database_object.product_name = product_data['product_name'] database_object.save() for version_data in product_data['version']['version_data']: if version_data['version_value'] != '-': database_object.versions_set.create(version=version_data['version_value']) ``` Is it possible to speed up the process?
2019/02/07
[ "https://Stackoverflow.com/questions/54569512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9517224/" ]
```
'ODataManifestModel>EntitySetForBoolean>booleanProperty'
```

A few things:

* your screenshot is probably wrong because you always need the entitySet name, which can be found in the "folder" `Entity Sets`, not the `Entity Type`. Although your name looks correct.
* you have to bind one element of the entitySet (array) to the `mode` property, specifying it with its defined key in SEGW -> your entity type needs at least one key field. You **cannot** access OData entitySet elements in an ODataModel with an index
* you need an absolute path if you are referencing the entitySet, meaning after `model>` it must start with `/`. Alternatively, in your controller init method after metadata is loaded, bind one element to the whole view `var that = this; this.getOwnerComponent().getModel().metadataLoaded().then(function() { that.getView().bindElement({path:"/EntitySetForBoolean('1234')" }); })` to use relative binding in the view (not starting with `/`)
* the path within the structure uses `/` instead of `>`

Absolute Binding:

```
"ODataManifestModel>/EntitySetForBoolean('1234')/booleanProperty"
```

Or if the element is bound to the view or a parent container object in the view, you can use a relative path:

```
"ODataManifestModel>booleanProperty"
```
Am assuming your service looks similar to this via URL, there is no sample data provided in your question: [Northwinds oData V2](https://services.odata.org/V3/Northwind/Northwind.svc/). **[`Open preview in external window`](https://embed.plnkr.co/LnJMwR/)** Here am using the `Products` Entity set. ```js //manifest.json "dataSources": { "ODataManifestModel": { "uri": "path_to_your_service", "type": "OData", "settings": { "odataVersion": "2.0", "localUri": "", "annotations": [] } }, ..."models": { "ODataManifestModel": { "type": "sap.ui.model.odata.v2.ODataModel", "dataSource": "ODataManifestModel" }, .. } ``` ```html //view.xml <mvc:View controllerName="sap.otuniyi.sample.Master" xmlns:mvc="sap.ui.core.mvc" xmlns:core="sap.ui.core" xmlns="sap.m" xmlns:semantic="sap.m.semantic"> <semantic:MasterPage id="page" title="Contents"> <semantic:content> <List items="{ODataManifestModel>/Products}" mode="SingleSelectMaster" noDataText="No Data Available" growing="true" growingScrollToLoad="true" selectionChange="onSelectionChange"> <items> <ObjectListItem title="{ODataManifestModel>ProductName}" type="Active" icon="sap-icon://user-settings" press="onSelectionChange" /> </items> </List> </semantic:content> </semantic:MasterPage> </mvc:View> ```
56,793,083
I am getting an error and I'm not sure what is causing the error to occur. The error is: ``` Parts[n] = PN IndexError: list assignment index out of range ``` The code I'm using is this. I'm pretty new to python and tried to look out similar problems but didn't seem to find anything exactly similar to this. Any help would be appreciated. ``` import pandas as pd df = pd.read_excel(r'C:\Users\md77879\Desktop\Test.xls') Parts = list() Prices = list() print("\nEnter 'exit' to end") PN = input('Enter PN: ') Parts.append(PN) Number = (df['Part Number'] == PN) print(df[Number][['Part Number', 'Price']]) i, n = 0, 0 while PN != ('exit'): n = n + 1 PN = input(' ') Number = df['Part Number'] == PN print(df[Number][['Part Number', 'Price']]) Parts[n] = PN for i in range(0, n): print(Parts[i]) ```
2019/06/27
[ "https://Stackoverflow.com/questions/56793083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10560733/" ]
If the chunks you need to upper are separated with `-` or `.` you may use ``` "Filename to UPPER_SNAKE_CASE": { "prefix": "usc_", "body": [ "${TM_FILENAME/\\.component\\.html$|(^|[-.])([^-.]+)/${1:+_}${2:/upcase}/g}" ], "description": "Convert filename to UPPER_SNAKE_CASE dropping .component.html at the end" } ``` You may check the [regex workings here](https://regex101.com/r/L4Jsj7/1). * `\.component\.html$` - matches `.component.html` at the end of the string * `|` - or * `(^|[-.])` capture start of string or `-` / `.` into Group 1 * `([^-.]+)` capture any 1+ chars other than `-` and `.` into Group 2. The `${1:+_}${2:/upcase}` replacement means: * `${1:+` - if Group 1 is not empty, * `_` - replace with `_` * `}` - end of the first group handling * `${2:/upcase}` - put the uppered Group 2 value back.
Here is a pretty simple alternation regex: ``` "upcaseSnake": { "prefix": "rf1", "body": [ "${TM_FILENAME_BASE/(\\..*)|(-)|(.)/${2:+_}${3:/upcase}/g}", "${TM_FILENAME/(\\..*)|(-)|(.)/${2:+_}${3:/upcase}/g}" ], "description": "upcase and snake the filename" }, ``` Either version works. `(\\..*)|(-)|(.)` alternation of three capture groups is conceptually simple. The order of the groups is **important**, and it is also what makes the regex so simple. `(\\..*)` everything after and including the first dot `.` in the filename goes into group 1 which will not be used in the transform. `(-)` group 2, if there is a group 2, replace it with an underscore `${2:+_}`. `(.)` group 3, all other characters go into group 3 which will be upcased `${3:/upcase}`. See [regex101 demo](http://some-fancy-ui.html).
74,165,004
i have 2d list implementation as follows. It shows no. of times every student topped in exams:- ``` list = main_record ['student1',1] ['student2',1] ['student2',2] ['student1',5] ['student3',3] ``` i have another list of unique students as follows:- ``` list = students_enrolled ['student1','student2','student3'] ``` which i want to display student ranking based on their distinctions as follows:- ``` list = student_ranking ['student1','student3','student2'] ``` What built in functions can be useful. I could not pose proper query on net. In other words i need python equivalent of following queries:- ``` select max(main_record[1]) where name = student1 >>> result = 5 select max(main_record[1]) where name = student2 >>> result = 2 select max(main_record[1]) where name = student3 >>> result = 3 ```
2022/10/22
[ "https://Stackoverflow.com/questions/74165004", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1028289/" ]
You define a `dict` keyed by `studentX` and save the max value for each student key, then sort `students_enrolled` based on the max value of each key.

```
from collections import defaultdict

main_record = [['student1',1], ['student2',1], ['student2',2], ['student1',5], ['student3',3]]
students_enrolled = ['student1','student2','student3']

# define a dict defaulting to negative infinity and update with the max in each iteration
tmp_dct = defaultdict(lambda: float('-inf'))
for lst in main_record:
    k, v = lst
    tmp_dct[k] = max(tmp_dct[k], v)
print(tmp_dct)

students_enrolled.sort(key = lambda x: tmp_dct[x], reverse=True)
print(students_enrolled)
```

Output:

```
# tmp_dct => defaultdict(<function <lambda> at 0x7fd81044b1f0>, {'student1': 5, 'student2': 2, 'student3': 3})

# students_enrolled after sorting
['student1', 'student3', 'student2']
```
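The same idea works with a plain dict and no import; note this variant assumes every enrolled student appears at least once in `main_record`:

```python
main_record = [['student1', 1], ['student2', 1], ['student2', 2],
               ['student1', 5], ['student3', 3]]
students_enrolled = ['student1', 'student2', 'student3']

# keep the best (max) score seen so far for each student
best = {}
for name, score in main_record:
    best[name] = max(best.get(name, score), score)

# rank enrolled students by their best score, highest first
student_ranking = sorted(students_enrolled, key=best.get, reverse=True)
print(student_ranking)  # ['student1', 'student3', 'student2']
```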
If it is a 2D list it should look like this: `l = [["student1", 2], ["student2", 3], ["student3", 4]]`. To print the students ordered by the numeric value in the 2nd column (highest first) you can use a loop like this:

```
numbers = [student[1] for student in l]

for n in sorted(numbers, reverse=True):
    student_index = numbers.index(n)
    print(l[student_index], n)
```

(Avoid naming the variable `list`, since that shadows the built-in type.)
51,903,617
How can a check for list membership be inverted based on a boolean variable? I am looking for a way to simplify the following code: ```python # variables: `is_allowed:boolean`, `action:string` and `allowed_actions:list of strings` if is_allowed: if action not in allowed_actions: print(r'{action} must be allowed!') else: if action in allowed_actions: print(r'{action} must NOT be allowed!') ``` I feel there must be a way to avoid doing the check twice, once for `in` and another time for `not in`, but can't figure out a less verbose way.
2018/08/17
[ "https://Stackoverflow.com/questions/51903617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/191246/" ]
Compare the result of the test to `is_allowed`. Then use `is_allowed` to put together the correct error message. ``` if (action in allowed_actions) != is_allowed: print(action, "must" if is_allowed else "must NOT", "be allowed!") ```
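Wrapped into a small self-contained function (the names mirror the question's variables), this gives:

```python
allowed_actions = ["read", "write"]

def check(action, is_allowed):
    """Return a complaint string, or None when the state matches the expectation."""
    if (action in allowed_actions) != is_allowed:
        return f"{action} {'must' if is_allowed else 'must NOT'} be allowed!"
    return None

print(check("delete", True))   # delete must be allowed!
print(check("read", False))    # read must NOT be allowed!
print(check("read", True))     # None (no complaint)
```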
Given the way your specific code is structured, I think the only improvement you can make is to just store `action in allowed_actions` in a variable: ``` present = action in allowed_actions if is_allowed: if not present: print(r'{action} must be allowed!') else: if present: print(r'{action} must NOT be allowed!') ```
18,033,700
I generate lot of messages for sending to client (push notifications using push woosh). I collect messages for a period of time and the send a bucket of messages. Need advice, what is the best to use for queue python list ( I am afraid to store in memory lot of messages and to lose if server restarts), Redis or MySQL ?
2013/08/03
[ "https://Stackoverflow.com/questions/18033700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800871/" ]
Redis can persist the data it holds in memory to your hard drive, so you don't have to worry about losing messages if the server restarts. You can also add a key expiration to your data saved in memory, so you can remove old messages.

Have a look here: <http://redis.io/topics/persistence>

And here: <http://redis.io/commands/expire>
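Whichever backend you choose, the collect-for-a-while-then-send-a-bucket pattern itself can be sketched in plain Python. The class below is illustrative, not a library API; the in-memory deque is exactly the part you would swap for Redis or MySQL to survive restarts:

```python
from collections import deque
import time

class MessageBucket:
    """Collect messages in memory and flush them in batches."""

    def __init__(self, flush_every=5.0):
        self.queue = deque()
        self.flush_every = flush_every          # seconds between flushes
        self._last_flush = time.monotonic()

    def push(self, message):
        self.queue.append(message)

    def due(self):
        """True once enough time has passed since the last flush."""
        return time.monotonic() - self._last_flush >= self.flush_every

    def flush(self):
        """Return the collected batch and reset the bucket."""
        batch = list(self.queue)
        self.queue.clear()
        self._last_flush = time.monotonic()
        return batch

bucket = MessageBucket(flush_every=0.0)  # flush immediately for the demo
bucket.push("hello")
bucket.push("world")
if bucket.due():
    print(bucket.flush())  # ['hello', 'world']
```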
I don't know whether MySQL or Redis is better for processing message queues, since I don't know Redis. But I can tell you that MySQL is *not* designed for that purpose. You should take a look at dedicated tools such as [RabbitMQ](http://www.rabbitmq.com/), which will probably serve your purpose better. Here is a basic tutorial (incl. Python): <http://www.rabbitmq.com/tutorials/tutorial-one-python.html>
34,939,762
I have a function called `prepared_db` in submodule `db.db_1`: ``` from spam import db submodule_name = "db_1" func_name = "prepare_db" func = ... ``` how can I get the function by the submodule name and function name in the context above? **UPDATE**: To respond @histrio 's answer, I can verify his code works for `os` module. But it does not work in this case. To create the example: ``` $ mkdir -p spam/db $ cat > spam/db/db_1.py def prepare_db(): print('prepare_db func') $ touch spam/db/__init__.py $ PYTHONPATH=$PYTHONPATH:spam ``` now, you can do the import normally: ``` >>> from spam.db.db_1 import prepare_db >>> prepare_db() prepare_db func ``` but if you do this dynamically, I get this error: ``` >>> getattr(getattr(db, submodule_name), func_name) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-9-1b6aa1216551> in <module>() ----> 1 getattr(getattr(db, submodule_name), func_name) AttributeError: module 'spam.db.db_1' has no attribute 'prepared_db' ```
2016/01/22
[ "https://Stackoverflow.com/questions/34939762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/489564/" ]
It's simple. You can treat a module as an object.

```
import os

submodule_name = "path"
func_name = "exists"

submodule = getattr(os, submodule_name)
function = getattr(submodule, func_name)

function('/home')  # True
```

or [just for fun, don't do that]

```
fn = reduce(getattr, ('sub1', 'sub2', 'sub3', 'fn'), module)
```

**UPDATE**

```
import importlib

submodule = importlib.import_module('.'+submodule_name, module.__name__)
function = getattr(submodule, func_name)
```
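The same string-driven lookup can be wrapped in a small helper (the `resolve` name is just illustrative), demonstrated here on a stdlib module so it runs anywhere:

```python
import importlib

def resolve(dotted):
    """Resolve a dotted path like 'package.module.func' to the function object."""
    module_path, _, attr = dotted.rpartition(".")
    return getattr(importlib.import_module(module_path), attr)

func = resolve("os.path.exists")
print(func("."))  # True: the current directory always exists
```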
I think I figured it out: you need to add the function to the `__all__` variable in the `__init__.py` file, so that would be something along the lines of:

```
from .db_1 import prepare_db

__all__ = ['prepare_db']
```

After that, it should work just fine.
54,833,296
I am using spyder python 2.7 and i changed the syntax coloring in Spyder black theme, but i really want my python programme to look in full black, so WITHOUT the white windows. Can someone provide me a good explanation about how to change this? [Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
2019/02/22
[ "https://Stackoverflow.com/questions/54833296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103122/" ]
If you can't wait for Spyder 4 - this is what does it for **Spyder 3.3.2 in Windows, using Anaconda3**. 1. Exit Spyder 2. Open command prompt or Anaconda prompt 3. Run `pip install qdarkstyle` and exit the prompt 4. Go to ...\Anaconda3\Lib\site-packages\spyder\utils and open *qhelpers.py* 5. Add `import qdarkstyle` to the top of that file 6. Replace the `qapplication` function definition with below code (only two added lines) 7. Save and close the file 8. Open Spyder and enjoy your dark theme ``` def qapplication(translate=True, test_time=3): """ Return QApplication instance Creates it if it doesn't already exist test_time: Time to maintain open the application when testing. It's given in seconds """ if running_in_mac_app(): SpyderApplication = MacApplication else: SpyderApplication = QApplication app = SpyderApplication.instance() if app is None: # Set Application name for Gnome 3 # https://groups.google.com/forum/#!topic/pyside/24qxvwfrRDs app = SpyderApplication(['Spyder']) # Set application name for KDE (See issue 2207) app.setApplicationName('Spyder') app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5()) if translate: install_translator(app) test_ci = os.environ.get('TEST_CI_WIDGETS', None) if test_ci is not None: timer_shutdown = QTimer(app) timer_shutdown.timeout.connect(app.quit) timer_shutdown.start(test_time*1000) return app ```
(*Spyder maintainer here*) This functionality will be available in Spyder **4**, to be released later in 2019. For now there's nothing you can do to get what you want with Spyder's current version, sorry.
54,833,296
I am using spyder python 2.7 and i changed the syntax coloring in Spyder black theme, but i really want my python programme to look in full black, so WITHOUT the white windows. Can someone provide me a good explanation about how to change this? [Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
2019/02/22
[ "https://Stackoverflow.com/questions/54833296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103122/" ]
The complete dark theme is available from Spyder 4.0.0 beta <https://github.com/spyder-ide/spyder/releases> How I did it : 1) In Anaconda prompt, ``` conda update qt pyqt conda install -c spyder-ide spyder=4.0.0b2 ``` 2) And if you haven't done it before, go to ``` Tools > Preferences > Syntax Coloring ```
(*Spyder maintainer here*) This functionality will be available in Spyder **4**, to be released later in 2019. For now there's nothing you can do to get what you want with Spyder's current version, sorry.
54,833,296
I am using spyder python 2.7 and i changed the syntax coloring in Spyder black theme, but i really want my python programme to look in full black, so WITHOUT the white windows. Can someone provide me a good explanation about how to change this? [Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
2019/02/22
[ "https://Stackoverflow.com/questions/54833296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103122/" ]
The complete dark theme is available from Spyder 4.0.0 beta <https://github.com/spyder-ide/spyder/releases> How I did it : 1) In Anaconda prompt, ``` conda update qt pyqt conda install -c spyder-ide spyder=4.0.0b2 ``` 2) And if you haven't done it before, go to ``` Tools > Preferences > Syntax Coloring ```
If you can't wait for Spyder 4 - this is what does it for **Spyder 3.3.2 in Windows, using Anaconda3**. 1. Exit Spyder 2. Open command prompt or Anaconda prompt 3. Run `pip install qdarkstyle` and exit the prompt 4. Go to ...\Anaconda3\Lib\site-packages\spyder\utils and open *qhelpers.py* 5. Add `import qdarkstyle` to the top of that file 6. Replace the `qapplication` function definition with below code (only two added lines) 7. Save and close the file 8. Open Spyder and enjoy your dark theme ``` def qapplication(translate=True, test_time=3): """ Return QApplication instance Creates it if it doesn't already exist test_time: Time to maintain open the application when testing. It's given in seconds """ if running_in_mac_app(): SpyderApplication = MacApplication else: SpyderApplication = QApplication app = SpyderApplication.instance() if app is None: # Set Application name for Gnome 3 # https://groups.google.com/forum/#!topic/pyside/24qxvwfrRDs app = SpyderApplication(['Spyder']) # Set application name for KDE (See issue 2207) app.setApplicationName('Spyder') app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5()) if translate: install_translator(app) test_ci = os.environ.get('TEST_CI_WIDGETS', None) if test_ci is not None: timer_shutdown = QTimer(app) timer_shutdown.timeout.connect(app.quit) timer_shutdown.start(test_time*1000) return app ```
54,833,296
I am using spyder python 2.7 and i changed the syntax coloring in Spyder black theme, but i really want my python programme to look in full black, so WITHOUT the white windows. Can someone provide me a good explanation about how to change this? [Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
2019/02/22
[ "https://Stackoverflow.com/questions/54833296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103122/" ]
If you can't wait for Spyder 4 - this is what does it for **Spyder 3.3.2 in Windows, using Anaconda3**. 1. Exit Spyder 2. Open command prompt or Anaconda prompt 3. Run `pip install qdarkstyle` and exit the prompt 4. Go to ...\Anaconda3\Lib\site-packages\spyder\utils and open *qhelpers.py* 5. Add `import qdarkstyle` to the top of that file 6. Replace the `qapplication` function definition with below code (only two added lines) 7. Save and close the file 8. Open Spyder and enjoy your dark theme ``` def qapplication(translate=True, test_time=3): """ Return QApplication instance Creates it if it doesn't already exist test_time: Time to maintain open the application when testing. It's given in seconds """ if running_in_mac_app(): SpyderApplication = MacApplication else: SpyderApplication = QApplication app = SpyderApplication.instance() if app is None: # Set Application name for Gnome 3 # https://groups.google.com/forum/#!topic/pyside/24qxvwfrRDs app = SpyderApplication(['Spyder']) # Set application name for KDE (See issue 2207) app.setApplicationName('Spyder') app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5()) if translate: install_translator(app) test_ci = os.environ.get('TEST_CI_WIDGETS', None) if test_ci is not None: timer_shutdown = QTimer(app) timer_shutdown.timeout.connect(app.quit) timer_shutdown.start(test_time*1000) return app ```
Spyder 4 is out now. Dark mode is included ✌ Have a look at the changes: <https://github.com/spyder-ide/spyder/blob/master/CHANGELOG.md>
54,833,296
I am using spyder python 2.7 and i changed the syntax coloring in Spyder black theme, but i really want my python programme to look in full black, so WITHOUT the white windows. Can someone provide me a good explanation about how to change this? [Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
2019/02/22
[ "https://Stackoverflow.com/questions/54833296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103122/" ]
The complete dark theme is available from Spyder 4.0.0 beta <https://github.com/spyder-ide/spyder/releases> How I did it : 1) In Anaconda prompt, ``` conda update qt pyqt conda install -c spyder-ide spyder=4.0.0b2 ``` 2) And if you haven't done it before, go to ``` Tools > Preferences > Syntax Coloring ```
Spyder 4 is out now. Dark mode is included ✌ Have a look at the changes: <https://github.com/spyder-ide/spyder/blob/master/CHANGELOG.md>
51,839,083
I am using `MacOS`. I used following command: ``` gcloud beta functions deploy start --runtime python37 --trigger-http --memory 2048MB --timeout 540s ``` But while deploying `google cloud functions` I got this error: ``` (gcloud.beta.functions.deploy) OperationError: code=3, message=Build failed: USER ERROR: pip_download_wheels had stderr output: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-wheel-8b_hn74q/PyWavelets/ error: pip_download_wheels returned code: 1 ``` I added `scikit-image` in my `requirements.txt`, which was not added before. Code was successfully deploying when `scikit-image` was not added in `requirements.txt`. Any ideas?
2018/08/14
[ "https://Stackoverflow.com/questions/51839083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10223950/" ]
Do you have a Pipfile in your directory? I was able to replicate this same error when I tried to deploy a GCF containing a Pipfile but no accompanying Pipfile.lock. To fix, either remove Pipfile and just include requirements.txt, or generate Pipfile.lock: `$ pipenv install` without the --skip-lock flag While the current documentation does not state this, I have found that you can deploy to GCF without a requirements.txt file at all. Just include Pipfile and Pipfile.lock. To recap, acceptable dependency files to deploy a beta GCF: * requirements.txt * Pipfile + Pipfile.lock * requirements.txt + Pipfile + Pipfile.lock
According to the google cloud function [documentation](https://cloud.google.com/appengine/docs/standard/python3/runtime#dependencies) it only supports installing dependencies from a `requirements.txt` file. And the files `Pipfile`/`Pipfile.lock` must not be present in the root directory.
1,611,625
I only just noticed this feature today! ``` s={1,2,3} #Set initialisation t={x for x in s if x!=3} #Set comprehension t=={1,2} ``` What version is it in? I also noticed that it has set comprehension. Was this added in the same version? **Resources** * [Sets in Python 2.4 Docs](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) * [What's new in Python 3.0](http://docs.python.org/dev/3.0/whatsnew/3.0.html)
2009/10/23
[ "https://Stackoverflow.com/questions/1611625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165495/" ]
The `sets` module was added in Python 2.3, but the built-in set type was added to the language in 2.4, with essentially the same interface. (As of 2.6, the `sets` module has been deprecated.) So you can use sets as far back as 2.3, as long as you ``` import sets ``` But you will get a `DeprecationWarning` if you try that import in 2.6 Set comprehensions, and the set literal syntax -- that is, being able to say ``` a = { 1, 2, 3 } ``` are new in Python 3.0. To be very specific, both set literals and set comprehensions were present in Python 3.0a1, the first public release of Python 3.0, from 2007. [Python 3 release notes](http://www.python.org/download/releases/3.0/NEWS.txt) The comprehensions and literals were later implemented in 2.7. [3.x Python features incorporated into 2.7](https://docs.python.org/3/whatsnew/2.7.html#python-3-1-features)
Well, testing it: ``` >>> s = {1, 2, 3} File "<stdin>", line 1 s = {1, 2, 3} ^ SyntaxError: invalid syntax ``` I'm running 2.5, so I would assume that this syntax was added sometime in 2.6 (Update: actually added in 3.0, but Ian beat me). I should probably be upgrading sometime soon. I'm glad they added a syntax for it - I'm rather tired of `set([1, 2, 3])`. Set comprehensions have probably been around since sets were first created. The Python documentation site isn't very clear, but I wouldn't imagine sets would be too useful without iterators.
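To make the version dependence above concrete, here is a minimal sketch of both syntaxes; it runs on Python 2.7 and 3.x, while on 2.4–2.6 only the `set(...)` constructor form is available:

```python
s = {1, 2, 3}                  # set literal (2.7+ / 3.0+)
t = {x for x in s if x != 3}   # set comprehension (2.7+ / 3.0+)

assert t == {1, 2}
assert t == set([1, 2])        # constructor form, available since 2.4
print(sorted(t))               # [1, 2]
```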
1,611,625
I only just noticed this feature today! ``` s={1,2,3} #Set initialisation t={x for x in s if x!=3} #Set comprehension t=={1,2} ``` What version is it in? I also noticed that it has set comprehension. Was this added in the same version? **Resources** * [Sets in Python 2.4 Docs](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset) * [What's new in Python 3.0](http://docs.python.org/dev/3.0/whatsnew/3.0.html)
2009/10/23
[ "https://Stackoverflow.com/questions/1611625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165495/" ]
The `sets` module was added in Python 2.3, but the built-in set type was added to the language in 2.4, with essentially the same interface. (As of 2.6, the `sets` module has been deprecated.) So you can use sets as far back as 2.3, as long as you ``` import sets ``` But you will get a `DeprecationWarning` if you try that import in 2.6 Set comprehensions, and the set literal syntax -- that is, being able to say ``` a = { 1, 2, 3 } ``` are new in Python 3.0. To be very specific, both set literals and set comprehensions were present in Python 3.0a1, the first public release of Python 3.0, from 2007. [Python 3 release notes](http://www.python.org/download/releases/3.0/NEWS.txt) The comprehensions and literals were later implemented in 2.7. [3.x Python features incorporated into 2.7](https://docs.python.org/3/whatsnew/2.7.html#python-3-1-features)
The set literal and set and dict comprehension syntaxes were backported to 2.x trunk, about 2-3 days ago. So I guess this feature should be available from python 2.7.
12,698,646
How can I create a list of methods in python to be applied to an object? Given some arbitrary class: ``` class someClass: def __init__(self, s): self.size = s def shrink(self): self.size -= 1 def grow(self): self.size += 1 def invert(self): self.size = -self.size ``` I want to be able to write an iterable object like this list: ``` instructions = [shrink, grow, shrink, shrink, grow, invert] ``` To be run through a for-loop later: ``` elephant = someClass(90) sizeList = [] for ins in instructions: elephant.ins() sizeList.append(elephant.size) ``` I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
2012/10/02
[ "https://Stackoverflow.com/questions/12698646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1175133/" ]
You could create a list of the method *names* then use [`getattr()`](https://docs.python.org/3/library/functions.html#getattr) to access the methods: ``` instructions = ["shrink", "grow", "shrink"] for i in instructions: getattr(elephant, i)() ```
Possibly naïvely: ``` for ins in instructions: getattr(elephant, ins)() ``` Gotchas include that `ins` must be a string and that it's probably wise to validate both that `ins` is what you really want to call and that `getattr(elephant, ins)` is a callable.
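A self-contained version of the approach above, reusing the `someClass` from the question: the instruction list holds method *names*, and `getattr` resolves each name on the instance before calling it:

```python
class someClass:
    def __init__(self, s):
        self.size = s
    def shrink(self):
        self.size -= 1
    def grow(self):
        self.size += 1
    def invert(self):
        self.size = -self.size

# Method names as strings, looked up with getattr at call time.
instructions = ["shrink", "grow", "shrink", "shrink", "grow", "invert"]
elephant = someClass(90)
sizeList = []
for ins in instructions:
    getattr(elephant, ins)()   # resolve the bound method by name, then call
    sizeList.append(elephant.size)
print(sizeList)  # [89, 90, 89, 88, 89, -89]
```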
12,698,646
How can I create a list of methods in python to be applied to an object? Given some arbitrary class: ``` class someClass: def __init__(self, s): self.size = s def shrink(self): self.size -= 1 def grow(self): self.size += 1 def invert(self): self.size = -self.size ``` I want to be able to write an iterable object like this list: ``` instructions = [shrink, grow, shrink, shrink, grow, invert] ``` To be run through a for-loop later: ``` elephant = someClass(90) sizeList = [] for ins in instructions: elephant.ins() sizeList.append(elephant.size) ``` I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
2012/10/02
[ "https://Stackoverflow.com/questions/12698646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1175133/" ]
You could create a list of the method *names* then use [`getattr()`](https://docs.python.org/3/library/functions.html#getattr) to access the methods: ``` instructions = ["shrink", "grow", "shrink"] for i in instructions: getattr(elephant, i)() ```
You can use [`dir`](http://docs.python.org/library/functions.html#dir) to get all property names of an object, and [`getattr`](http://docs.python.org/library/functions.html#getattr) to get a property value of an object. You may also want to not call any non-[callable](http://docs.python.org/library/functions.html#callable) properties (such as `2` or `"foo"`): ``` for m in dir(elephant): if not m.startswith('_'): f = getattr(elephant, m) if callable(f): # Do not try to call non-function properties f() ```
12,698,646
How can I create a list of methods in python to be applied to an object? Given some arbitrary class: ``` class someClass: def __init__(self, s): self.size = s def shrink(self): self.size -= 1 def grow(self): self.size += 1 def invert(self): self.size = -self.size ``` I want to be able to write an iterable object like this list: ``` instructions = [shrink, grow, shrink, shrink, grow, invert] ``` To be run through a for-loop later: ``` elephant = someClass(90) sizeList = [] for ins in instructions: elephant.ins() sizeList.append(elephant.size) ``` I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
2012/10/02
[ "https://Stackoverflow.com/questions/12698646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1175133/" ]
You could create a list of the method *names* then use [`getattr()`](https://docs.python.org/3/library/functions.html#getattr) to access the methods: ``` instructions = ["shrink", "grow", "shrink"] for i in instructions: getattr(elephant, i)() ```
As an alternative to using strings for your list of instructions, you could do the following: ``` instructions = [someClass.shrink, someClass.grow, someClass.shrink, someClass.shrink, someClass.grow, someClass.invert] elephant = someClass(90) sizeList = [] for ins in instructions: ins(elephant) sizeList.append(elephant.size) ```
12,698,646
How can I create a list of methods in python to be applied to an object? Given some arbitrary class: ``` class someClass: def __init__(self, s): self.size = s def shrink(self): self.size -= 1 def grow(self): self.size += 1 def invert(self): self.size = -self.size ``` I want to be able to write an iterable object like this list: ``` instructions = [shrink, grow, shrink, shrink, grow, invert] ``` To be run through a for-loop later: ``` elephant = someClass(90) sizeList = [] for ins in instructions: elephant.ins() sizeList.append(elephant.size) ``` I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
2012/10/02
[ "https://Stackoverflow.com/questions/12698646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1175133/" ]
As an alternative to using strings for your list of instructions, you could do the following: ``` instructions = [someClass.shrink, someClass.grow, someClass.shrink, someClass.shrink, someClass.grow, someClass.invert] elephant = someClass(90) sizeList = [] for ins in instructions: ins(elephant) sizeList.append(elephant.size) ```
Possibly naïvely: ``` for ins in instructions: getattr(elephant, ins)() ``` Gotchas include that `ins` must be a string and that it's probably wise to validate both that `ins` is what you really want to call and that `getattr(elephant, ins)` is a callable.
12,698,646
How can I create a list of methods in python to be applied to an object? Given some arbitrary class: ``` class someClass: def __init__(self, s): self.size = s def shrink(self): self.size -= 1 def grow(self): self.size += 1 def invert(self): self.size = -self.size ``` I want to be able to write an iterable object like this list: ``` instructions = [shrink, grow, shrink, shrink, grow, invert] ``` To be run through a for-loop later: ``` elephant = someClass(90) sizeList = [] for ins in instructions: elephant.ins() sizeList.append(elephant.size) ``` I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
2012/10/02
[ "https://Stackoverflow.com/questions/12698646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1175133/" ]
As an alternative to using strings for your list of instructions, you could do the following: ``` instructions = [someClass.shrink, someClass.grow, someClass.shrink, someClass.shrink, someClass.grow, someClass.invert] elephant = someClass(90) sizeList = [] for ins in instructions: ins(elephant) sizeList.append(elephant.size) ```
You can use [`dir`](http://docs.python.org/library/functions.html#dir) to get all property names of an object, and [`getattr`](http://docs.python.org/library/functions.html#getattr) to get a property value of an object. You may also want to not call any non-[callable](http://docs.python.org/library/functions.html#callable) properties (such as `2` or `"foo"`): ``` for m in dir(elephant): if not m.startswith('_'): f = getattr(elephant, m) if callable(f): # Do not try to call non-function properties f() ```
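The alternative shown above — storing the function objects themselves instead of name strings — can be sketched like this; each stored function is called with the instance passed explicitly:

```python
class someClass:
    def __init__(self, s):
        self.size = s
    def shrink(self):
        self.size -= 1
    def grow(self):
        self.size += 1
    def invert(self):
        self.size = -self.size

# Plain function objects taken from the class, not from an instance.
instructions = [someClass.shrink, someClass.grow, someClass.grow, someClass.invert]
elephant = someClass(5)
for ins in instructions:
    ins(elephant)    # equivalent to elephant.shrink(), elephant.grow(), ...
print(elephant.size)  # -6
```

This avoids string lookups entirely, at the cost of tying the instruction list to one specific class.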
33,225,888
I'm new to python. I had a difficult time understanding why the output would be 2 for the problem below. Can someone explain it to be in very basic terms. ``` a = [1, 2, 3, 4, 0] b = [3, 0, 2, 4, 1] c = [3, 2, 4, 1, 5] print c[a[a[4]]] ```
2015/10/20
[ "https://Stackoverflow.com/questions/33225888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5464930/" ]
Maybe it helps to understand it by splitting it into 3 rows: ``` inner_one = a[4] # a[4] = 0 inner_two = a[inner_one] # a[0] = 1 result = c[inner_two] # c[1] = 2 ```
Python lists are 0-indexed. So your first call, `a[4]`, returns `0`, then `a[0]` returns `1`, and finally `c[1]` returns `2`.
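The step-by-step breakdown in the answers can be run directly against the original lists:

```python
a = [1, 2, 3, 4, 0]
b = [3, 0, 2, 4, 1]   # not used in this particular lookup
c = [3, 2, 4, 1, 5]

inner = a[4]        # lists are 0-indexed, so a[4] is the last element: 0
middle = a[inner]   # a[0] == 1
result = c[middle]  # c[1] == 2
print(result)       # 2
assert result == c[a[a[4]]]
```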
38,025,218
I am running python 2.7 and django 1.8. [I have this exact issue.](https://stackoverflow.com/questions/24983777/cant-add-a-new-field-in-migration-column-does-not-exist) The answer, posted as a comment is: `What I did is completely remake the db, erase the migration history and folders.` I am very uncertain about deleting and creating the database. I am running a PostgreSQL database. If I drop/delete the database and then run the migrations, will the database be rebuilt from the migrations? I don't want to delete the database and then be stuck in a worse situation.
2016/06/25
[ "https://Stackoverflow.com/questions/38025218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1261774/" ]
You can create a separate interface for your static needs: ``` interface IPerson { name: string; getName(): string; } class Person implements IPerson { public name: string; constructor(name: string) { this.name = name; } public getName() { return this.name; } public static create(name: string) { // method added for demonstration purposes return new Person(name); } } ``` Static interface: ``` interface IPersonStatic { new(name: string); // constructor create(name: string): IPerson; // static method } let p: IPersonStatic = Person; ``` Also, you can use `typeof` to determine the type: ``` let p2: typeof Person = Person; // same as 'let p2 = Person;' let p3: typeof Person = AnotherPerson; ```
I added `IPersonConstructor` to your example. The rest is identical; just included for clarity. `new (arg1: typeOfArg1, ...): TypeOfInstance;` describes a class, since it can be invoked with `new` and will return an instance of the class. ``` interface IPerson { name: string; getName(): string; } class Person implements IPerson { public name: string; constructor(name: string) { this.name = name; } public getName() { return this.name; } } interface IPersonConstructor { // When invoked with `new` and passed a string, returns an instance of `Person` new (name: string): Person; prototype: Person; } ```
38,025,218
I am running python 2.7 and django 1.8. [I have this exact issue.](https://stackoverflow.com/questions/24983777/cant-add-a-new-field-in-migration-column-does-not-exist) The answer, posted as a comment is: `What I did is completely remake the db, erase the migration history and folders.` I am very uncertain about deleting and creating the database. I am running a PostgreSQL database. If I drop/delete the database and then run the migrations, will the database be rebuilt from the migrations? I don't want to delete the database and then be stuck in a worse situation.
2016/06/25
[ "https://Stackoverflow.com/questions/38025218", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1261774/" ]
You can create a separate interface for your static needs: ``` interface IPerson { name: string; getName(): string; } class Person implements IPerson { public name: string; constructor(name: string) { this.name = name; } public getName() { return this.name; } public static create(name: string) { // method added for demonstration purposes return new Person(name); } } ``` Static interface: ``` interface IPersonStatic { new(name: string); // constructor create(name: string): IPerson; // static method } let p: IPersonStatic = Person; ``` Also, you can use `typeof` to determine the type: ``` let p2: typeof Person = Person; // same as 'let p2 = Person;' let p3: typeof Person = AnotherPerson; ```
How about a generic ``` interface IConstructor<T> extends Function { new (...args: any[]): T; } ```
17,134,897
i'm using the popular pythonscript ( <http://code.google.com/p/edim-mobile/source/browse/trunk/ios/IncrementalLocalization/localize.py> ) to localize my storyboards in ios5. I did only some changes in storyboard and got this error: > > Please file a bug at <http://bugreport.apple.com> with this warning > message and any useful information you can provide. > com.apple.ibtool.errors > description The strings > file "MainStoryboard.strings" could not be applied. > recovery-suggestion Missing object referenced > from oid-keyed mapping. Object ID ztT-UO-myJ > underlying-errors > > description > The strings file "MainStoryboard.strings" could not be applied. > recovery-suggestion > Missing object referenced from oid-keyed mapping. Object ID ztT-UO-myJ > Traceback (most recent call last): File "./localize.py", line 105, in > raise Exception("\n" + errorDescription) Exception: > > > **\* Error while creating the 'Project/en.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/es.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/fr.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/it.lproj/MainStoryboard.storyboard' file\*** > > > Showing first 200 notices only Command /bin/sh failed with exit code 1 > > > I can't find a solution.. Maik
2013/06/16
[ "https://Stackoverflow.com/questions/17134897", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The second query is an example of an implicit cross join (aka Cartesian join) - every record from users will be joined to every record from roles with `id=5`, since all these combinations will have the `where` clause evaluate as true.
A join will be required to return the correct data: ``` SELECT u.username FROM users u JOIN roles r ON u.roleid = r.id WHERE r.id = 5; ``` I think it is better to use an explicit join with `ON` to state which columns have a relationship, rather than expressing the relationship in the `WHERE` clause.
17,134,897
i'm using the popular pythonscript ( <http://code.google.com/p/edim-mobile/source/browse/trunk/ios/IncrementalLocalization/localize.py> ) to localize my storyboards in ios5. I did only some changes in storyboard and got this error: > > Please file a bug at <http://bugreport.apple.com> with this warning > message and any useful information you can provide. > com.apple.ibtool.errors > description The strings > file "MainStoryboard.strings" could not be applied. > recovery-suggestion Missing object referenced > from oid-keyed mapping. Object ID ztT-UO-myJ > underlying-errors > > description > The strings file "MainStoryboard.strings" could not be applied. > recovery-suggestion > Missing object referenced from oid-keyed mapping. Object ID ztT-UO-myJ > Traceback (most recent call last): File "./localize.py", line 105, in > raise Exception("\n" + errorDescription) Exception: > > > **\* Error while creating the 'Project/en.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/es.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/fr.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/it.lproj/MainStoryboard.storyboard' file\*** > > > Showing first 200 notices only Command /bin/sh failed with exit code 1 > > > I can't find a solution.. Maik
2013/06/16
[ "https://Stackoverflow.com/questions/17134897", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The second query is an example of an implicit cross join (aka Cartesian join) - every record from users will be joined to every record from roles with `id=5`, since all these combinations will have the `where` clause evaluate as true.
You need two columns of the same type, one from each table, to JOIN on. You need the join predicate `ON u.roleid = r.id` to get the correct data. > > Inner join creates a new result table by combining column values of two tables (A and B) based upon the join-predicate. The query compares each row of A with each row of B to find all pairs of rows which satisfy the join-predicate. When the join-predicate is satisfied, column values for each matched pair of rows of A and B are combined into a result row. The result of the join can be defined as the outcome of first taking the Cartesian product (or Cross join) of all records in the tables (combining every record in table A with every record in table B)—then return all records which satisfy the join predicate. > > >
17,134,897
i'm using the popular pythonscript ( <http://code.google.com/p/edim-mobile/source/browse/trunk/ios/IncrementalLocalization/localize.py> ) to localize my storyboards in ios5. I did only some changes in storyboard and got this error: > > Please file a bug at <http://bugreport.apple.com> with this warning > message and any useful information you can provide. > com.apple.ibtool.errors > description The strings > file "MainStoryboard.strings" could not be applied. > recovery-suggestion Missing object referenced > from oid-keyed mapping. Object ID ztT-UO-myJ > underlying-errors > > description > The strings file "MainStoryboard.strings" could not be applied. > recovery-suggestion > Missing object referenced from oid-keyed mapping. Object ID ztT-UO-myJ > Traceback (most recent call last): File "./localize.py", line 105, in > raise Exception("\n" + errorDescription) Exception: > > > **\* Error while creating the 'Project/en.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/es.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/fr.lproj/MainStoryboard.storyboard' file\*** > > > **\* Error while creating the 'Project/it.lproj/MainStoryboard.storyboard' file\*** > > > Showing first 200 notices only Command /bin/sh failed with exit code 1 > > > I can't find a solution.. Maik
2013/06/16
[ "https://Stackoverflow.com/questions/17134897", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The second query is an example of an implicit cross join (aka Cartesian join) - every record from users will be joined to every record from roles with `id=5`, since all these combinations will have the `where` clause evaluate as true.
No. In the second version you have no relationship between the tables. The `,` operator in the `from` clause means `cross join`. The second example will either return all users at least once (depending on the number of matches in the second table), or it will return no rows (if there are no matches in the second table). If the second example were: ``` SELECT u.username FROM users u, roles r WHERE r.id = 5 and u.id = 5 ``` Then they would mean the same thing. The clearer and better way to write this is: ``` SELECT u.username FROM users u cross join roles r WHERE r.id = 5 and u.id = 5 ``` Or using proper `inner join` syntax: ``` SELECT u.username FROM users u join roles r on r.id = u.id WHERE r.id = 5 /* this could also be in the `on` clause */ ```
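The difference between the implicit cross join and the proper inner join discussed in these answers can be demonstrated end-to-end with Python's built-in `sqlite3`; the table contents below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER, username TEXT, roleid INTEGER)")
cur.execute("CREATE TABLE roles (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?, ?)",
                [(1, "alice", 5), (2, "bob", 6)])
cur.executemany("INSERT INTO roles VALUES (?, ?)",
                [(5, "admin"), (6, "user")])

# Implicit cross join: every user pairs with the role where r.id = 5,
# regardless of the user's own roleid.
cross = cur.execute(
    "SELECT u.username FROM users u, roles r WHERE r.id = 5").fetchall()

# Explicit inner join: only users whose roleid actually matches.
inner = cur.execute(
    "SELECT u.username FROM users u JOIN roles r ON u.roleid = r.id "
    "WHERE r.id = 5").fetchall()

print(sorted(cross))  # both users appear — no relationship between the tables
print(inner)          # only the user with roleid = 5
```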
63,336,300
My Tensorflow model makes heavy use of data preprocessing that should be done on the CPU to leave the GPU open for training. ``` top - 09:57:54 up 16:23, 1 user, load average: 3,67, 1,57, 0,67 Tasks: 400 total, 1 running, 399 sleeping, 0 stopped, 0 zombie %Cpu(s): 19,1 us, 2,8 sy, 0,0 ni, 78,1 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st MiB Mem : 32049,7 total, 314,6 free, 5162,9 used, 26572,2 buff/cache MiB Swap: 6779,0 total, 6556,0 free, 223,0 used. 25716,1 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17604 joro 20 0 22,1g 2,3g 704896 S 331,2 7,2 4:39.33 python ``` This is what top shows me. I would like to make this python process use at least 90% of available CPU across all cores. How can this be achieved? GPU utilization is better, around 90%. Even though I don't know why it is not at 100% ``` Mon Aug 10 10:00:13 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce RTX 208... Off | 00000000:01:00.0 On | N/A | | 35% 41C P2 90W / 260W | 10515MiB / 11016MiB | 11% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1128 G /usr/lib/xorg/Xorg 102MiB | | 0 1648 G /usr/lib/xorg/Xorg 380MiB | | 0 1848 G /usr/bin/gnome-shell 279MiB | | 0 10633 G ...uest-channel-token=1206236727 266MiB | | 0 13794 G /usr/lib/firefox/firefox 6MiB | | 0 17604 C python 9457MiB | +-----------------------------------------------------------------------------+ ``` All i found was a solution for tensorflow 1.0: ``` sess = tf.Session(config=tf.ConfigProto( intra_op_parallelism_threads=NUM_THREADS)) ``` I have an Intel 9900k and a RTX 2080 Ti and use Ubuntu 20.04 E: When I add the following code on top, it uses 1 core 100% ``` tf.config.threading.set_intra_op_parallelism_threads(1) tf.config.threading.set_inter_op_parallelism_threads(1) ``` But increasing this number to 16 again only utilizes all cores ~30%
2020/08/10
[ "https://Stackoverflow.com/questions/63336300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9280994/" ]
Just setting `set_intra_op_parallelism_threads` and `set_inter_op_parallelism_threads` wasn't working for me. In case someone else is in the same place: after a lot of struggle with the same issue, the piece of code below worked for me in limiting the CPU usage of tensorflow to below 500%: ``` import os import tensorflow as tf num_threads = 5 os.environ["OMP_NUM_THREADS"] = "5" os.environ["TF_NUM_INTRAOP_THREADS"] = "5" os.environ["TF_NUM_INTEROP_THREADS"] = "5" tf.config.threading.set_inter_op_parallelism_threads( num_threads ) tf.config.threading.set_intra_op_parallelism_threads( num_threads ) tf.config.set_soft_device_placement(True) ```
There can be many causes for this; I solved it the following way: set ``` tf.config.threading.set_intra_op_parallelism_threads(<Your_Physical_Core_Count>) tf.config.threading.set_inter_op_parallelism_threads(<Your_Physical_Core_Count>) ``` both to your *physical* core count. You do not want Hyperthreading for highly vectorized operations, as you cannot benefit from parallelized operations when there aren't any gaps. > "With a high level of vectorization, the number of execution gaps is very small and there is possibly insufficient opportunity to make up any penalty due to increased contention in HT." > From: Saini et al., published by the NASA Advanced Supercomputing Division, 2011: The Impact of Hyper-Threading on Processor Resource Utilization in Production Applications EDIT: I am not sure anymore if one of the two has to be 1. But one 100% needs to be set to the physical core count.
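A rough sketch of that advice (assumptions: two logical cores per physical core, which holds for the i9-9900K in the question but should be checked per CPU; the TensorFlow calls are guarded so the snippet also runs where TF is absent):

```python
import os

logical_cores = os.cpu_count() or 1
# Assumption: 2-way Hyper-Threading, i.e. two logical cores per physical core.
physical_cores = max(1, logical_cores // 2)

try:
    import tensorflow as tf
    # These must run before TensorFlow executes any op; afterwards the
    # settings can no longer be changed and TF raises a RuntimeError.
    tf.config.threading.set_intra_op_parallelism_threads(physical_cores)
    tf.config.threading.set_inter_op_parallelism_threads(physical_cores)
except ImportError:
    pass  # TensorFlow not installed in this environment
```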
1,544,535
I have to synchronize two different LDAP servers with different schemas. To make my life easier I'm searching for an object mapper for python like SQLobject/SQLAlchemy, but for LDAP. I found the following packages via pypi and google that might provide such functionality: * **pumpkin 0.1.0-beta1**: Pumpkin is LDAP ORM (without R) for python. * **afpy.ldap 0.3**: This module provide an easy way to deal with ldap stuff in python. * **bda.ldap 1.3.1**: LDAP convenience library. * **Python LDAP Object Mapper**: Provides an ORM-like (Django, Storm, SQLAlchemy, et al.) layer for LDAP in Python. * **ldapdict 1.4**: Python package for connecting to LDAP, returning results as dictionary like classes. Results are cached. Which of these packages could you recommend? Or should I better use something different?
2009/10/09
[ "https://Stackoverflow.com/questions/1544535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/179014/" ]
If I were you I would use either python-ldap or ldaptor. Python-ldap is a wrapper for OpenLDAP, so you may have problems using it on Windows unless you are able to build it from source. Ldaptor is pure python, so you avoid that problem. Also, there is a very well written and graphical description of ldaptor on the website, so you should be able to tell whether or not it will do the job you need just by reading through this web page: <http://eagain.net/talks/ldaptor/>
Giving links to the projects in question would help a lot. Being the developer of [Python LDAP Object Mapper](https://launchpad.net/python-ldap-om), I can tell that it is quite dead at the moment. If you (or anybody else) is up for taking it over, you're welcome :)
1,544,535
I have to synchronize two different LDAP servers with different schemas. To make my life easier I'm searching for an object mapper for python like SQLobject/SQLAlchemy, but for LDAP. I found the following packages via pypi and google that might provide such functionality: * **pumpkin 0.1.0-beta1**: Pumpkin is LDAP ORM (without R) for python. * **afpy.ldap 0.3**: This module provide an easy way to deal with ldap stuff in python. * **bda.ldap 1.3.1**: LDAP convenience library. * **Python LDAP Object Mapper**: Provides an ORM-like (Django, Storm, SQLAlchemy, et al.) layer for LDAP in Python. * **ldapdict 1.4**: Python package for connecting to LDAP, returning results as dictionary like classes. Results are cached. Which of these packages could you recommend? Or should I better use something different?
2009/10/09
[ "https://Stackoverflow.com/questions/1544535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/179014/" ]
A little late maybe... bda.ldap (<http://pypi.python.org/pypi/bda.ldap>) wraps python-ldap in a simpler API than python-ldap itself provides. Further, it transparently handles query caching of results via bda.cache (<http://pypi.python.org/pypi/bda.cache>). Additionally it provides an LDAPNode object for building and editing LDAP trees via a dict-like API. It uses some ZTK stuff as well for integration purposes with the zope framework (primarily via the zodict package in the LDAPNode implementation). We recently released bda.ldap 1.4.0. If you take a look at README.txt#TODO, you see what's missing from our POV to declare the lib as final. Comments are always welcome, Cheers, Robert
Giving links to the projects in question would help a lot. Being the developer of [Python LDAP Object Mapper](https://launchpad.net/python-ldap-om), I can tell that it is quite dead at the moment. If you (or anybody else) is up for taking it over, you're welcome :)
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
@BaptisteM, Instead of using AsyncTask for this you can use Handler and Runnable like this, ``` private void startColorTimer() { mColorHandler.postDelayed(ColorRunnable, interval); timerOn = true; } private Handler mColorHandler = new Handler(); private Runnable ColorRunnable = new Runnable() { @Override public void run() { coloredTextView.setTextColor(getRanColor()); if (timerOn) { mColorHandler.postDelayed(this, interval); } } }; ```
The question has been asked before; I started to answer it, but a moderator closed it, so I put my example here. This example randomly changes the color of a TextView every 1000 ms. The interval can be changed. ``` package com.your.package; import android.graphics.Color; import android.os.AsyncTask; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.widget.TextView; import java.util.Random; public class MainActivity extends AppCompatActivity { TextView coloredTextView; long interval = 1000; //in ms boolean timerOn = false; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); coloredTextView = (TextView) findViewById(R.id.colorTextView); startColorTimer(); } private void startColorTimer(){ new ColorTimer().execute(); timerOn=true; } private void stopColorTimer(){ timerOn=false; } private class ColorTimer extends AsyncTask<Void, Void, Void> { public ColorTimer() { timerOn=true; } @Override protected Void doInBackground(Void... params) { try { Thread.sleep(interval); } catch (InterruptedException e) { e.printStackTrace(); } return null; } @Override protected void onPostExecute(Void aVoid) { super.onPostExecute(aVoid); coloredTextView.setTextColor(getRanColor()); if(timerOn) new ColorTimer().execute(); } private int getRanColor() { Random rand = new Random(); return Color.argb(255, rand.nextInt(256), rand.nextInt(256), rand.nextInt(256)); } @Override protected void finalize() throws Throwable { super.finalize(); timerOn=false; } } } ``` activity_main.xml : ``` <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="fr.rifft.bacasable.MainActivity"> <TextView android:id="@+id/colorTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="8dp" android:layout_marginTop="8dp" android:text="Colored Text View" android:textSize="30sp" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintTop_toTopOf="parent" /> </android.support.constraint.ConstraintLayout> ``` I hope it helps.
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
Try this: you can use `TimerTask`, a task that can be scheduled for one-time or repeated execution by a Timer. **[TimerTask](https://developer.android.com/reference/java/util/TimerTask.html)** ``` Timer timer; timer = new Timer(); timer.scheduleAtFixedRate(new RemindTask(), 0, 3000); // delay private class RemindTask extends TimerTask { int current = viewPager.getCurrentItem(); @Override public void run() { runOnUiThread(new Runnable() { public void run() { coloredTextView.setTextColor(getRanColor()); } }); } } ```
The question has been asked before; I started to answer it, but a moderator closed it, so I put my example here. This example randomly changes the color of a TextView every 1000 ms. The interval can be changed. ``` package com.your.package; import android.graphics.Color; import android.os.AsyncTask; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.widget.TextView; import java.util.Random; public class MainActivity extends AppCompatActivity { TextView coloredTextView; long interval = 1000; //in ms boolean timerOn = false; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); coloredTextView = (TextView) findViewById(R.id.colorTextView); startColorTimer(); } private void startColorTimer(){ new ColorTimer().execute(); timerOn=true; } private void stopColorTimer(){ timerOn=false; } private class ColorTimer extends AsyncTask<Void, Void, Void> { public ColorTimer() { timerOn=true; } @Override protected Void doInBackground(Void... params) { try { Thread.sleep(interval); } catch (InterruptedException e) { e.printStackTrace(); } return null; } @Override protected void onPostExecute(Void aVoid) { super.onPostExecute(aVoid); coloredTextView.setTextColor(getRanColor()); if(timerOn) new ColorTimer().execute(); } private int getRanColor() { Random rand = new Random(); return Color.argb(255, rand.nextInt(256), rand.nextInt(256), rand.nextInt(256)); } @Override protected void finalize() throws Throwable { super.finalize(); timerOn=false; } } } ``` activity_main.xml : ``` <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="fr.rifft.bacasable.MainActivity"> <TextView android:id="@+id/colorTextView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="8dp" android:layout_marginTop="8dp" android:text="Colored Text View" android:textSize="30sp" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintTop_toTopOf="parent" /> </android.support.constraint.ConstraintLayout> ``` I hope it helps.
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
@BaptisteM, Instead of using AsyncTask for this you can use Handler and Runnable like this, ``` private void startColorTimer() { mColorHandler.postDelayed(ColorRunnable, interval); timerOn = true; } private Handler mColorHandler = new Handler(); private Runnable ColorRunnable = new Runnable() { @Override public void run() { coloredTextView.setTextColor(getRanColor()); if (timerOn) { mColorHandler.postDelayed(this, interval); } } }; ```
You can use a Handler class: <https://developer.android.com/reference/android/os/Handler.html#postDelayed(java.lang.Runnable>, long) postDelayed() counts the time in the background and posts the runnable back to the main thread when it's time. Also, you can use recursion: <https://en.wikipedia.org/wiki/Recursion_(computer_science)> ``` final int[] colors = new int[]{ 0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFF00FF, }; final Handler handler = new Handler(); handler.postDelayed(new Runnable() { @Override public void run() { findViewById(R.id.bg_to_change).setBackgroundColor(colors[new Random().nextInt(colors.length)]); handler.postDelayed(this, 500); } }, /*This is your X*/ 500); ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
@BaptisteM, Instead of using AsyncTask for this you can use Handler and Runnable like this, ``` private void startColorTimer() { mColorHandler.postDelayed(ColorRunnable, interval); timerOn = true; } private Handler mColorHandler = new Handler(); private Runnable ColorRunnable = new Runnable() { @Override public void run() { coloredTextView.setTextColor(getRanColor()); if (timerOn) { mColorHandler.postDelayed(this, interval); } } }; ```
You can use TimerTask to do so. start timer from onResume() ``` @Override public void onResume() { super.onResume(); startColor(); } ``` and stop timer from onStop() ``` @Override public void onStop() { super.onStop(); if (timer != null) timer.cancel(); if (task != null) task.cancel(); } ``` And add below code to your activity. ``` TimerTask task; Timer timer; private void startColor() { task = new TimerTask() { @Override public void run() { runOnUiThread(new Runnable() { @Override public void run() { coloredTextView.setBackgroundColor(getRanColor()); } }); } private int getRanColor() { Random rand = new Random(); return Color.argb(255, rand.nextInt(256), rand.nextInt(256), rand.nextInt(256)); } }; timer = new Timer(); timer.scheduleAtFixedRate(task, 0, 1000); } ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
@BaptisteM, Instead of using AsyncTask for this you can use Handler and Runnable like this, ``` private void startColorTimer() { mColorHandler.postDelayed(ColorRunnable, interval); timerOn = true; } private Handler mColorHandler = new Handler(); private Runnable ColorRunnable = new Runnable() { @Override public void run() { coloredTextView.setTextColor(getRanColor()); if (timerOn) { mColorHandler.postDelayed(this, interval); } } }; ```
you can do this: ``` Handler mHandler = new Handler(); Timer timer = new Timer(); timer.scheduleAtFixedRate(new TimerTask() { @Override public void run() { mHandler.post(new Runnable() { @Override public void run() { //change color } }); } }, 0, 5* 1000); // 5s ``` stop it by : ``` timer.cancel(); ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
@BaptisteM, Instead of using AsyncTask for this you can use Handler and Runnable like this, ``` private void startColorTimer() { mColorHandler.postDelayed(ColorRunnable, interval); timerOn = true; } private Handler mColorHandler = new Handler(); private Runnable ColorRunnable = new Runnable() { @Override public void run() { coloredTextView.setTextColor(getRanColor()); if (timerOn) { mColorHandler.postDelayed(this, interval); } } }; ```
Try this: you can use `TimerTask`, a task that can be scheduled for one-time or repeated execution by a Timer. **[TimerTask](https://developer.android.com/reference/java/util/TimerTask.html)** ``` Timer timer; timer = new Timer(); timer.scheduleAtFixedRate(new RemindTask(), 0, 3000); // delay private class RemindTask extends TimerTask { int current = viewPager.getCurrentItem(); @Override public void run() { runOnUiThread(new Runnable() { public void run() { coloredTextView.setTextColor(getRanColor()); } }); } } ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
Try this: you can use `TimerTask`, a task that can be scheduled for one-time or repeated execution by a Timer. **[TimerTask](https://developer.android.com/reference/java/util/TimerTask.html)** ``` Timer timer; timer = new Timer(); timer.scheduleAtFixedRate(new RemindTask(), 0, 3000); // delay private class RemindTask extends TimerTask { int current = viewPager.getCurrentItem(); @Override public void run() { runOnUiThread(new Runnable() { public void run() { coloredTextView.setTextColor(getRanColor()); } }); } } ```
You can use a Handler class: <https://developer.android.com/reference/android/os/Handler.html#postDelayed(java.lang.Runnable>, long) postDelayed() counts the time in the background and posts the runnable back to the main thread when it's time. Also, you can use recursion: <https://en.wikipedia.org/wiki/Recursion_(computer_science)> ``` final int[] colors = new int[]{ 0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFF00FF, }; final Handler handler = new Handler(); handler.postDelayed(new Runnable() { @Override public void run() { findViewById(R.id.bg_to_change).setBackgroundColor(colors[new Random().nextInt(colors.length)]); handler.postDelayed(this, 500); } }, /*This is your X*/ 500); ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
Try this: you can use `TimerTask`, a task that can be scheduled for one-time or repeated execution by a Timer. **[TimerTask](https://developer.android.com/reference/java/util/TimerTask.html)** ``` Timer timer; timer = new Timer(); timer.scheduleAtFixedRate(new RemindTask(), 0, 3000); // delay private class RemindTask extends TimerTask { int current = viewPager.getCurrentItem(); @Override public void run() { runOnUiThread(new Runnable() { public void run() { coloredTextView.setTextColor(getRanColor()); } }); } } ```
You can use TimerTask to do so. start timer from onResume() ``` @Override public void onResume() { super.onResume(); startColor(); } ``` and stop timer from onStop() ``` @Override public void onStop() { super.onStop(); if (timer != null) timer.cancel(); if (task != null) task.cancel(); } ``` And add below code to your activity. ``` TimerTask task; Timer timer; private void startColor() { task = new TimerTask() { @Override public void run() { runOnUiThread(new Runnable() { @Override public void run() { coloredTextView.setBackgroundColor(getRanColor()); } }); } private int getRanColor() { Random rand = new Random(); return Color.argb(255, rand.nextInt(256), rand.nextInt(256), rand.nextInt(256)); } }; timer = new Timer(); timer.scheduleAtFixedRate(task, 0, 1000); } ```
45,321,425
I am using the python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine. How do I run this finite state machine forever? Basically what I want is an fsm model which can stay "idle" when there is no more event to trigger. For example, in example.py: ``` state = ['A', 'B', 'C'] transition = [A->B->C] if __name__ == '__main__': machine = Machine(state, transition, initial='A') print(machine.state) ``` If I run this machine in a python program, it will go into state 'A', print the current state, and the program will then exit immediately. So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
2017/07/26
[ "https://Stackoverflow.com/questions/45321425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368519/" ]
Try this: you can use `TimerTask`, a task that can be scheduled for one-time or repeated execution by a Timer. **[TimerTask](https://developer.android.com/reference/java/util/TimerTask.html)** ``` Timer timer; timer = new Timer(); timer.scheduleAtFixedRate(new RemindTask(), 0, 3000); // delay private class RemindTask extends TimerTask { int current = viewPager.getCurrentItem(); @Override public void run() { runOnUiThread(new Runnable() { public void run() { coloredTextView.setTextColor(getRanColor()); } }); } } ```
you can do this: ``` Handler mHandler = new Handler(); Timer timer = new Timer(); timer.scheduleAtFixedRate(new TimerTask() { @Override public void run() { mHandler.post(new Runnable() { @Override public void run() { //change color } }); } }, 0, 5* 1000); // 5s ``` stop it by : ``` timer.cancel(); ```
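For the Python state-machine question above, the usual answer is indeed a loop: block on an event source and apply triggers as they arrive, so the process stays alive and idles between events. A minimal dependency-free sketch (the `transitions` library's `Machine` and its trigger methods could replace the hand-rolled transition table here):

```python
import queue
import threading

# Hand-rolled transition table: (state, event) -> next state.
TRANSITIONS = {('A', 'go'): 'B', ('B', 'go'): 'C'}

class IdleMachine:
    def __init__(self, initial):
        self.state = initial
        self.events = queue.Queue()

    def run(self):
        while True:
            event = self.events.get()   # blocks: the machine idles here
            if event == 'stop':
                break
            self.state = TRANSITIONS.get((self.state, event), self.state)

m = IdleMachine('A')
worker = threading.Thread(target=m.run)
worker.start()
m.events.put('go')    # A -> B
m.events.put('go')    # B -> C
m.events.put('stop')  # without this, run() would block forever, as desired
worker.join()
print(m.state)  # C
```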
44,753,426
I have a .csv file with two columns of interest, 'latitude' and 'longitude', with populated values. I would like to return [latitude, longitude] pairs for each row of the two columns as lists... [10.222, 20.445] [10.2555, 20.119] ... and so forth for each row of my csv... The problem with ``` import pandas colnames = ['latitude', 'longitude'] data = pandas.read_csv('path_name.csv', names=colnames) latitude = data.latitude.tolist() longitude = data.longitude.tolist() ``` is that it creates two separate lists, one with all the latitude values and one with all the longitude values. How can I create a list for each latitude, longitude pair in python?
2017/06/26
[ "https://Stackoverflow.com/questions/44753426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8124002/" ]
Most basic way ``` import csv with open('filename.txt', 'r') as csvfile: spamreader = csv.reader(csvfile) for row in spamreader: print row ```
So from what I understand, you want many lists of two elements: lat and long. However, what you are receiving is two lists, one of lat and one of long. What I would do is loop over the length of those lists, take the element at each index from the lat/long lists, and put them together in their own list, collecting the pairs as you go: ``` pairs = [] for x in range(len(latitude)): pairs.append([latitude[x], longitude[x]]) ```
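Either answer can be written more compactly with `zip`, which pairs up the two columns element by element (the sample coordinates below are from the question; with the pandas DataFrame from the question you would zip `data.latitude` and `data.longitude` the same way):

```python
latitude = [10.222, 10.2555]
longitude = [20.445, 20.119]

# zip pairs the i-th latitude with the i-th longitude
pairs = [list(p) for p in zip(latitude, longitude)]
print(pairs)  # [[10.222, 20.445], [10.2555, 20.119]]
```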
65,511,540
I am new to this world and I am starting to take my first steps in python. I am trying to extract in a single list the indices of certain values of my list (those that are greater than 10). When using append I get the following error and I don't understand where the error is. ```py dbs = [0, 1, 0, 0, 0, 0, 1, 0, 1, 23, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 20, 1, 1, 15, 1, 0, 0, 0, 40, 15, 0, 0] exceed2 = [] for d, i in enumerate(dbs): if i > 10: exceed2.append= (d,i) print(exceed2) ```
2020/12/30
[ "https://Stackoverflow.com/questions/65511540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14914879/" ]
You probably mean to write ``` for i, d in enumerate(dbs): if d > 10: exceed2.append(i) print(exceed2) ``` Few fixes here: * `append=()` is invalid syntax, you should just write `append()` * the `i, d` values from `enumerate()` are returning the values and indexes. You should be checking `d > 10`, since that's the value (per your description of the task). Then you should be putting only `i` into the `exceed2` array. (I switch the `i` and `d` variables so that `i` is for `index` as that's more conventional) * `append(d,i)` wouldn't work anyway, as `append` takes one argument. If you want to append both the value *and* index, you should use `.append((d, i))`, which will append a tuple of both to the list. * you probably don't want to print `exceed2` every time the condition is hit, when you could just print it once at the end.
Welcome to this world :D the problem is that .append is actually a function that only takes one input, and appends this input to the very end of whatever list you provide. Try this instead: ``` dbs = [0, 1, 0, 0, 0, 0, 1, 0, 1, 23, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 20, 1, 1, 15, 1, 0, 0, 0, 40, 15, 0, 0] exceed2 = [] for d, i in enumerate(dbs): if i > 10: exceed2.append(i) print(exceed2) ```
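Combining the fixes described in both answers (swap the `enumerate` variables and call `append` properly), the corrected snippet from the question runs like this:

```python
dbs = [0, 1, 0, 0, 0, 0, 1, 0, 1, 23, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0,
       20, 1, 1, 15, 1, 0, 0, 0, 40, 15, 0, 0]

exceed2 = []
for i, d in enumerate(dbs):   # i = index, d = value
    if d > 10:
        exceed2.append(i)

print(exceed2)  # [9, 20, 23, 28, 29]
```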
14,320,758
Is there a way to run an arbitrary method whenever a new thread is started in Python (2.7)? My goal is to use [setproctitle](http://pypi.python.org/pypi/setproctitle) to set an appropriate title for each spawned thread.
2013/01/14
[ "https://Stackoverflow.com/questions/14320758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186971/" ]
Just inherit from threading.Thread and use this class instead of Thread - as long as you have control over the Threads. ``` import threading class MyThread(threading.Thread): def __init__(self, callable_, *args, **kwargs): super(MyThread, self).__init__(*args, **kwargs) self._call_on_start = callable_ def start(self): self._call_on_start() super(MyThread, self).start() ``` Just as a coarse sketch. **Edit** From the comments the need arose to kind of "inject" the new behavior into an existing application. Let's assume you have a script that itself imports other libraries. These libraries use the `threading` module: Before importing any other modules, first execute this: ``` import threading import time class MyThread(threading.Thread): _call_on_start = None def __init__(self, callable_ = None, *args, **kwargs): super(MyThread, self).__init__(*args, **kwargs) if callable_ is not None: self._call_on_start = callable_ def start(self): if self._call_on_start is not None: self._call_on_start() # note the call parentheses super(MyThread, self).start() def set_thread_title(): print "Set thread title" # staticmethod, so Python 2 does not turn the class attribute into an unbound method MyThread._call_on_start = staticmethod(set_thread_title) threading.Thread = MyThread def calculate_something(): time.sleep(5) print sum(range(1000)) t = threading.Thread(target = calculate_something) t.start() time.sleep(2) t.join() ``` As subsequent imports only do a lookup in `sys.modules`, all other libraries using this should be using our new class now. I regard this as a hack, and it might have strange side effects. But at least it's worth a try. Please note: `threading.Thread` is not the only way to implement concurrency in python; there are other options like `multiprocessing` etc. These will be unaffected here. **Edit 2** I just took a look at the library you cited and it's all about processes, not Threads! So, just do a :%s/threading/multiprocessing/g and :%s/Thread/Process/g and things should be fine.
Use `threading.setprofile`. You hand it your callback, and it gets installed (via `sys.setprofile`) in every thread the `threading` module starts, so it fires as soon as each new thread begins running code. Documentation [here](https://docs.python.org/2/library/threading.html).
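A minimal, runnable sketch of that (the thread name and bookkeeping list are illustrative): the callback installed with `threading.setprofile` runs inside every thread started through the `threading` module, so it can observe each thread as soon as it starts executing.

```python
import threading

started = []  # thread names seen by the profile hook

def on_thread_event(frame, event, arg):
    # Installed via sys.setprofile in each newly started thread; fires on
    # profiling events (function calls), i.e. as soon as the thread runs code.
    name = threading.current_thread().name
    if name not in started:
        started.append(name)

threading.setprofile(on_thread_event)

t = threading.Thread(target=lambda: sum(range(10)), name="worker")
t.start()
t.join()

threading.setprofile(None)  # uninstall the hook again
print(started)
```

Note the hook is only installed in threads started after the `setprofile` call; the main thread is unaffected.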
40,879,394
I am new to python and pydev. I have tensorflow source and am able to run the example files using python3 /pathtoexamplefile.py. I want to try to step thru the word2vec\_basic.py code inside pydev. The debugger keeps throwing File "/Users/me/workspace/tensorflow/tensorflow/python/`__init__.py`", line 45, in from tensorflow.python import pywrap\_tensorflow ImportError: cannot import name 'pywrap\_tensorflow' I think it has something to do with the working directory. I am able to run python3 -c 'import tensorflow' from my home directory. But, once I enter /Users/me/workspace/tensorflow, the command throws the same error, referencing the same line 45. Can someone help me thru this part? Thank you. [![enter image description here](https://i.stack.imgur.com/ii5yg.png)](https://i.stack.imgur.com/ii5yg.png)
2016/11/30
[ "https://Stackoverflow.com/questions/40879394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1058511/" ]
``` SELECT StockNo FROM sales GROUP BY StockNo HAVING SUM(CASE WHEN DATE_FORMAT(Date, '%Y-%m') = '2016-11' THEN 1 ELSE 0 END) > 0 ``` If you also want to retrieve the full records for those matching stock numbers in the above query, you can just add a join: ``` SELECT s1.* FROM sales s1 INNER JOIN ( SELECT StockNo FROM sales GROUP BY StockNo HAVING SUM(CASE WHEN DATE_FORMAT(Date, '%Y-%m') = '2016-11' THEN 1 ELSE 0 END) > 0 ) s2 ON s1.StockNo = s2.StockNo ``` **Demo here:** [SQLFiddle](http://sqlfiddle.com/#!9/8a539/3)
Thank you very much Tim for pointing me in the right direction. Your answer was close but it still only returned records from the current month and in the end I used the following query: ``` SELECT s1.* FROM `sales` s1 INNER JOIN ( SELECT * FROM `sales` GROUP BY `StockNo` HAVING COUNT(`StockNo`) > 1 AND SUM(CASE WHEN DATE_FORMAT(`Date`, '%Y-%m')='2016-11' THEN 1 ELSE 0 END) > 0 ) s2 ON s1.StockNo=s2.StockNo ``` This one had been eluding me for some time.
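The month-bucketing trick in the queries above can be checked quickly with SQLite (`strftime` standing in for MySQL's `DATE_FORMAT`; the table and rows are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (StockNo TEXT, Date TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    ("A1", "2016-11-05"),  # sold in the target month
    ("A1", "2016-09-01"),  # and earlier, too
    ("B2", "2016-09-15"),  # never sold in 2016-11
])

# Same pattern: keep stock numbers whose group contains at least
# one sale in the target month.
rows = conn.execute("""
    SELECT StockNo FROM sales
    GROUP BY StockNo
    HAVING SUM(CASE WHEN strftime('%Y-%m', Date) = '2016-11'
               THEN 1 ELSE 0 END) > 0
""").fetchall()
print(rows)  # -> [('A1',)]
```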
73,027,674
I have a Vertex AI notebook that contains a lot of Python scripts and Jupyter notebooks, as well as pickled data files. I need to move these files to another notebook. There isn't a lot of documentation on Google's help center. Has someone had to do this yet? I'm new to GCP.
2022/07/18
[ "https://Stackoverflow.com/questions/73027674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5917787/" ]
Can you try the steps in this [article](https://cloud.google.com/vertex-ai/docs/workbench/user-managed/migrate)? It says you can copy your files to a [Google Cloud Storage bucket](https://cloud.google.com/storage/) and then move them to a new notebook using the gsutil tool. In your notebook's terminal, run this command to copy the files to your Google Cloud Storage bucket: ``` gsutil cp -R /home/jupyter/* gs://BUCKET_NAME/PATH ``` Then open a terminal on the target notebook and run this command to copy the directory into the notebook: ``` gsutil cp gs://BUCKET_NAME/PATH/* /home/jupyter/ ``` Just change `BUCKET_NAME` to the name of your Cloud Storage bucket and `PATH` to the folder inside it.
I'm assuming that both notebooks are on the same GC project and that you have the same permissions on both, ok? There are many ways to do that... Listing some here: 1. The hardest to execute, but the simplest by concept: You can download everything for your computer/workstation from the original notebook instance, then go to the second notebook instance and upload everything 2. You can use [Google Cloud Storage](https://cloud.google.com/storage/), the object storage service to be used as the medium for the files movement. To do that you need to (1) [create a storage bucket](https://cloud.google.com/storage/docs/creating-buckets), (2) then using your notebook instance terminal, [copy the data from the instance to the bucket](https://cloud.google.com/storage/docs/uploading-objects), (3) and finally use the console on the target notebook instance and [copy the data from the bucket to the instance](https://cloud.google.com/storage/docs/downloading-objects)
1,997,327
Given this Python code: ``` import webbrowser webbrowser.open("http://slashdot.org",new=0) webbrowser.open("http://cnn.com",new=0) ``` I would expect a browser to open up, load the first website, then load the second website *in the same window*. However, it opens up in a new window (or new tab depending on which browser I'm using). Tried on Mac OSX with Safari, Firefox and Chrome and on Ubuntu with Firefox. I'm inclined to believe that *new=0* isn't honored. Am I just missing something? tia,
2010/01/04
[ "https://Stackoverflow.com/questions/1997327", "https://Stackoverflow.com", "https://Stackoverflow.com/users/83879/" ]
Note that the documentation specifically avoids guarantees with the language *if possible*: <http://docs.python.org/library/webbrowser.html#webbrowser.open> Most browser settings by default specify tab behavior and will not allow Python to override it. I have seen it in the past using Firefox and tried your example on Chrome to the same effect. On Windows, it is not possible to specify the tab behavior at all, as suggested by my comment below. The url opening code ignores `new`: ``` if sys.platform[:3] == "win": class WindowsDefault(BaseBrowser): def open(self, url, new=0, autoraise=True): try: os.startfile(url) ```
I added a delay between successive invocations of `webbrowser.open()`. Then each was opened in a new tab instead of a separate window (on my Windows 10 machine). ```py import time ... time.sleep(0.5) ```
27,214,901
Please consider the following short Python 2.x script: ``` #!/usr/bin/env python class A(object): class B(object): class C(object): pass def __init__(self): self.c = A.B.C() def __init__(self): self.b = A.B() def main(): a = A() print "%s: %r" % (type(a).__name__, type(a)) print "%s: %r" % (type(a.b).__name__, type(a.b)) print "%s: %r" % (type(a.b.c).__name__, type(a.b.c)) if __name__ == "__main__": main() ``` The output of which, when run in Python 2.7.6, is: ``` A: <class '__main__.A'> B: <class '__main__.B'> C: <class '__main__.C'> ``` --- I was expecting a different output here. Something more along the lines of: ``` A: <class '__main__.A'> A.B: <class '__main__.A.B'> A.B.C: <class '__main__.A.B.C'> ``` In particular I expected to see the same qualified name that I have to give to instantiate `A.B` and `A.B.C` classes respectively. Could anyone shed any light on why those new type classes identify themselves as rooted in `__main__` instead of how they were nested in the code? Also: is there a way to fix this by naming the nested classes explicitly, such that they will identify themselves as `A.B` and `A.B.C` respectively (or possibly in the type representation as `__main__.A.B` and `__main__.A.B.C` respectively)?
2014/11/30
[ "https://Stackoverflow.com/questions/27214901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476371/" ]
Here are two demonstrative programs, one for C++ 2003 and the other for C++ 2011, that do the search. **C++ 2003** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> #include <functional> struct FindName : std::unary_function<const std::vector<std::string> &, bool> { FindName( const std::pair<std::string, std::string> &p ) : p( p ){} bool operator ()( const std::vector<std::string> &v ) const { return v.size() > 1 && v[0] == p.first && v[1] == p.second; } protected: const std::pair<std::string, std::string> p; }; int main() { const size_t N = 5; std::vector<std::vector<std::string> > v; v.reserve( N ); const char * initial[N][3] = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; for ( size_t i = 0; i < N; i++ ) { v.push_back( std::vector<std::string>( initial[i], initial[i] + 3 ) ); } std::pair<std::string, std::string> p( "Joan", "Williams" ); typedef std::vector<std::vector<std::string> >::iterator iterator; iterator it = std::find_if( v.begin(), v.end(), FindName( p ) ); if ( it != v.end() ) { for ( std::vector<std::string>::size_type i = 0; i < it->size(); ++i ) { std::cout << ( *it )[i] << ' '; } } std::cout << std::endl; } ``` **C++ 2011** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> int main() { std::vector<std::vector<std::string>> v = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; std::pair<std::string, std::string> p( "Joan", "Williams" ); auto it = std::find_if( v.begin(), v.end(), [&]( const std::vector<std::string> &row ) { return row.size() > 1 && row[0] == p.first && row[1] == p.second; } ); if ( it != v.end() ) { for ( const auto &s : *it ) std::cout << s << ' '; } std::cout << std::endl; } ``` Both programs' output is Joan Williams 30
I strongly advise you to use a data structure with an overloaded equality operator instead of `vector<string>` (especially since it seems like the third element should be stored as an integer, not a string). Anyway, this is one possibility: ``` auto iter = std::find_if( std::begin(a_words), std::end(a_words), [] (std::vector<std::string> const& vec) { return vec[0] == "Joan" && vec[1] == "Williams"; }); ``` If the list is lexicographically sorted by the first or second column, a binary search can be used instead.
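For comparison, the same first-match search is a one-liner in Python with `next()` over a generator (the data mirrors the C++ example):

```python
rows = [
    ["Joan", "Williams", "30"],
    ["Mike", "Williams", "40"],
    ["Joan", "Smith", "30"],
]

# First row matching both name columns, or None if there is no match.
match = next((r for r in rows if r[0] == "Joan" and r[1] == "Williams"), None)
print(match)  # -> ['Joan', 'Williams', '30']
```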
27,214,901
Please consider the following short Python 2.x script: ``` #!/usr/bin/env python class A(object): class B(object): class C(object): pass def __init__(self): self.c = A.B.C() def __init__(self): self.b = A.B() def main(): a = A() print "%s: %r" % (type(a).__name__, type(a)) print "%s: %r" % (type(a.b).__name__, type(a.b)) print "%s: %r" % (type(a.b.c).__name__, type(a.b.c)) if __name__ == "__main__": main() ``` The output of which, when run in Python 2.7.6, is: ``` A: <class '__main__.A'> B: <class '__main__.B'> C: <class '__main__.C'> ``` --- I was expecting a different output here. Something more along the lines of: ``` A: <class '__main__.A'> A.B: <class '__main__.A.B'> A.B.C: <class '__main__.A.B.C'> ``` In particular I expected to see the same qualified name that I have to give to instantiate `A.B` and `A.B.C` classes respectively. Could anyone shed any light on why those new type classes identify themselves as rooted in `__main__` instead of how they were nested in the code? Also: is there a way to fix this by naming the nested classes explicitly, such that they will identify themselves as `A.B` and `A.B.C` respectively (or possibly in the type representation as `__main__.A.B` and `__main__.A.B.C` respectively)?
2014/11/30
[ "https://Stackoverflow.com/questions/27214901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476371/" ]
Here are two demonstrative programs, one for C++ 2003 and the other for C++ 2011, that do the search. **C++ 2003** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> #include <functional> struct FindName : std::unary_function<const std::vector<std::string> &, bool> { FindName( const std::pair<std::string, std::string> &p ) : p( p ){} bool operator ()( const std::vector<std::string> &v ) const { return v.size() > 1 && v[0] == p.first && v[1] == p.second; } protected: const std::pair<std::string, std::string> p; }; int main() { const size_t N = 5; std::vector<std::vector<std::string> > v; v.reserve( N ); const char * initial[N][3] = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; for ( size_t i = 0; i < N; i++ ) { v.push_back( std::vector<std::string>( initial[i], initial[i] + 3 ) ); } std::pair<std::string, std::string> p( "Joan", "Williams" ); typedef std::vector<std::vector<std::string> >::iterator iterator; iterator it = std::find_if( v.begin(), v.end(), FindName( p ) ); if ( it != v.end() ) { for ( std::vector<std::string>::size_type i = 0; i < it->size(); ++i ) { std::cout << ( *it )[i] << ' '; } } std::cout << std::endl; } ``` **C++ 2011** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> int main() { std::vector<std::vector<std::string>> v = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; std::pair<std::string, std::string> p( "Joan", "Williams" ); auto it = std::find_if( v.begin(), v.end(), [&]( const std::vector<std::string> &row ) { return row.size() > 1 && row[0] == p.first && row[1] == p.second; } ); if ( it != v.end() ) { for ( const auto &s : *it ) std::cout << s << ' '; } std::cout << std::endl; } ``` Both programs' output is Joan Williams 30
As of C++11, a range based for loop would be a simple and readable solution: ``` for(auto r: a_words) if(r[0] == "Joan" && r[1] == "Williams") cout << r[0] << " " << r[1] << " " << r[2] << endl; ```
27,214,901
Please consider the following short Python 2.x script: ``` #!/usr/bin/env python class A(object): class B(object): class C(object): pass def __init__(self): self.c = A.B.C() def __init__(self): self.b = A.B() def main(): a = A() print "%s: %r" % (type(a).__name__, type(a)) print "%s: %r" % (type(a.b).__name__, type(a.b)) print "%s: %r" % (type(a.b.c).__name__, type(a.b.c)) if __name__ == "__main__": main() ``` The output of which, when run in Python 2.7.6, is: ``` A: <class '__main__.A'> B: <class '__main__.B'> C: <class '__main__.C'> ``` --- I was expecting a different output here. Something more along the lines of: ``` A: <class '__main__.A'> A.B: <class '__main__.A.B'> A.B.C: <class '__main__.A.B.C'> ``` In particular I expected to see the same qualified name that I have to give to instantiate `A.B` and `A.B.C` classes respectively. Could anyone shed any light on why those new type classes identify themselves as rooted in `__main__` instead of how they were nested in the code? Also: is there a way to fix this by naming the nested classes explicitly, such that they will identify themselves as `A.B` and `A.B.C` respectively (or possibly in the type representation as `__main__.A.B` and `__main__.A.B.C` respectively)?
2014/11/30
[ "https://Stackoverflow.com/questions/27214901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476371/" ]
Here are two demonstrative programs, one for C++ 2003 and the other for C++ 2011, that do the search. **C++ 2003** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> #include <functional> struct FindName : std::unary_function<const std::vector<std::string> &, bool> { FindName( const std::pair<std::string, std::string> &p ) : p( p ){} bool operator ()( const std::vector<std::string> &v ) const { return v.size() > 1 && v[0] == p.first && v[1] == p.second; } protected: const std::pair<std::string, std::string> p; }; int main() { const size_t N = 5; std::vector<std::vector<std::string> > v; v.reserve( N ); const char * initial[N][3] = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; for ( size_t i = 0; i < N; i++ ) { v.push_back( std::vector<std::string>( initial[i], initial[i] + 3 ) ); } std::pair<std::string, std::string> p( "Joan", "Williams" ); typedef std::vector<std::vector<std::string> >::iterator iterator; iterator it = std::find_if( v.begin(), v.end(), FindName( p ) ); if ( it != v.end() ) { for ( std::vector<std::string>::size_type i = 0; i < it->size(); ++i ) { std::cout << ( *it )[i] << ' '; } } std::cout << std::endl; } ``` **C++ 2011** ``` #include <iostream> #include <string> #include <vector> #include <algorithm> #include <utility> int main() { std::vector<std::vector<std::string>> v = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" } }; std::pair<std::string, std::string> p( "Joan", "Williams" ); auto it = std::find_if( v.begin(), v.end(), [&]( const std::vector<std::string> &row ) { return row.size() > 1 && row[0] == p.first && row[1] == p.second; } ); if ( it != v.end() ) { for ( const auto &s : *it ) std::cout << s << ' '; } std::cout << std::endl; } ``` Both programs' output is Joan Williams 30
Essentially the same as @Columbo's answer, but avoiding most C++11 features (apart from the initialization): ``` #include <algorithm> #include <iostream> #include <string> #include <vector> int main() { // Requires C++11 std::vector<std::vector<std::string>> words = { { "Joan", "Williams", "30" }, { "Mike", "Williams", "40" }, { "Joan", "Smith", "30" }, { "William", "Anderson", "20" }, { "Sara", "Jon", "33" }, }; // Mostly pre-C++11 style below (note: passing a local class to a template still requires C++11) struct EqualName { const char* first; const char* second; EqualName(const char* first, const char* second) : first(first), second(second) {} bool operator () (const std::vector<std::string>& element) { return element[0] == first && element[1] == second; } }; std::vector<std::vector<std::string>>::const_iterator pos = std::find_if(words.begin(), words.end(), EqualName("Joan", "Smith")); if(pos != words.end()) std::cout << (*pos)[0] << ' ' << (*pos)[1] << ' ' << (*pos)[2] << '\n'; } ```
24,804,667
I'm trying to wrap a C library for python using SWIG. I'm on a linux 64-bit system (Gentoo) using the standard system toolchain. The library (SUNDIALS) is installed on my system with shared libraries in `/usr/local/lib` My interface file is simple (to start with) ``` %module nvecserial %{ #include "sundials/sundials_config.h" #include "sundials/sundials_types.h" #include "sundials/sundials_nvector.h" #include "nvector/nvector_serial.h" %} %include "sundials/sundials_config.h" %include "sundials/sundials_types.h" %include "sundials/sundials_nvector.h" %include "nvector/nvector_serial.h" ``` Given the interface file above, I run ``` $ swig -python -I/usr/local/include nvecserial.i $ gcc -O2 -fPIC -I/usr/include/python2.7 -c nvecserial_wrap.c $ gcc -shared /usr/local/lib/libsundials_nvecserial.so nvecserial_wrap.o -o _nvecserial.so $ python -c "import nvecserial" Traceback (most recent call last): File "<string>", line 1, in <module> File "nvecserial.py", line 28, in <module> _nvecserial = swig_import_helper() File "nvecserial.py", line 24, in swig_import_helper _mod = imp.load_module('_nvecserial', fp, pathname, description) ImportError: ./_nvecserial.so: undefined symbol: N_VLinearSum ``` A little digging to double check things shows ``` $ objdump -t /usr/local/lib/libsundials_nvecserial.so |grep Linear 0000000000001cf0 g F .text 00000000000002e4 N_VLinearSum_Serial $ objdump -t _nvecserial.so |grep Linear 00000000000097e0 l F .text 0000000000000234 _wrap_N_VLinearSum 000000000000cd10 l F .text 0000000000000234 _wrap_N_VLinearSum_Serial 0000000000000000 *UND* 0000000000000000 N_VLinearSum 0000000000000000 F *UND* 0000000000000000 N_VLinearSum_Serial ``` As far as I can tell, N\_VLinearSum is a wrapper around N\_VLinearSum\_Serial (there's a parallel implementation too, so presumably, N\_VLinearSum in nvecparallel would wrap N\_VLinearSum\_Parallel). Where I'm lost though is what to do next.
Is this a problem with my interface definition, or a problem with my compilation?
2014/07/17
[ "https://Stackoverflow.com/questions/24804667", "https://Stackoverflow.com", "https://Stackoverflow.com/users/184986/" ]
Well, I've got it working by linking in an extra library. It seems `libsundials_nvecserial.so` and brethren don't contain the symbol N\_VLinearSum. The SUNDIALS make process places functions and symbols from `sundials_nvector.h` into different .so files, somewhat counterintuitively. For now, I got this working with ``` $ gcc -shared -L/usr/local/lib nvecserial_wrap.o -o _nvecserial.so\ -lsundials_nvecserial -lsundials_cvode $ python -c "import nvecserial" $ ``` I'll continue playing around with actual .o files from the source distribution, but considering the intent to distribute the wrapped module eventually using distutils, and that not everyone will have access to the SUNDIALS source on their systems, I'll probably stick with linking in the extra shared library.
Instead of ``` gcc -shared /usr/local/lib/libsundials_nvecserial.so nvecserial_wrap.o -o _nvecserial.so ``` try ``` gcc -shared -L/usr/local/lib nvecserial_wrap.o -o _nvecserial.so -lsundials_nvecserial ``` The -l should be at the end; otherwise the library may not be searched for symbols. This is explained in the ld man page.
38,791,685
I want to generate a single executable file from my python script. For this I use pyinstaller. I had issues with mkl libraries because I use numpy in the script. I used this [hook](https://github.com/pyinstaller/pyinstaller/issues/1881 "hook") to solve the issue, and it worked fine. But it does not work if I copy the single executable file to another directory and execute it. I guess I have to copy the hook also. But I just want to have one single file that I can use on other computers without copying `.dll's` or the hook. I also changed the `.spec` file as described [here](https://pythonhosted.org/PyInstaller/spec-files.html) and added the necessary files to the `binaries` variable. That also works as long as the `.dll's` are in the directory provided to the `binaries` variable, but that won't work when I use the executable on a computer that doesn't have these `.dll's`. I tried using the `--hidden-import= FILENAME` option. This also solves the issue, but just when the `.dll's` are provided somewhere. What I'm looking for is a possibility to bundle the `.dll's` into the single executable file so that I have one file that works independently.
2016/08/05
[ "https://Stackoverflow.com/questions/38791685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5521383/" ]
When I faced problem described here <https://github.com/ContinuumIO/anaconda-issues/issues/443> my workaround was `pyinstaller -F --add-data vcruntime140.dll;. myscript.py` `-F` - collect into one *\*.exe* file `.` - Destination path of dll in exe file from docs <http://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-data-files>
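For the `.spec`-file route that the question mentions, the same DLL can be listed in the `binaries` argument instead of on the command line — a fragment sketch with illustrative file names (`binaries` takes `(source, destination-in-bundle)` pairs):

```python
# myscript.spec -- excerpt, not a complete spec file
a = Analysis(
    ['myscript.py'],
    binaries=[('vcruntime140.dll', '.')],  # bundle the DLL at the bundle's top level
)
```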
As the selected answer didn't work for the case of using **libportaudio64bit.dll**, I put my working solution here. For me, the working solution is to add **\_sounddevice\_data** folder where the .exe file is located then making a **portaudio-binaries** folder in it and finally putting **libportaudio64bit.dll** in the recently created folder. Hope it helps!
38,791,685
I want to generate a single executable file from my python script. For this I use pyinstaller. I had issues with mkl libraries because I use numpy in the script. I used this [hook](https://github.com/pyinstaller/pyinstaller/issues/1881 "hook") to solve the issue, and it worked fine. But it does not work if I copy the single executable file to another directory and execute it. I guess I have to copy the hook also. But I just want to have one single file that I can use on other computers without copying `.dll's` or the hook. I also changed the `.spec` file as described [here](https://pythonhosted.org/PyInstaller/spec-files.html) and added the necessary files to the `binaries` variable. That also works as long as the `.dll's` are in the directory provided to the `binaries` variable, but that won't work when I use the executable on a computer that doesn't have these `.dll's`. I tried using the `--hidden-import= FILENAME` option. This also solves the issue, but just when the `.dll's` are provided somewhere. What I'm looking for is a possibility to bundle the `.dll's` into the single executable file so that I have one file that works independently.
2016/08/05
[ "https://Stackoverflow.com/questions/38791685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5521383/" ]
When I faced problem described here <https://github.com/ContinuumIO/anaconda-issues/issues/443> my workaround was `pyinstaller -F --add-data vcruntime140.dll;. myscript.py` `-F` - collect into one *\*.exe* file `.` - Destination path of dll in exe file from docs <http://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-data-files>
Add the current project folder to Path, then create the EXE using the following command: ``` pyinstaller --add-binary AutoItX3_x64.dll;. program_name.py ``` Create the folder `\dist\program_name\autoit\lib` in the current project folder, and paste `AutoItX3_x64.dll` into it.
38,791,685
I want to generate a single executable file from my python script. For this I use pyinstaller. I had issues with mkl libraries because I use numpy in the script. I used this [hook](https://github.com/pyinstaller/pyinstaller/issues/1881 "hook") to solve the issue, and it worked fine. But it does not work if I copy the single executable file to another directory and execute it. I guess I have to copy the hook also. But I just want to have one single file that I can use on other computers without copying `.dll's` or the hook. I also changed the `.spec` file as described [here](https://pythonhosted.org/PyInstaller/spec-files.html) and added the necessary files to the `binaries` variable. That also works as long as the `.dll's` are in the directory provided to the `binaries` variable, but that won't work when I use the executable on a computer that doesn't have these `.dll's`. I tried using the `--hidden-import= FILENAME` option. This also solves the issue, but just when the `.dll's` are provided somewhere. What I'm looking for is a possibility to bundle the `.dll's` into the single executable file so that I have one file that works independently.
2016/08/05
[ "https://Stackoverflow.com/questions/38791685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5521383/" ]
When I faced problem described here <https://github.com/ContinuumIO/anaconda-issues/issues/443> my workaround was `pyinstaller -F --add-data vcruntime140.dll;. myscript.py` `-F` - collect into one *\*.exe* file `.` - Destination path of dll in exe file from docs <http://pyinstaller.readthedocs.io/en/stable/spec-files.html#adding-data-files>
Here is a modified version of Ilya's answer. `pyinstaller --onefile --add-binary ".venv/Lib/site-packages/example_package/example.dll;." myscript.py` It wasn't clear to me when first stumbling into this issue that you must tell PyInstaller exactly where to find the given file (either via relative or absolute path) if it is not already on your `PATH`. I have more discussion on how I found exactly which DLL was missing in [this answer](https://stackoverflow.com/a/69981523/1437625) to a similar question. I much prefer this solution to manually copying DLLs into the export directory, since single EXE is better for distributing utilities to non-programmers.
38,791,685
I want to generate a single executable file from my python script. For this I use pyinstaller. I had issues with mkl libraries because I use numpy in the script. I used this [hook](https://github.com/pyinstaller/pyinstaller/issues/1881 "hook") to solve the issue, and it worked fine. But it does not work if I copy the single executable file to another directory and execute it. I guess I have to copy the hook also. But I just want to have one single file that I can use on other computers without copying `.dll's` or the hook. I also changed the `.spec` file as described [here](https://pythonhosted.org/PyInstaller/spec-files.html) and added the necessary files to the `binaries` variable. That also works as long as the `.dll's` are in the directory provided to the `binaries` variable, but that won't work when I use the executable on a computer that doesn't have these `.dll's`. I tried using the `--hidden-import= FILENAME` option. This also solves the issue, but just when the `.dll's` are provided somewhere. What I'm looking for is a possibility to bundle the `.dll's` into the single executable file so that I have one file that works independently.
2016/08/05
[ "https://Stackoverflow.com/questions/38791685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5521383/" ]
Add the current project folder to Path, then create the EXE using the following command: ``` pyinstaller --add-binary AutoItX3_x64.dll;. program_name.py ``` Create the folder `\dist\program_name\autoit\lib` in the current project folder, and paste `AutoItX3_x64.dll` into it.
As the selected answer didn't work for the case of using **libportaudio64bit.dll**, I put my working solution here. For me, the working solution is to add **\_sounddevice\_data** folder where the .exe file is located then making a **portaudio-binaries** folder in it and finally putting **libportaudio64bit.dll** in the recently created folder. Hope it helps!
38,791,685
I want to generate a single executable file from my python script. For this I use pyinstaller. I had issues with mkl libraries because I use numpy in the script. I used this [hook](https://github.com/pyinstaller/pyinstaller/issues/1881 "hook") to solve the issue, and it worked fine. But it does not work if I copy the single executable file to another directory and execute it. I guess I have to copy the hook also. But I just want to have one single file that I can use on other computers without copying `.dll's` or the hook. I also changed the `.spec` file as described [here](https://pythonhosted.org/PyInstaller/spec-files.html) and added the necessary files to the `binaries` variable. That also works as long as the `.dll's` are in the directory provided to the `binaries` variable, but that won't work when I use the executable on a computer that doesn't have these `.dll's`. I tried using the `--hidden-import= FILENAME` option. This also solves the issue, but just when the `.dll's` are provided somewhere. What I'm looking for is a possibility to bundle the `.dll's` into the single executable file so that I have one file that works independently.
2016/08/05
[ "https://Stackoverflow.com/questions/38791685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5521383/" ]
Add the current project folder to Path, then create the EXE using the following command: ``` pyinstaller --add-binary AutoItX3_x64.dll;. program_name.py ``` Create the folder `\dist\program_name\autoit\lib` in the current project folder, and paste `AutoItX3_x64.dll` into it.
Here is a modified version of Ilya's answer. `pyinstaller --onefile --add-binary ".venv/Lib/site-packages/example_package/example.dll;." myscript.py` It wasn't clear to me when first stumbling into this issue that you must tell PyInstaller exactly where to find the given file (either via relative or absolute path) if it is not already on your `PATH`. I have more discussion on how I found exactly which DLL was missing in [this answer](https://stackoverflow.com/a/69981523/1437625) to a similar question. I much prefer this solution to manually copying DLLs into the export directory, since single EXE is better for distributing utilities to non-programmers.
10,550,870
I have some pickled data, which is stored on disk, and it is about 100 MB in size. When my Python program is executed, the pickled data is loaded using the `cPickle` module, and all that works fine. If I execute the program multiple times using `python main.py`, for example, each Python process will load the same data multiple times, which is the correct behaviour. How can I make it so that all new Python processes share this data, so it is only loaded a single time into memory?
2012/05/11
[ "https://Stackoverflow.com/questions/10550870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/406930/" ]
If you're on Unix, one possibility is to load the data into memory, and then have the script use [`os.fork()`](http://docs.python.org/library/os.html#os.fork) to create a bunch of sub-processes. As long as the sub-processes don't attempt to *modify* the data, they would automatically share the parent's copy of it, without using any additional memory. Unfortunately, this won't work on Windows. P.S. I [once asked](https://stackoverflow.com/questions/6270849/placing-python-objects-in-shared-memory) about placing Python objects into shared memory, but that didn't produce any easy solutions.
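A Unix-only sketch of that approach (the list here just stands in for the unpickled 100 MB object):

```python
import os

data = list(range(100_000))  # expensive load happens once, in the parent

pid = os.fork()
if pid == 0:
    # Child process: shares the parent's pages copy-on-write, so the data
    # is readable without being loaded (or copied) a second time.
    ok = data[42] == 42
    os._exit(0 if ok else 1)

# Parent: wait for the child and read its exit status.
_, status = os.waitpid(pid, 0)
child_exit = os.WEXITSTATUS(status)
print(child_exit)  # -> 0, i.e. the child saw the parent's data
```

As the answer notes, this stays cheap only while the children treat the data as read-only; any writes trigger copy-on-write page duplication.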
Depending on how seriously you need to solve this problem, you may want to look at memcached, if that is not overkill.
41,612,654
I got an error after I modified the User model in Django. When I was going to create a superuser, it didn't prompt for a username; instead it skipped it. However, the object property `username` is still required, causing the user creation to fail. ``` import jwt from django.db import models from django.contrib.auth.models import (AbstractBaseUser, BaseUserManager, PermissionsMixin) from datetime import datetime, timedelta from django.conf import settings # from api.core.models import TimeStampedModel class AccountManager(BaseUserManager): def create_user(self, username, email, password=None, **kwargs): if not email: raise ValueError('Please provide a valid email address') if not kwargs.get('username'): # username = 'what' raise ValueError('User must have a username') account = self.model( email=self.normalize_email(email), username=kwargs.get('username')) account.set_password(password) account.save() return account def create_superuser(self,username, email, password, **kwargs): account = self.create_user(username, email, password, **kwargs) account.is_admin = True account.save() return account class Account(AbstractBaseUser): email = models.EmailField(unique=True) username = models.CharField(max_length=40, unique=True) first_name = models.CharField(max_length=40) last_name = models.CharField(max_length =40, blank=True) tagline = models.CharField(max_length=200, blank=True) is_admin = models.BooleanField(default=False) USERNAME_FIELD = 'email' REQUIRED_FIELD = ['username'] objects = AccountManager() def __str__(self): return self.username @property def token(self): return generate_jwt_token() def generate_jwt_token(self): dt = datetime.now + datetime.timedelta(days=60) token = jwt.encode({ 'id': self.pk, 'exp': int(dt.strftime('%s')) }, settings.SECRET_KEY, algorithm='HS256') return token.decode('utf-8') def get_full_name(self): return ' '.join(self.first_name, self.last_name) def get_short_name(self): return self.username ``` it results in this: ``` manage.py 
createsuperuser Running 'python /home/gema/A/PyProject/janet_x/janet/manage.py createsuperuser' command: Email: admin@nister.com Password: Password (again): Traceback (most recent call last): File "/home/gema/A/PyProject/janet_x/janet/manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/core/management/base.py", line 294, in run_from_argv self.execute(*args, **cmd_options) File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 63, in execute return super(Command, self).execute(*args, **options) File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/core/management/base.py", line 345, in execute output = self.handle(*args, **options) File "/home/gema/.virtualenvs/JANET/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 183, in handle self.UserModel._default_manager.db_manager(database).create_superuser(**user_data) TypeError: create_superuser() missing 1 required positional argument: 'username' ``` When I use the shell : ``` Account.objects.create(u'administer', u'admin@ister.com', u'password123') ``` it returns : ``` return getattr(self.get_queryset(), name)(*args, **kwargs) TypeError: create() takes 1 positional argument but 3 were given ``` What could possibly be wrong ? Thank you.
2017/01/12
[ "https://Stackoverflow.com/questions/41612654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3465227/" ]
`REQUIRED_FIELD` should be `REQUIRED_FIELDS` (plural), otherwise you won't be prompted for a username (or any other required fields) because Django did not find anything in `REQUIRED_FIELDS`. As an example, I use this UserManager in one of my projects: ``` class UserManager(BaseUserManager): def create_user(self, email, password=None, first_name=None, last_name=None, **extra_fields): if not email: raise ValueError('Enter an email address') if not first_name: raise ValueError('Enter a first name') if not last_name: raise ValueError('Enter a last name') email = self.normalize_email(email) user = self.model(email=email, first_name=first_name, last_name=last_name, **extra_fields) user.set_password(password) user.save(using=self._db) return user def create_superuser(self, email, password, first_name, last_name): user = self.create_user(email, password=password, first_name=first_name, last_name=last_name) user.is_superuser = True user.is_staff = True user.save(using=self._db) return user ``` `USERNAME_FIELD` is set to `email` and `REQUIRED_FIELDS` to `('first_name', 'last_name')`. You should be able to adapt this example to fit your needs.
This bit doesn't make sense: ``` USERNAME_FIELD = 'email' REQUIRED_FIELD = ['username'] ``` Why have you set `USERNAME_FIELD` to "email"? Surely it should be "username".
48,617,779
I am receiving the error: `ImportError: No module named MySQLdb` whenever I try to run my local dev server and it is driving me crazy. I have tried everything I could find online: 1. `brew install mysql` 2. `pip install mysqldb` 3. `pip install mysql` 4. `pip install mysql-python` 5. `pip install MySQL-python` 6. `easy_install mysql-python` 7. `easy_install MySQL-python` 8. `pip install mysqlclient` I am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running `source env/bin/activate` in the directory where my project files are and am installing all dependencies there as well. My path looks like this: `/usr/local/bin/python:/usr/local/mysql/bin:/usr/local/opt/node@6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Frameworks/Python.framework/Versions/2.7/bin` Does anyone have further ideas I can attempt to resolve this issue?
2018/02/05
[ "https://Stackoverflow.com/questions/48617779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4918575/" ]
When you are in the virtual env (`source venv/bin/activate`), just run in the terminal: ``` sudo apt-get install python3-mysqldb sudo apt-get install libmysqlclient-dev pip install mysqlclient ``` You don't have to import anything in your py files. The first one is just in case, but the other two work perfectly by themselves. If you don't install `libmysqlclient-dev`, you won't be able to run any other mysql installation command while in the virtual environment.
Turns out I had the wrong python being pointed to in my virtualenv. It comes preinstalled with its own default python version, so I created a new virtualenv and used the `-p` flag to set the python path to my own local python path.
48,617,779
I am receiving the error: `ImportError: No module named MySQLdb` whenever I try to run my local dev server and it is driving me crazy. I have tried everything I could find online: 1. `brew install mysql` 2. `pip install mysqldb` 3. `pip install mysql` 4. `pip install mysql-python` 5. `pip install MySQL-python` 6. `easy_install mysql-python` 7. `easy_install MySQL-python` 8. `pip install mysqlclient` I am running out of options and can't figure out why I continue to receive this error. I am attempting to run my local dev server using Google App Engine on a macOS Sierra system and I am using python version 2.7. I am also running `source env/bin/activate` in the directory where my project files are and am installing all dependencies there as well. My path looks like this: `/usr/local/bin/python:/usr/local/mysql/bin:/usr/local/opt/node@6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Frameworks/Python.framework/Versions/2.7/bin` Does anyone have further ideas I can attempt to resolve this issue?
2018/02/05
[ "https://Stackoverflow.com/questions/48617779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4918575/" ]
When you are in the virtual env (`source venv/bin/activate`), just run in the terminal: ``` sudo apt-get install python3-mysqldb sudo apt-get install libmysqlclient-dev pip install mysqlclient ``` You don't have to import anything in your py files. The first one is just in case, but the other two work perfectly by themselves. If you don't install `libmysqlclient-dev`, you won't be able to run any other mysql installation command while in the virtual environment.
In my project, with virtualenv, I just did > > pip install mysqlclient > > > and like magic everything is ok
21,807,660
I am trying to run the first example [here](http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html), but I am getting this error. I am using Ubuntu 13.10. ``` Failed to load OpenCL runtime OpenCV Error: Unknown error code -220 (OpenCL function is not available: [clGetPlatformIDs]) in opencl_check_fn, file /home/cristi/opencv/modules/core/src/opencl/runtime/opencl_core.cpp, line 204 OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/cristi/opencv/modules/imgproc/src/color.cpp, line 3159 Traceback (most recent call last): File "/home/cristi/opencv1/src/video.py", line 11, in <module> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) cv2.error: /home/cristi/opencv/modules/imgproc/src/color.cpp:3159: error: (-215) scn == 3 || scn == 4 in function cvtColor Process finished with exit code 1 ``` Also, this is the line that is causing the trouble (line 11 in my code): ``` gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ``` What should I do?
2014/02/16
[ "https://Stackoverflow.com/questions/21807660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1852142/" ]
As for the OpenCL failure, try installing the required packages: `sudo apt-get install ocl-icd-opencl-dev` Worked for me. My guess is that OCL is a part of the `opencv_core` module, and if it failed to initialise, then many other components might behave strangely.
> > Failed to load OpenCL runtime > > > Most probably there is some problem with your installation. If you are not working with a GPU, then I recommend turning off all CUDA/OpenCL modules in OpenCV during compilation. > > error: (-215) scn == 3 || scn == 4 in function cvtColor > > > This error says your input image should have 3 channels (a BGR/color image) or 4 channels (an RGBA image). So please check the number of channels in `frame` by executing `print frame.shape`. Since you are working with video, there is a high chance that your camera is not opened for capture, so the frame is not captured. In that case, `print frame.shape` will raise an error telling you that `frame` is `NoneType` data. I recommend running the same code with an image instead of video. Even then, if the OpenCL error shows up, it is most likely a problem with your installation. If it works fine, the problem may be with `VideoCapture`. You can check it as mentioned in the same tutorial: > > Sometimes, **cap** may not have initialized the capture. In that case, > this code shows an error. You can check whether it is initialized or not > by the method **cap.isOpened()**. If it is True, OK. > > >
21,807,660
I am trying to run the first example [here](http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html), but I am getting this error. I am using Ubuntu 13.10. ``` Failed to load OpenCL runtime OpenCV Error: Unknown error code -220 (OpenCL function is not available: [clGetPlatformIDs]) in opencl_check_fn, file /home/cristi/opencv/modules/core/src/opencl/runtime/opencl_core.cpp, line 204 OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/cristi/opencv/modules/imgproc/src/color.cpp, line 3159 Traceback (most recent call last): File "/home/cristi/opencv1/src/video.py", line 11, in <module> gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) cv2.error: /home/cristi/opencv/modules/imgproc/src/color.cpp:3159: error: (-215) scn == 3 || scn == 4 in function cvtColor Process finished with exit code 1 ``` Also, this is the line that is causing the trouble (line 11 in my code): ``` gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) ``` What should I do?
2014/02/16
[ "https://Stackoverflow.com/questions/21807660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1852142/" ]
As for the OpenCL failure, try installing the required packages: `sudo apt-get install ocl-icd-opencl-dev` Worked for me. My guess is that OCL is a part of the `opencv_core` module, and if it failed to initialise, then many other components might behave strangely.
You might want to install/update the driver: <http://streamcomputing.eu/blog/2011-12-29/opencl-hardware-support/> Updating the driver helped to solve my problem with OpenCL.
54,140,922
I want to create a multiprocessing echo server. I am currently using telnet as my client to send messages to my echo server. Currently I can handle one telnet request and it echoes the response. I initially thought I should initialize the pid whenever I create a socket. Is that correct? How do I allow several clients to connect to my server using multiprocessing? ``` #!/usr/bin/env python import socket import os from multiprocessing import Process def create_socket(): # Create socket sockfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Port for socket and Host PORT = 8002 HOST = 'localhost' # bind the socket to host and port sockfd.bind((HOST, PORT)) # become a server socket sockfd.listen(5) start_socket(sockfd) def start_socket(sockfd): while True: # Establish and accept connections with client (clientsocket, address) = sockfd.accept() # Get the process id. process_id = os.getpid() print("Process id:", process_id) print("Got connection from", address) # Receive message from the client message = clientsocket.recv(2024) print("Server received: " + message.decode('utf-8')) reply = ("Server output: " + message.decode('utf-8')) if not message: print("Client has been disconnected.....") break # Display messages. clientsocket.sendall(str.encode(reply)) # Close the connection with the client clientsocket.close() if __name__ == '__main__': process = Process(target = create_socket) process.start() ```
2019/01/11
[ "https://Stackoverflow.com/questions/54140922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9005618/" ]
It's probably a good idea to understand which system calls are blocking and which are not. `listen`, for example, is not blocking, while `accept` is a blocking one. So basically - you created one process through `Process(..)` that blocks at the `accept` and, when a connection is made, handles that connection. Your code should have a structure like the following (pseudocode) ``` def handle_connection(accepted_socket): # do whatever you want with the socket pass def server(): # Create socket and listen to it. sock = socket.socket(....) sock.bind((HOST, PORT)) sock.listen(5) while True: new_client = sock.accept() # blocks here. # unblocked client_process = Process(target=handle_connection, args=(new_client,)) client_process.start() ``` I must also mention, while this is a good way to just understand how things can be done, it is not a good idea to start a new process for every connection.
The initial part of setting up the server, binding, listening etc (your `create_socket`) should be in the master process. Once you `accept` and get a socket, you should spawn off a separate process to take care of that connection. In other words, your `start_socket` should be spawned off in a separate process and should loop forever.
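A minimal runnable sketch of the structure described above (assumptions: a Unix-like system so the "fork" start method is available, and a bounded accept loop so the example terminates; a real server would loop forever):

```python
import multiprocessing
import socket

# "fork" context: the child inherits the accepted socket directly (Unix-only).
ctx = multiprocessing.get_context("fork")

def handle_connection(conn):
    # Runs in the child process: echo one message back, then close.
    data = conn.recv(2048)
    if data:
        conn.sendall(b"Server output: " + data)
    conn.close()

def serve(sock, max_clients):
    # Parent loop: accept() blocks; each accepted socket gets its own process.
    for _ in range(max_clients):
        conn, _addr = sock.accept()
        worker = ctx.Process(target=handle_connection, args=(conn,))
        worker.start()
        conn.close()   # parent's handle; the child keeps its inherited copy
        worker.join()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(5)
```

A client can then connect to `server.getsockname()[1]` and each connection is echoed from its own child process.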
23,922,691
I am trying to add argv[0] as a variable to the SQL query below and am running into the error below; what is the syntax to fix this? ``` #!/usr/bin/python import pypyodbc as pyodbc from sys import argv component_id=argv[0] server_name='odsdb.company.com' database_name='ODS' cnx = pyodbc.connect("DRIVER={SQL Server};SERVER="+server_name+";DATABASE="+database_name) db_cursor=cnx.cursor() SQL = 'SELECT Top 1 cr.ReleaseLabel ' + \ 'FROM [ODS].[v000001].[ComponentRevisions] cr ' + \ 'WHERE cr.ComponentId=' + component_id + \ 'ORDER BY cr.CreatedOn DESC' resp_rows_obj=db_cursor.execute(SQL) print '('+', '.join([column_heading_tuple[0] for column_heading_tuple in resp_rows_obj.description])+')' for row in resp_rows_obj: print row ``` Error: ``` pypyodbc.ProgrammingError: (u'42000', u"[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near the keyword 'BY'.") ```
2014/05/28
[ "https://Stackoverflow.com/questions/23922691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654069/" ]
Don't use string interpolation. Use SQL parameters; these are placeholders in the query where your database will insert values: ``` SQL = '''\ SELECT Top 1 cr.ReleaseLabel FROM [ODS].[v000001].[ComponentRevisions] cr WHERE cr.ComponentId = ? ORDER BY cr.CreatedOn DESC ''' resp_rows_obj = db_cursor.execute(SQL, (component_id,)) ``` Values for the `?` placeholders are sourced from the second argument to the `cursor.execute()` function, a sequence of values. Here you only have one value, so I used a one-element tuple. Note that you probably want `argv[1]`, **not** `argv[0]`; the latter is the script name, not the first argument.
To retrieve the first command-line argument, use `component_id = argv[1]` instead of index 0, which is the script name. Better yet, look at [argparse](https://docs.python.org/2/howto/argparse.html).
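A minimal `argparse` sketch for the script in the question (the argument name and description are illustrative, not from the original code):

```python
import argparse

# One required positional argument replaces the raw sys.argv access.
parser = argparse.ArgumentParser(
    description="Fetch the latest release label for a component")
parser.add_argument("component_id", help="ComponentId to look up")

# In the real script this would be parser.parse_args() with no argument;
# an explicit list is passed here only to demonstrate the result.
args = parser.parse_args(["12345"])
component_id = args.component_id
```

Running the script with no argument would then print a usage message instead of raising an `IndexError`.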
23,922,691
I am trying to add argv[0] as a variable to the SQL query below and am running into the error below; what is the syntax to fix this? ``` #!/usr/bin/python import pypyodbc as pyodbc from sys import argv component_id=argv[0] server_name='odsdb.company.com' database_name='ODS' cnx = pyodbc.connect("DRIVER={SQL Server};SERVER="+server_name+";DATABASE="+database_name) db_cursor=cnx.cursor() SQL = 'SELECT Top 1 cr.ReleaseLabel ' + \ 'FROM [ODS].[v000001].[ComponentRevisions] cr ' + \ 'WHERE cr.ComponentId=' + component_id + \ 'ORDER BY cr.CreatedOn DESC' resp_rows_obj=db_cursor.execute(SQL) print '('+', '.join([column_heading_tuple[0] for column_heading_tuple in resp_rows_obj.description])+')' for row in resp_rows_obj: print row ``` Error: ``` pypyodbc.ProgrammingError: (u'42000', u"[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near the keyword 'BY'.") ```
2014/05/28
[ "https://Stackoverflow.com/questions/23922691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3654069/" ]
Don't use string interpolation. Use SQL parameters; these are placeholders in the query where your database will insert values: ``` SQL = '''\ SELECT Top 1 cr.ReleaseLabel FROM [ODS].[v000001].[ComponentRevisions] cr WHERE cr.ComponentId = ? ORDER BY cr.CreatedOn DESC ''' resp_rows_obj = db_cursor.execute(SQL, (component_id,)) ``` Values for the `?` placeholders are sourced from the second argument to the `cursor.execute()` function, a sequence of values. Here you only have one value, so I used a one-element tuple. Note that you probably want `argv[1]`, **not** `argv[0]`; the latter is the script name, not the first argument.
We had a hyphen in the database name that was being used in a T-SQL query being called from Python code. So we just added square brackets because SQL Server cannot interpolate the hyphen without them. Before: ``` SELECT * FROM DBMS-NAME.dbo.TABLE_NAME ``` After: ``` SELECT * FROM [DBMS-NAME].dbo.TABLE_NAME ```
24,093,888
I am looking to do a large number of reverse DNS lookups in a small amount of time. I currently have implemented an asynchronous lookup using socket.gethostbyaddr and concurrent.futures thread pool, but am still not seeing the desired performance. For example, the script took about 22 minutes to complete on 2500 IP addresses. I was wondering if there is any quicker way to do this without resorting to something like adns-python. I found this <http://blog.schmichael.com/2007/09/18/a-lesson-on-python-dns-and-threads/> which provided some additional background. Code Snippet: ``` ips = [...] with concurrent.futures.ThreadPoolExecutor(max_workers = 16) as pool: list(pool.map(get_hostname_from_ip, ips)) def get_hostname_from_ip(ip): try: return socket.gethostbyaddr(ip)[0] except: return "" ``` I think part of the issue is that many of the IP addresses are not resolving and timing out. I tried: ``` socket.setdefaulttimeout(2.0) ``` but it seems to have no effect.
2014/06/07
[ "https://Stackoverflow.com/questions/24093888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2521829/" ]
Because of the [Global Interpreter Lock](https://docs.python.org/dev/glossary.html#term-global-interpreter-lock), you should use `ProcessPoolExecutor` instead. <https://docs.python.org/dev/library/concurrent.futures.html#processpoolexecutor>
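A sketch of the same lookup with `ProcessPoolExecutor` (only the executor class changes from the question's code; the explicit "fork" context is an assumption to keep this runnable at module level on Unix, and only the loopback address is queried so no external DNS round-trip is needed):

```python
import concurrent.futures
import multiprocessing
import socket

def get_hostname_from_ip(ip):
    # Same worker function as in the question; any lookup failure yields "".
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ""

ips = ["127.0.0.1"]  # loopback resolves locally via /etc/hosts

# Worker processes sidestep the GIL, unlike the thread pool in the question.
with concurrent.futures.ProcessPoolExecutor(
        max_workers=4,
        mp_context=multiprocessing.get_context("fork")) as pool:
    results = list(pool.map(get_hostname_from_ip, ips))
```

In a real script, `ips` would hold the full address list and `max_workers` would be tuned to the machine.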
Please use [asynchronous DNS](http://code.google.com/p/adns-python/); everything else will give you very poor performance.
24,093,888
I am looking to do a large number of reverse DNS lookups in a small amount of time. I currently have implemented an asynchronous lookup using socket.gethostbyaddr and concurrent.futures thread pool, but am still not seeing the desired performance. For example, the script took about 22 minutes to complete on 2500 IP addresses. I was wondering if there is any quicker way to do this without resorting to something like adns-python. I found this <http://blog.schmichael.com/2007/09/18/a-lesson-on-python-dns-and-threads/> which provided some additional background. Code Snippet: ``` ips = [...] with concurrent.futures.ThreadPoolExecutor(max_workers = 16) as pool: list(pool.map(get_hostname_from_ip, ips)) def get_hostname_from_ip(ip): try: return socket.gethostbyaddr(ip)[0] except: return "" ``` I think part of the issue is that many of the IP addresses are not resolving and timing out. I tried: ``` socket.setdefaulttimeout(2.0) ``` but it seems to have no effect.
2014/06/07
[ "https://Stackoverflow.com/questions/24093888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2521829/" ]
I discovered my main issue was IPs failing to resolve and thus sockets not obeying their set timeouts and failing after 30 seconds. See [Python 2.6 urlib2 timeout issue](https://stackoverflow.com/questions/14127115/python-2-6-urlib2-timeout-issue). *adns-python* was a no-go because of its lack of support for IPv6 (without patches). After searching around I found this: [Reverse DNS Lookups with dnspython](http://spareclockcycles.org/2010/04/13/reverse-dns-lookups-with-dnspython/) and implemented a similar version in my code (his code also uses an optional thread pool and implements a timeout). In the end I used dnspython with a concurrent.futures thread pool for asynchronous reverse DNS lookups (see [Python: Reverse DNS Lookup in a shared hosting](https://stackoverflow.com/questions/19867548/python-reverse-dns-lookup-in-a-shared-hosting) and [Dnspython: Setting query timeout/lifetime](https://stackoverflow.com/questions/8989457/dnspython-setting-query-timeout-lifetime)). With a timeout of 1 second this cut runtime from about 22 minutes to about 16 seconds on 2500 IP addresses. The large difference can probably be attributed to the [Global Interpreter Lock](https://docs.python.org/dev/glossary.html#term-global-interpreter-lock) on sockets and the 30 second timeouts. Code Snippet: ``` import concurrent.futures from dns import resolver, reversename dns_resolver = resolver.Resolver() dns_resolver.timeout = 1 dns_resolver.lifetime = 1 ips = [...] results = [] with concurrent.futures.ThreadPoolExecutor(max_workers = 16) as pool: results = list(pool.map(get_hostname_from_ip, ips)) def get_hostname_from_ip(ip): try: reverse_name = reversename.from_address(ip) return dns_resolver.query(reverse_name, "PTR")[0].to_text()[:-1] except: return "" ```
Please use [asynchronous DNS](http://code.google.com/p/adns-python/); everything else will give you very poor performance.
24,093,888
I am looking to do a large number of reverse DNS lookups in a small amount of time. I currently have implemented an asynchronous lookup using socket.gethostbyaddr and concurrent.futures thread pool, but am still not seeing the desired performance. For example, the script took about 22 minutes to complete on 2500 IP addresses. I was wondering if there is any quicker way to do this without resorting to something like adns-python. I found this <http://blog.schmichael.com/2007/09/18/a-lesson-on-python-dns-and-threads/> which provided some additional background. Code Snippet: ``` ips = [...] with concurrent.futures.ThreadPoolExecutor(max_workers = 16) as pool: list(pool.map(get_hostname_from_ip, ips)) def get_hostname_from_ip(ip): try: return socket.gethostbyaddr(ip)[0] except: return "" ``` I think part of the issue is that many of the IP addresses are not resolving and timing out. I tried: ``` socket.setdefaulttimeout(2.0) ``` but it seems to have no effect.
2014/06/07
[ "https://Stackoverflow.com/questions/24093888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2521829/" ]
I discovered my main issue was IPs failing to resolve and thus sockets not obeying their set timeouts and failing after 30 seconds. See [Python 2.6 urlib2 timeout issue](https://stackoverflow.com/questions/14127115/python-2-6-urlib2-timeout-issue). *adns-python* was a no-go because of its lack of support for IPv6 (without patches). After searching around I found this: [Reverse DNS Lookups with dnspython](http://spareclockcycles.org/2010/04/13/reverse-dns-lookups-with-dnspython/) and implemented a similar version in my code (his code also uses an optional thread pool and implements a timeout). In the end I used dnspython with a concurrent.futures thread pool for asynchronous reverse DNS lookups (see [Python: Reverse DNS Lookup in a shared hosting](https://stackoverflow.com/questions/19867548/python-reverse-dns-lookup-in-a-shared-hosting) and [Dnspython: Setting query timeout/lifetime](https://stackoverflow.com/questions/8989457/dnspython-setting-query-timeout-lifetime)). With a timeout of 1 second this cut runtime from about 22 minutes to about 16 seconds on 2500 IP addresses. The large difference can probably be attributed to the [Global Interpreter Lock](https://docs.python.org/dev/glossary.html#term-global-interpreter-lock) on sockets and the 30 second timeouts. Code Snippet: ``` import concurrent.futures from dns import resolver, reversename dns_resolver = resolver.Resolver() dns_resolver.timeout = 1 dns_resolver.lifetime = 1 ips = [...] results = [] with concurrent.futures.ThreadPoolExecutor(max_workers = 16) as pool: results = list(pool.map(get_hostname_from_ip, ips)) def get_hostname_from_ip(ip): try: reverse_name = reversename.from_address(ip) return dns_resolver.query(reverse_name, "PTR")[0].to_text()[:-1] except: return "" ```
Because of the [Global Interpreter Lock](https://docs.python.org/dev/glossary.html#term-global-interpreter-lock), you should use `ProcessPoolExecutor` instead. <https://docs.python.org/dev/library/concurrent.futures.html#processpoolexecutor>
61,380,858
I want to create a pandas data frame from multiple lists with different lengths. Below is my python code. ``` import pandas as pd import random A=[1,2] B=[1,2,3] C=[1,2,3,4,5,6] lenA = len(A) lenB = len(B) lenC = len(C) df = pd.DataFrame(columns=['A', 'B','C']) for i,v1 in enumerate(A): for j,v2 in enumerate(B): for k, v3 in enumerate(C): if(i<random.randint(0, lenA)): if(j<random.randint(0, lenB)): if (k < random.randint(0, lenC)): df = df.append({'A': v1, 'B': v2,'C':v3}, ignore_index=True) print(df) ``` My lists are as below: ``` A=[1,2] B=[1,2,3] C=[1,2,3,4,5,6,7] ``` In each run I get a different output, which is correct. But it does not cover all list items in each run. In one run I got the output below: ``` A B C 0 1 1 3 1 1 2 1 2 1 2 2 3 2 2 5 ``` In the above output, list 'A' has all its items (1,2). But list 'B' has only items (1,2); item 3 is missing. Also, list 'C' has only items (1,2,3,5); items (4,6,7) are missing. My expectation is: each item of each list should be in the data frame at least once, and list 'C' items should be in the data frame only once. My expected sample output is as below: ``` A B C 0 1 1 3 1 1 2 1 2 1 2 2 3 2 2 5 4 2 3 4 5 1 1 7 6 2 3 6 ``` Guide me to get my expected output. Thanks in advance.
2020/04/23
[ "https://Stackoverflow.com/questions/61380858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1999109/" ]
You can pad each list with random values drawn from itself up to the maximum length, and then use [`DataFrame.sample`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html) to shuffle the rows: ``` A=[1,2] B=[1,2,3] C=[1,2,3,4,5,6] L = [A,B,C] m = max(len(x) for x in L) print (m) 6 a = [np.hstack((np.random.choice(x, m - len(x)), x)) for x in L] df = pd.DataFrame(a, index=['A', 'B', 'C']).T.sample(frac=1) print (df) A B C 2 2 2 3 0 2 1 1 3 1 1 4 4 1 2 5 5 2 3 6 1 2 2 2 ```
You can use `transpose` to achieve the same result. EDIT: used `random` to randomize the output as requested. ``` import pandas as pd from random import shuffle, choice A=[1,2] B=[1,2,3] C=[1,2,3,4,5,6] shuffle(A) shuffle(B) shuffle(C) data = [A,B,C] df = pd.DataFrame(data) df = df.transpose() df.columns = ['A', 'B', 'C'] df.loc[:,'A'].fillna(choice(A), inplace=True) df.loc[:,'B'].fillna(choice(B), inplace=True) ``` This should give the output below ``` A B C 0 1.0 1.0 1.0 1 2.0 2.0 2.0 2 NaN 3.0 3.0 3 NaN 4.0 4.0 4 NaN NaN 5.0 5 NaN NaN 6.0 ```
47,726,664
I am trying to send messages from one python script to another using MQTT. One script is a publisher. The second script is a subscriber. I send messages every 0.1 seconds. Publisher: ``` client = mqtt.Client('DataReaderPub') client.connect('127.0.0.1', 1883, 60) print("MQTT parameters set.") # Read from all files count = 0 for i in range(1,51): payload = "Hello world" + str(count) client.publish(testtopic, payload, int(publisherqos)) client.loop() count = count+1 print(count, ' msg sent: ', payload) sleep(0.1) ``` Subscriber: ``` subclient = mqtt.Client("DynamicDetectorSub") subclient.on_message = on_message subclient.connect('127.0.0.1') subclient.subscribe(testtopic, int(subscriberqos)) subclient.loop_forever() ``` mosquitto broker version - 3.1 mosquitto.conf has max inflight messages set to 0, persistence true. publisher QOS = 2 subscriber QOS = 2 topic = 'test' in both scripts When I run the subscriber and publisher in the same script, the messages are sent and received as expected. But when they are in separate scripts, I do not receive all the messages and sometimes no messages. I run the subscriber first and then the publisher. I have tried the subscriber with loop_start() and loop_stop(), waiting for a few minutes. I am unable to debug this problem. Any pointers would be great! EDIT: 1. I included client.loop() after publish. -> Same output as before 2. When I printed out statements in 'on_connect' and 'on_disconnect', I noticed that the client MQTT connection gets established and disconnects almost immediately. This happens every second. I even got this message once - [WinError 10053] An established connection was aborted by the software in your host machine Keep Alive = 60 Is there any other parameter I should look at?
2017/12/09
[ "https://Stackoverflow.com/questions/47726664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2892909/" ]
You need to call the network loop function in the publisher as well, so the client actually gets some time to do the IO (and the dual handshake for QoS 2). Add `client.loop()` after the call to `client.publish()` in the client: ``` import paho.mqtt.client as mqtt import time client = mqtt.Client('DataReaderPub') client.connect('127.0.0.1', 1883, 60) print("MQTT parameters set.") # Read from all files count = 0 for i in range(1,51): payload = "Hello world" + str(count) client.publish("test", payload, 2) client.loop() count = count+1 print(count, ' msg sent: ', payload) time.sleep(0.1) ``` Subscriber code: ``` import paho.mqtt.client as mqtt def on_message(client, userdata, msg): print(msg.topic + " " + str(msg.payload)) subclient = mqtt.Client("DynamicDetectorSub") subclient.on_message = on_message subclient.connect('127.0.0.1') subclient.subscribe("test", 2) subclient.loop_forever() ```
When I ran your code, the subscriber was often missing the last packet. I was not otherwise able to reproduce the problems you described. If I rewrite the publisher like this instead... ``` from time import sleep import paho.mqtt.client as mqtt client = mqtt.Client('DataReaderPub') client.connect('127.0.0.1', 1883, 60) print("MQTT parameters set.") client.loop_start() # Read from all files count = 0 for i in range(1,51): payload = "Hello world" + str(count) client.publish('test', payload, 2) count = count+1 print(count, ' msg sent: ', payload) sleep(0.1) client.loop_stop() client.disconnect() ``` ...then I no longer see the dropped packet. I'm using the `loop_start`/`loop_stop` methods here, which run the MQTT loop asynchronously. I'm not sure exactly what was causing your dropped packet, but I suspect that the final message was still in the publisher's send queue when the code exited.
47,726,664
I am trying to send messages from one python script to another using MQTT. One script is a publisher. The second script is a subscriber. I send messages every 0.1 second. Publisher: ``` client = mqtt.Client('DataReaderPub') client.connect('127.0.0.1', 1883, 60) print("MQTT parameters set.") # Read from all files count = 0 for i in range(1,51): payload = "Hello world" + str(count) client.publish(testtopic, payload, int(publisherqos)) client.loop() count = count+1 print(count, ' msg sent: ', payload) sleep(0.1) ``` Subscriber: ``` subclient = mqtt.Client("DynamicDetectorSub") subclient.on_message = on_message subclient.connect('127.0.0.1') subclient.subscribe(testtopic, int(subscriberqos)) subclient.loop_forever() ``` mosquitto broker version - 3.1 mosquitto.conf has max inflight messages set to 0, persistence true. publisher QOS = 2 subscriber QOS = 2 topic = 'test' in both scripts When I run subscriber and publisher in the same script, the messages are sent and received as expected. But when they are in separate scripts, I do not receive all the messages and sometimes no messages. I run subscriber first and then publisher. I have tried subscriber with loop.start() and loop.stop() with waiting for few minutes. I am unable to debug this problem. Any pointers would be great! EDIT: 1. I included client.loop() after publish. -> Same output as before 2. When I printed out statements in 'on\_connect' and 'on\_disconnect', I noticed that client mqtt connection gets established and disconnects almost immediately. This happens every second. I even got this message once - [WinError 10053] An established connection was aborted by the software in your host machine Keep Alive = 60 Is there any other parameter I should look at?
2017/12/09
[ "https://Stackoverflow.com/questions/47726664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2892909/" ]
You need to call the network loop function in the publisher as well so the client actually gets some time to do the IO (And the dual handshake for the QOS2). Add `client.loop()` after the call to `client.publish()` in the client: ``` import paho.mqtt.client as mqtt import time client = mqtt.Client('DataReaderPub') client.connect('127.0.0.1', 1883, 60) print("MQTT parameters set.") # Read from all files count = 0 for i in range(1,51): payload = "Hello world" + str(count) client.publish("test", payload, 2) client.loop() count = count+1 print(count, ' msg sent: ', payload) time.sleep(0.1) ``` Subscriber code: ``` import paho.mqtt.client as mqtt def on_message(client, userdata, msg): print(msg.topic + " " + str(msg.payload)) subclient = mqtt.Client("DynamicDetectorSub") subclient.on_message = on_message subclient.connect('127.0.0.1') subclient.subscribe("test", 2) subclient.loop_forever() ```
It turned out to be a silly bug. As hardillb suggested, I looked at the broker logs, which showed that the subscriber client was already connected. I am using PyCharm after a really long time, and I had accidentally run the publisher and subscriber so many times that multiple instances were running in parallel in the output console. No wonder they got disconnected, since the client IDs were the same. Sorry for the trouble. BTW, client.loop() after publish is not needed. Thanks hardillb.
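Since the broker drops the older connection whenever a second client connects with the same client ID, one simple safeguard is to generate collision-free IDs. A minimal sketch using only the standard library (the prefixes here are illustrative):

```python
import uuid

def make_client_id(prefix):
    # Append a random suffix so two runs (or two scripts) never
    # share a client ID -- the broker disconnects duplicate IDs.
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

pub_id = make_client_id("DataReaderPub")
sub_id = make_client_id("DynamicDetectorSub")
print(pub_id, sub_id)
```

With paho-mqtt this would be passed straight to the constructor, e.g. `mqtt.Client(make_client_id('DataReaderPub'))`.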
47,486,930
The following script generates a 2D list in Python: ``` matrix = [[0 for row in range(5)] for col in range(5)] i = 2 matrix[i][i] = 1 matrix[i+1][i] = 1 matrix[i][i+1] = 1 matrix[i+1][i+1] = 1 for row in matrix: for item in row: print(item,end=" ") print() print() ``` The generated 2D list looks like this: ``` 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 0 0 0 ``` How can I find whether the list contains a square of the same number (the number must be 1), as shown above? The square of the same number must be 2x2.
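One straightforward approach (a sketch, not taken from the original post) is to slide a 2x2 window over every position where it fits and test all four cells:

```python
def has_2x2_square(matrix, value=1):
    # Check every top-left corner where a 2x2 window fits.
    for r in range(len(matrix) - 1):
        for c in range(len(matrix[r]) - 1):
            if (matrix[r][c] == value and matrix[r][c + 1] == value
                    and matrix[r + 1][c] == value and matrix[r + 1][c + 1] == value):
                return True
    return False

matrix = [[0] * 5 for _ in range(5)]
matrix[2][2] = matrix[2][3] = matrix[3][2] = matrix[3][3] = 1
print(has_2x2_square(matrix))  # → True
```

The same window idea generalizes to k x k squares by checking all k*k cells under each corner.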
2017/11/25
[ "https://Stackoverflow.com/questions/47486930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3913519/" ]
In order for this combination to work you need to make sure your `virtual-repeat-container` is kept in sync. If you write a simple 'refresh' function that gets called on open select: ``` function () { return $timeout(function () { $scope.$broadcast("$md-resize"); }, 100); }; ``` it should be enough. Working example: ```js angular.module("app", ["ngMaterial", "ngSanitize", "ngAnimate"]) .controller("MainController", function($scope, $timeout) { // refresh virtual container $scope.refresh = function() { return $timeout(function() { $scope.$broadcast("$md-resize"); }, 100); }; $scope.infiniteItems = { _pageSize: 10000, toLoad_: 0, items: [], getItemAtIndex(index) { if (index > this.items.length) { this.fetchMoreItems_(index); return null; } return this.items[index]; }, getLength() { return this.items.length + 5; }, fetchMoreItems_(index) { if (this.toLoad_ < index) { this.toLoad_ += this._pageSize; // simulate $http request $timeout(angular.noop, 300) .then(() => { for (let i = 0; i < this._pageSize; i++) { this.items.push(i) } }); } } }; }); ``` ```css #vertical-container { height: 256px; } ``` ```html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>select with md-virtual-repeat</title> <link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/angular-material/1.0.9/angular-material.min.css"> <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.7/angular.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.7/angular-aria.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.7/angular-animate.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.7/angular-sanitize.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-material/1.0.9/angular-material.min.js"></script> </head> <body> <div class="main" ng-app="app" ng-controller="MainController" layout="column" layout-align="center center" layout-fill> 
<md-input-container> <label>Select an option</label> <md-select ng-model="haha" md-on-open="refresh()"> <md-virtual-repeat-container id="vertical-container"> <md-option md-virtual-repeat="item in infiniteItems" md-on-demand="" ng-value="item" ng-selected="haha==item">{{item}}</md-option> </md-virtual-repeat-container> </md-select> </md-input-container> </div> </script> </body> </html> ``` You can also check out [this](https://stackoverflow.com/questions/34912151/is-there-a-way-to-refresh-virtual-repeat-container) similar answer.
According to this post: <https://github.com/angular/material/issues/10868>, different AngularJS versions behave differently. The function returned to $timeout should also include a `window.dispatchEvent(new Event('resize'));` statement. The final $timeout call looks like this: ``` return $timeout(function() { $scope.$broadcast("$md-resize"); window.dispatchEvent(new Event('resize')); },100); ```
55,399,396
My searches led me to Pywin32, which should be able to mute/unmute the sound and detect its state (on Windows 10, using Python 3+). I found a way using an AutoHotkey script, but I'm looking for a Pythonic way. More specifically, I'm not interested in playing with the Windows GUI. *Pywin32 works using a Windows DLL*. So far, I am able to do it by calling an AHK script: In the Python script: ``` import subprocess subprocess.call([ahkexe, ahkscript]) ``` In the AutoHotkey script: ``` SoundGet, sound_mute, Master, mute if sound_mute = On ; if the sound is muted Send {Volume_Mute} ; press the "mute button" to unmute SoundSet 30 ; set the sound level at 30 ```
2019/03/28
[ "https://Stackoverflow.com/questions/55399396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7227370/" ]
You can use the Windows Sound Manager by paradoxis (<https://github.com/Paradoxis/Windows-Sound-Manager>). ``` from sound import Sound Sound.mute() ``` Every call to `Sound.mute()` will toggle mute on or off. Have a look at the `main.py` to see how to use the setter and getter methods.
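Because `Sound.mute()` only toggles, a thin wrapper can keep explicit muted/unmuted semantics. This is a hypothetical sketch: it assumes nothing about the library beyond a toggle callable, and the initial-state assumption is noted in the comments.

```python
class MuteState:
    """Track explicit mute state on top of a toggle-only call."""

    def __init__(self, toggle, starts_muted=False):
        self.toggle = toggle        # e.g. Sound.mute from the library
        self.muted = starts_muted   # assumption: the initial state is known

    def set_muted(self, muted):
        # Only fire the toggle when the desired state differs.
        if muted != self.muted:
            self.toggle()
            self.muted = muted

# Demo with a fake toggle so the sketch runs anywhere:
calls = []
state = MuteState(lambda: calls.append("toggle"))
state.set_muted(True)
state.set_muted(True)   # already muted: no extra toggle
print(len(calls))  # → 1
```

In a real script, `lambda: calls.append("toggle")` would be replaced with the library's toggle function.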
If you're also building a GUI, wxPython (and, I would believe, other GUI frameworks) has access to the Windows audio mute "button".
10,868,410
I'm a little new to web crawlers and such, though I've been programming for a year already. So please bear with me as I try to explain my problem here. I'm parsing info from Yahoo! News, and I've managed to get most of what I want, but there's a little portion that has stumped me. For example: <http://news.yahoo.com/record-nm-blaze-test-forest-management-225730172.html> I want to get the numbers beside the thumbs up and thumbs down icons in the comments. When I use "Inspect Element" in my Chrome browser, I can clearly see the things that I have to look for - namely, an em tag under the div class 'ugccmt-rate'. However, I'm not able to find this in my python program. In trying to track down the root of the problem, I clicked to view source of the page, and it seems that this tag is not there. Do you guys know how I should approach this problem? Does this have something to do with the javascript on the page that displays the info only after it runs? I'd appreciate some pointers in the right direction. Thanks.
2012/06/03
[ "https://Stackoverflow.com/questions/10868410", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1433227/" ]
The page is being generated via JavaScript. Check if there is a mobile version of the website first. If not, check for any APIs or RSS/Atom feeds. If there's *nothing* else, you'll either have to manually figure out what the JavaScript is loading and from where, or use [Selenium](http://seleniumhq.org/) to automate a browser that renders the JavaScript for you for parsing.
Using the Web Console in Firefox you can pretty easily see what requests the page is actually making as it runs its scripts, and figure out what URI returns the data you want. Then you can request that URI directly in your Python script and tease the data out of it. It is probably in a format that Python already has a library to parse, such as JSON. Yahoo! may have some stuff on their server side to try to prevent you from accessing these data files in a script, such as checking the browser (user-agent header), cookies, or referrer. These can all be faked with enough perseverance, but you should take their existence as a sign that you should tread lightly. (They may also limit the number of requests you can make in a given time period, which is impossible to get around.)
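Once the data URI is identified, the payload is frequently JSON, which the standard library can parse directly. A hedged sketch follows; the field names are invented for illustration and will not match Yahoo!'s real payload:

```python
import json

# A stand-in for the body returned by the discovered data URI.
raw = '{"comments": [{"id": 1, "thumbs_up": 12, "thumbs_down": 3}]}'

data = json.loads(raw)
for comment in data["comments"]:
    # Pull out the vote counts for each comment.
    print(comment["thumbs_up"], comment["thumbs_down"])
```

In practice the `raw` string would come from fetching the endpoint (with any required headers), after which the parsing step is identical.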
46,382,384
I'm playing around with [Chalice](http://chalice.readthedocs.io/en/latest/) for the first time as I am trying to evaluate it as a possible replacement framework to migrate my existing Python Flask APIs from EC2 to Lambda. From an Amazon Linux EC2 instance, I added some dependencies to a virtualenv I'm playing with. I then created a requirements.txt: ``` botocore==1.7.11 chalice==1.0.2 click==6.6 docutils==0.14 jmespath==0.9.3 MySQL-python==1.2.5 PyMySQL==0.7.11 python-dateutil==2.6.1 six==1.10.0 SQLAlchemy==1.1.14 typing==3.5.3.0 ``` I then tried to deploy with `chalice deploy` and got: ``` Creating deployment package. Could not install dependencies: MySQL-python==1.2.5 typing==3.5.3.0 You will have to build these yourself and vendor them in the chalice vendor folder. Your deployment will continue but may not work correctly if missing dependencies are not present. For more information: http://chalice.readthedocs.io/en/latest/topics/packaging.html ........ ``` I then tried to follow the linked [docs](http://chalice.readthedocs.io/en/latest/topics/packaging.html) and for the first problematic dependency `MySQL-python==1.2.5` I did the following: ``` cd vendor/ pip download MySQL-python==1.2.5 pip wheel MySQL-python-1.2.5.zip rm MySQL-python-1.2.5.zip unzip MySQL_python-1.2.5-cp27-cp27mu-linux_x86_64.whl rm MySQL_python-1.2.5-cp27-cp27mu-linux_x86_64.whl ``` My vendor folder looks like: ``` ls vendor MySQLdb _mysql_exceptions.py MySQL_python-1.2.5.dist-info _mysql.so ``` and now when I run chalice deploy I get: ``` (test)[ec2-user@ip-172-31-26-155 test]$ chalice deploy Creating deployment package. Could not install dependencies: MySQL-python==1.2.5 You will have to build these yourself and vendor them in the chalice vendor folder. Your deployment will continue but may not work correctly if missing dependencies are not present. 
For more information: http://chalice.readthedocs.io/en/latest/topics/packaging.html /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: '_mysql.so' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: '_mysql_exceptions.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/RECORD' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/DESCRIPTION.rst' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/WHEEL' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/top_level.txt' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/METADATA' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQL_python-1.2.5.dist-info/metadata.json' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/converters.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/release.py' zipped.write(full_path, zip_path) 
/home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/__init__.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/cursors.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/times.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/connections.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/REFRESH.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/CLIENT.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/ER.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/FIELD_TYPE.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/__init__.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/CR.py' zipped.write(full_path, zip_path) /home/ec2-user/test/test/local/lib/python2.7/site-packages/chalice/deploy/packager.py:110: UserWarning: Duplicate name: 'MySQLdb/constants/FLAG.py' zipped.write(full_path, zip_path) ``` From 
the documentation, it's not clear to me what I'm doing wrong. Can someone help?
2017/09/23
[ "https://Stackoverflow.com/questions/46382384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2620746/" ]
You can remove "MySQL-python==1.2.5" from your requirements.txt (since it's already present in your vendor directory) See this [issue](https://github.com/aws/chalice/issues/626) in the Chalice repo for more info.
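To keep requirements.txt and the vendor directory from drifting apart, a tiny helper (hypothetical, not part of Chalice) can filter out requirement lines whose package is already vendored:

```python
def strip_vendored(requirements, vendored):
    # Drop any requirement whose package name (the part before
    # '==') is already shipped in the vendor/ directory.
    vendored = {name.lower() for name in vendored}
    return [line for line in requirements
            if line.split("==")[0].strip().lower() not in vendored]

reqs = ["chalice==1.0.2", "MySQL-python==1.2.5", "SQLAlchemy==1.1.14"]
print(strip_vendored(reqs, ["MySQL-python"]))
# → ['chalice==1.0.2', 'SQLAlchemy==1.1.14']
```

The filtered list can then be written back to requirements.txt before running `chalice deploy`.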
Looking at the directory listing you provided, I noticed you don't have an `__init__.py` file. This file identifies the directory as a Python package. Put one in your vendor directory.
61,874,962
I'm running into an installation error with TensorFlow on Python 3.8, and I'm wondering how to downgrade without losing my environments in PyCharm.
2020/05/18
[ "https://Stackoverflow.com/questions/61874962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13435259/" ]
1. [Download](https://www.python.org/downloads/) and install Python 3.7 2. In PyCharm, go to 'File' -> 'Settings' -> 'Project: <...>' -> 'Project Interpreter', and select 'Python 3.7' in the 'Project Interpreter' dropdown. 3. If you don't see it, click on the settings icon next to it, go to the 'System Interpreter' tab, and browse to and select 'python.exe' from the **Python37** folder
Step 1: Go to Preferences [![enter image description here](https://i.stack.imgur.com/IBt7K.png)](https://i.stack.imgur.com/IBt7K.png) Step 2: Go to Python Interpreter [![enter image description here](https://i.stack.imgur.com/S5bVB.png)](https://i.stack.imgur.com/S5bVB.png) Step 3: Click Show All [![enter image description here](https://i.stack.imgur.com/MmQWL.png)](https://i.stack.imgur.com/MmQWL.png) Step 4: Click the + button [![enter image description here](https://i.stack.imgur.com/RzSvO.png)](https://i.stack.imgur.com/RzSvO.png) Step 5: Select the version you have installed on your computer. [![enter image description here](https://i.stack.imgur.com/oNNHa.png)](https://i.stack.imgur.com/oNNHa.png) Step 6: Select the added environment and click OK [![enter image description here](https://i.stack.imgur.com/6RJGU.png)](https://i.stack.imgur.com/6RJGU.png)
61,874,962
I'm running into an installation error with TensorFlow on Python 3.8, and I'm wondering how to downgrade without losing my environments in PyCharm.
2020/05/18
[ "https://Stackoverflow.com/questions/61874962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13435259/" ]
1. [Download](https://www.python.org/downloads/) and install Python 3.7 2. In PyCharm, go to 'File' -> 'Settings' -> 'Project: <...>' -> 'Project Interpreter', and select 'Python 3.7' in the 'Project Interpreter' dropdown. 3. If you don't see it, click on the settings icon next to it, go to the 'System Interpreter' tab, and browse to and select 'python.exe' from the **Python37** folder
Step 1: Run the following command in the Terminal (`cse572` is the environment name): ``` conda create -n cse572 python=3.7 scikit-learn=0.21.2 pandas=0.25.1 ``` Step 2: `conda activate cse572` Step 3: `python --version` should show Python version 3.7