24,053,152
I want to convert a date like Jun 28 into datetime format like 2014-06-28. I tried the following code and many more variations, which give me correct output in ipython, but I am unable to save the record in the database. It throws an error: value has an invalid date format. It must be in YYYY-MM-DD format. Can anyone help me fix this issue? Following is the code snippet ``` m = "Jun" d = 28 y = datetime.datetime.now().year m = strptime(m,'%b').tm_mon if m > datetime.datetime.now().month: y=y-1 new_date = str(d)+" "+str(m)+" "+str(y) new_date = datetime.datetime.strptime(new_date, '%b %d %Y').date() ``` my models.py is as ``` class Profile(models.Model): Name = models.CharField(max_length = 256, null = True, blank = True) Location = models.CharField(max_length = 256, null = True, blank = True) Degree = models.CharField(max_length = 256, null = True, blank = True) Updated_on = models.DateField(null = True, blank = True) ``` Code that saves to the model is like ``` def save_record(self): try: record = Profile(Name= indeed.name, Location = loc, Degree = degree, Updated_on = new_date, ) record.save() print "Record added" except Exception as err: print "Record not added ",err pass ``` Thanks in advance
2014/06/05
[ "https://Stackoverflow.com/questions/24053152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3446107/" ]
Once you have a `date` object, you can use the `strftime()` function to [format](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior) it into a string. Let's say `new_date` is your date object from your question. Then you can do: ``` new_date.strftime('%Y-%m-%d') ``` Btw, you can do the same with a `datetime` object too. **EDIT:** Double check whether your `Updated_on` field uses `DateField` or `DateTimeField`. That will affect whether you use a `datetime.date()` object or `datetime.datetime()` object, respectively.
I tried on the console: ``` >>> import datetime >>> datetime.datetime.strptime("Jun-08-2013", '%b-%d-%Y').date() datetime.date(2013, 6, 8) ``` There are several errors in the code. So the solution should be: ``` m = "Jun" d = 28 if datetime.datetime.strptime(m,'%b').month > datetime.datetime.now().month: y = (datetime.datetime.now() - relativedelta(years=1)).year # from dateutil.relativedelta import relativedelta else: y = datetime.datetime.now().year new_date = str(m)+"-"+str(d)+"-"+str(y) new_date = datetime.datetime.strptime(new_date, '%b-%d-%Y').date() ``` `new_date` is a date object, so it should be saved to `models.DateField()` without any problem (including format issues).
64,956,344
I have created a Python program that works fine, but I want to make it neater. I ask the user for numbers: number1, number2, etc., up to 5. I want to do this with just a for loop, like this ``` for i in range(0,5): number(i) = int(input("what is the number")) ``` I realise that this code doesn't actually work, but that is kind of what I want to do.
2020/11/22
[ "https://Stackoverflow.com/questions/64956344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14640673/" ]
What your code shows cannot be done per se, but it can be done using lists like this: ``` number_list = [] for i in range(5): number = int(input("What is the number?")) number_list.append(number) # Now you can access the list using indexes for j in range(len(number_list)): print(number_list[j]) ```
If you also want to save the variable names, you can use Python `dictionaries` like this: ``` number = {} for i in range(0,5): number[f'number{i}'] = int(input("what is the number ")) print(number) ``` output: ``` {'number0': 5, 'number1': 4, 'number2': 5, 'number3': 6, 'number4': 3} ```
64,956,344
I have created a Python program that works fine, but I want to make it neater. I ask the user for numbers: number1, number2, etc., up to 5. I want to do this with just a for loop, like this ``` for i in range(0,5): number(i) = int(input("what is the number")) ``` I realise that this code doesn't actually work, but that is kind of what I want to do.
2020/11/22
[ "https://Stackoverflow.com/questions/64956344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14640673/" ]
Code: ``` number = [] for i in range(5): number.append(int(input("what is the number "))) print(number) ```
If you also want to save the variable names, you can use Python `dictionaries` like this: ``` number = {} for i in range(0,5): number[f'number{i}'] = int(input("what is the number ")) print(number) ``` output: ``` {'number0': 5, 'number1': 4, 'number2': 5, 'number3': 6, 'number4': 3} ```
64,956,344
I have created a Python program that works fine, but I want to make it neater. I ask the user for numbers: number1, number2, etc., up to 5. I want to do this with just a for loop, like this ``` for i in range(0,5): number(i) = int(input("what is the number")) ``` I realise that this code doesn't actually work, but that is kind of what I want to do.
2020/11/22
[ "https://Stackoverflow.com/questions/64956344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14640673/" ]
The easiest way I could think of off the top of my head would be to create a list and append the items: ``` numbers = [] for i in range(0,5): numbers += [int(input("what is the number"))] ```
If you also want to save the variable names, you can use Python `dictionaries` like this: ``` number = {} for i in range(0,5): number[f'number{i}'] = int(input("what is the number ")) print(number) ``` output: ``` {'number0': 5, 'number1': 4, 'number2': 5, 'number3': 6, 'number4': 3} ```
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
As others have said - "no". Almost all of your time is spent waiting for IO. If this is something that you need to do more than once, *and* you have a machine with tons of RAM, you could keep the file in memory. If your machine has 16GB of RAM, you'll have 8GB available at /dev/shm to play with. Another option: If you have multiple machines, this problem is trivial to parallelize. Split it among multiple machines, have each of them count its newlines, and add the results.
Note that Python I/O is implemented in C, so there is not much chance of speeding it up further.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
You can't get any faster than the maximum disk read speed. In order to reach the maximum disk speed you can use the following two tips: 1. Read the file in with a big buffer. This can either be coded "manually" or simply by using io.BufferedReader ( available in python2.6+ ). 2. Do the newline counting in another thread, in parallel.
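The "big buffer" tip above can be sketched as plain chunked binary reading; the file name and buffer size below are illustrative:

```python
def count_lines(path, bufsize=1 << 20):
    # One large binary read per loop iteration (1 MB here) instead
    # of line-by-line iteration; fewer syscalls, no line objects.
    count = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            count += chunk.count(b"\n")
    return count
```

Handing each chunk's `count()` to a second thread, as suggested, only helps if the counting itself (rather than the disk) is the bottleneck.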
**plain "no".** You've pretty much reached maximum disk speed. I mean, you could [mmap](http://docs.python.org/library/mmap.html) the file, or read it in binary chunks, and use `.count('\n')` or something. But that is unlikely to give major improvements.
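The `mmap` route mentioned above might look like the sketch below. Note that `mmap` objects expose `find()` but not `count()`, so the count is done with a small loop; the helper name is made up and the file must be non-empty:

```python
import mmap

def count_lines_mmap(path):
    # Map the file into memory and scan for newlines without
    # copying the whole file into a Python bytes object.
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        count, pos = 0, mm.find(b"\n")
        while pos != -1:
            count += 1
            pos = mm.find(b"\n", pos + 1)
        return count
```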
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
**Throw hardware at the problem.** As gs pointed out, your bottleneck is the hard disk transfer rate. So, no you can't use a better algorithm to improve your time, but you can buy a faster hard drive. **Edit:** Another good point by gs; you could also use a [RAID](http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks) configuration to improve your speed. This can be done either with [hardware](http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrlHardware-c.html) or software (e.g. [OS X](http://www.frozennorth.org/C2011481421/E20060221212020/index.html), [Linux](http://linux-raid.osdl.org/index.php/Linux_Raid), [Windows Server](http://www.techotopia.com/index.php/Creating_and_Managing_Windows_Server_2008_Striped_(RAID_0)_Volumes), etc). --- **Governing Equation** `(Amount to transfer) / (transfer rate) = (time to transfer)` `(6000 MB) / (60 MB/s) = 100 seconds` `(6000 MB) / (125 MB/s) = 48 seconds` --- **Hardware Solutions** [The ioDrive Duo](http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=storage&articleId=9129644&taxonomyId=19&intsrc=kc_top) is supposedly the fastest solution for a corporate setting, and "will be available in April 2009". Or you could check out the WD Velociraptor hard drive (10,000 rpm). Also, I hear the Seagate [Cheetah](http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_5.pdf) is a good option (15,000 rpm with sustained 125MB/s transfer rate).
The trick is not to make electrons move faster (that's hard to do) but to get more work done per unit of time. First, be sure your 6GB file read is I/O bound, not CPU bound. If it's I/O bound, consider the "Fan-Out" design pattern. * A parent process spawns a bunch of children. * The parent reads the 6GB file, and deals rows out to the children by writing to their STDIN pipes. The 6GB read time will remain constant. The row dealing should involve as little parent processing as possible. Very simple filters or counts should be used. A pipe is an in-memory channel for communication. It's a shared buffer with a reader and a writer. * Each child reads a row from STDIN, and does appropriate work. Each child should probably write a simple disk file with the final (summarized, reduced) results. Later, the results in those files can be consolidated.
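A rough sketch of the fan-out pattern described above, using `multiprocessing` queues in place of raw STDIN pipes. The worker count, file name, and per-row "work" (here just counting) are all illustrative:

```python
import multiprocessing as mp

def worker(q, results):
    # Each child pulls rows until it sees the None sentinel;
    # counting stands in for the real per-row work.
    count = 0
    for _row in iter(q.get, None):
        count += 1
    results.put(count)

def fan_out(path, n_workers=4):
    q, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(q, results)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    with open(path) as f:        # the single big read stays in the parent
        for row in f:
            q.put(row)
    for _ in procs:
        q.put(None)              # one sentinel per worker
    total = sum(results.get() for _ in procs)
    for p in procs:
        p.join()
    return total
```

As the answer notes, this only pays off when the per-row work dominates; for pure counting the queue overhead would swamp any gain.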
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
If you assume that a disk can read 60MB/s you'd need 6000 / 60 = 100 seconds, which is 1 minute 40 seconds. I don't think that you can get any faster because the disk is the bottleneck.
Note that Python I/O is implemented in C, so there is not much chance of speeding it up further.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
**Throw hardware at the problem.** As gs pointed out, your bottleneck is the hard disk transfer rate. So, no you can't use a better algorithm to improve your time, but you can buy a faster hard drive. **Edit:** Another good point by gs; you could also use a [RAID](http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks) configuration to improve your speed. This can be done either with [hardware](http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrlHardware-c.html) or software (e.g. [OS X](http://www.frozennorth.org/C2011481421/E20060221212020/index.html), [Linux](http://linux-raid.osdl.org/index.php/Linux_Raid), [Windows Server](http://www.techotopia.com/index.php/Creating_and_Managing_Windows_Server_2008_Striped_(RAID_0)_Volumes), etc). --- **Governing Equation** `(Amount to transfer) / (transfer rate) = (time to transfer)` `(6000 MB) / (60 MB/s) = 100 seconds` `(6000 MB) / (125 MB/s) = 48 seconds` --- **Hardware Solutions** [The ioDrive Duo](http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=storage&articleId=9129644&taxonomyId=19&intsrc=kc_top) is supposedly the fastest solution for a corporate setting, and "will be available in April 2009". Or you could check out the WD Velociraptor hard drive (10,000 rpm). Also, I hear the Seagate [Cheetah](http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_5.pdf) is a good option (15,000 rpm with sustained 125MB/s transfer rate).
2 minutes sounds about right to read an entire 6GB file. There's not really much you can do to the algorithm or the OS to speed things up. I think you have two options: 1. Throw money at the problem and get better hardware. Probably the best option if this project is for your job. 2. Don't read the entire file. I don't know what you are trying to do with the data, so maybe you don't have any option but to read the whole thing. On the other hand, if you are scanning the whole file for one particular thing, then maybe putting some metadata in there at the start would be helpful.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
The trick is not to make electrons move faster (that's hard to do) but to get more work done per unit of time. First, be sure your 6GB file read is I/O bound, not CPU bound. If it's I/O bound, consider the "Fan-Out" design pattern. * A parent process spawns a bunch of children. * The parent reads the 6GB file, and deals rows out to the children by writing to their STDIN pipes. The 6GB read time will remain constant. The row dealing should involve as little parent processing as possible. Very simple filters or counts should be used. A pipe is an in-memory channel for communication. It's a shared buffer with a reader and a writer. * Each child reads a row from STDIN, and does appropriate work. Each child should probably write a simple disk file with the final (summarized, reduced) results. Later, the results in those files can be consolidated.
2 minutes sounds about right to read an entire 6GB file. There's not really much you can do to the algorithm or the OS to speed things up. I think you have two options: 1. Throw money at the problem and get better hardware. Probably the best option if this project is for your job. 2. Don't read the entire file. I don't know what you are trying to do with the data, so maybe you don't have any option but to read the whole thing. On the other hand, if you are scanning the whole file for one particular thing, then maybe putting some metadata in there at the start would be helpful.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
As others have said - "no". Almost all of your time is spent waiting for IO. If this is something that you need to do more than once, *and* you have a machine with tons of RAM, you could keep the file in memory. If your machine has 16GB of RAM, you'll have 8GB available at /dev/shm to play with. Another option: If you have multiple machines, this problem is trivial to parallelize. Split it among multiple machines, have each of them count its newlines, and add the results.
PyPy provides optimised input/output that can be up to 7 times faster.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
If you assume that a disk can read 60MB/s you'd need 6000 / 60 = 100 seconds, which is 1 minute 40 seconds. I don't think that you can get any faster because the disk is the bottleneck.
2 minutes sounds about right to read an entire 6GB file. There's not really much you can do to the algorithm or the OS to speed things up. I think you have two options: 1. Throw money at the problem and get better hardware. Probably the best option if this project is for your job. 2. Don't read the entire file. I don't know what you are trying to do with the data, so maybe you don't have any option but to read the whole thing. On the other hand, if you are scanning the whole file for one particular thing, then maybe putting some metadata in there at the start would be helpful.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
If you assume that a disk can read 60MB/s you'd need 6000 / 60 = 100 seconds, which is 1 minute 40 seconds. I don't think that you can get any faster because the disk is the bottleneck.
PyPy provides optimised input/output that can be up to 7 times faster.
849,058
Consider this Python program: ``` import sys lc = 0 for line in open(sys.argv[1]): lc = lc + 1 print lc, sys.argv[1] ``` Running it on my 6GB text file, it completes in ~2 minutes. Question: **is it possible to go faster?** Note that the same time is required by: ``` wc -l myfile.txt ``` so, I suspect the answer to my question is just a plain "no". Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting tricks (like keeping a line count as metadata in the file). PS: I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them. See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections)
2009/05/11
[ "https://Stackoverflow.com/questions/849058", "https://Stackoverflow.com", "https://Stackoverflow.com/users/25891/" ]
**plain "no".** You've pretty much reached maximum disk speed. I mean, you could [mmap](http://docs.python.org/library/mmap.html) the file, or read it in binary chunks, and use `.count('\n')` or something. But that is unlikely to give major improvements.
This is a bit of an old question, but one idea I've recently tested out in my petabyte project was the speed benefit of compressing data, then using compute to decompress it into memory. I used a gigabyte as a standard, but using [`zlib`](https://docs.python.org/3/library/zlib.html) you can get really impressive file size reductions. Once you've reduced your file size, when you go to iterate through this file you just: 1. Load the smaller file into memory (or use a stream object). 2. Decompress it (as a whole, or using the stream object to get chunks of decompressed data). 3. Work on the decompressed file data as you wish. I've found this process is 3x faster in the best case than using native I/O bound tasks. It's a bit outside of the question, but it's an old one and people may find it useful. --- Example: `compress.py` ```py import zlib with open("big.csv", "rb") as f: compressed = zlib.compress(f.read()) with open("big_comp.csv", "wb") as f: f.write(compressed) ``` `iterate.py` ```py import zlib with open("big_comp.csv", "rb") as f: big = zlib.decompress(f.read()) for line in big.split(b"\n"): # decompressed data is bytes pass # stand-in for the real per-line work ```
58,837,883
I was automating software installation using Python's pyautogui module. I crop some images from the installation screen, like for clicking next and accepting the terms and conditions. Using image search I am able to locate the image on the screen and click on the right areas. It works fine on my system. However, the script does not work on other systems, as the image search is unsuccessful. Maybe because the image is cropped on my system and searched for on another system. The resolutions of both systems are the same but the screen sizes are different (like 15 inch, 17 inch). My question is: is the function locateOnScreen compatible across different machines? How can I resolve this problem, given that I need to deploy this automation across multiple systems in the company? The code is pasted below: ``` import os import time import pyautogui from pywinauto.application import Application fsv = Application(backend="win32").start("sandra_24.61.exe") while(1): s = pyautogui.locateOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\ok.png") if (s==None): print("wait for 1 sec for ok button to come") time.sleep(1) else: pyautogui.click(s.left,s.top) print("Ok clicked") break while(1): s = pyautogui.locateOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\acceptRadio.png") if (s==None): print("wait for 1 sec for accept radio button to come") time.sleep(1) else: x=s.left y=s.top pyautogui.click(s.left,s.top) print("accept clicked") break; time.sleep(2) x = x+366 y=y+78 pyautogui.click(x,y) print("next clicked") time.sleep(2) pyautogui.click(x,y) time.sleep(2) print("next clicked") time.sleep(2) pyautogui.click(x,y) print("next clicked") time.sleep(2) pyautogui.click(x,y) time.sleep(2) print("next clicked") pyautogui.click(x,y) time.sleep(2) print("next clicked") pyautogui.click(x,y) print("install clicked") time.sleep(50) while(1): time.sleep(2) try: x,y = pyautogui.locateCenterOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\finish.png") pyautogui.click(x,y) break except: print("Exception occurred") print("Sandra is successfully installed.") ```
2019/11/13
[ "https://Stackoverflow.com/questions/58837883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2853557/" ]
As far as I can tell, the problem is with image resolution. In my company, I also have a robot that automates some complex tasks. All the monitors here are the same, but still I was facing some trouble matching images: a cropped image from one PC was not working on another. So what I am doing right now is using the "Snipping Tool" to take screenshots on every PC. This solves the problem easily, but it takes time. If you are not using more than 10 or 20 different PCs, then this solution may help. If the problem persists, you may try reducing the **confidence level** like below: ``` x,y = pyautogui.locateCenterOnScreen("C:\\WV_Project\\testcaseAutomation\\images\\finish.png", grayscale=True, confidence=.5) ``` Try using different confidence levels. You will also need OpenCV to use `confidence`; install it with "pip install opencv-python" from the command prompt.
You need to match the resolution: if you capture the reference images at `1366x768` and then run the script at 1920x1080, it will never work, because you need new images there. There are a few ways you can handle this: 1. replace the reference image on each machine; 2. use the x,y position with a lowered confidence level: ``` xy = pyautogui.locateCenterOnScreen('your directory', grayscale=True, confidence=.5) pyautogui.click(xy) ``` 3. capture the icon with inspect: **right-click, click on inspect, go to Applications and find that image in the image folder; save it and it will work on all screens.**
41,734,584
I have some background in machine learning and python, but I am just learning TensorFlow. I am going through the [tutorial on deep convolutional neural nets](https://www.tensorflow.org/tutorials/deep_cnn/) to teach myself how to use it for image classification. Along the way there is an exercise, which I am having trouble completing. **EXERCISE:** The model architecture in inference() differs slightly from the CIFAR-10 model specified in cuda-convnet. In particular, the top layers of Alex's original model are locally connected and not fully connected. Try editing the architecture to exactly reproduce the locally connected architecture in the top layer. The exercise refers to the inference() function in the [cifar10.py model](https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py). The 2nd to last layer (called local4) has a shape=[384, 192], and the top layer has a shape=[192, NUM\_CLASSES], where NUM\_CLASSES=10 of course. I think the code that we are asked to edit is somewhere in the code defining the top layer: ``` with tf.variable_scope('softmax_linear') as scope: weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES], stddev=1/192.0, wd=0.0) biases = _variable_on_cpu('biases', [NUM_CLASSES], tf.constant_initializer(0.0)) softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name) _activation_summary(softmax_linear) ``` But I don't see any code that determines the probability of connecting between layers, so I don't know how we can change the model from fully connected to locally connected. Does somebody know how to do this?
2017/01/19
[ "https://Stackoverflow.com/questions/41734584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6862133/" ]
I'm also working on this exercise. I'll try and explain my approach properly, rather than just give the solution. It's worth looking back at the mathematics of a fully connected layer (<https://www.tensorflow.org/get_started/mnist/beginners>). So the linear algebra for a fully connected layer is: *y = W \* x + b* where *x* is the n dimensional input vector, *b* is an *n* dimensional vector of biases, and *W* is an *n*-by-*n* matrix of weights. The *i* th element of *y* is the sum of the *i* th row of W multiplied element-wise with *x*. So....if you only want *y[i]* connected to *x[i-1]*, *x[i]*, and *x[i+1]*, you simply set all values in the *i* th row of *W* to zero, apart from the *(i-1)* th, *i* th and *(i+1)* th column of that row. Therefore to create a locally connected layer, you simply enforce *W* to be a banded matrix (<https://en.wikipedia.org/wiki/Band_matrix>), where the size of the band is equal to the size of the locally connected neighbourhoods you want. Tensorflow has a function for setting a matrix to be banded (`tf.batch_matrix_band_part(input, num_lower, num_upper, name=None)`). This seems to me to be the simplest mathematical solution to the exercise.
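The banded-matrix idea from this answer can be sketched independently of TensorFlow (where the op mentioned above now lives at `tf.linalg.band_part` in recent versions). The helper below, using NumPy, builds the band mask and applies it to a weight matrix; all names are illustrative:

```python
import numpy as np

def band_mask(n, lower=1, upper=1):
    # 1.0 inside the band, 0.0 outside: after masking the weights,
    # y[i] depends only on x[i-lower] .. x[i+upper].
    i, j = np.indices((n, n))
    return ((j - i <= upper) & (i - j <= lower)).astype(float)

W = np.random.randn(5, 5) * band_mask(5)  # locally connected weights
```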
I'll try to answer your question, although I'm not 100% sure I got it right either. Looking at the cuda-convnet [architecture](https://github.com/akrizhevsky/cuda-convnet2/blob/master/layers/layers-cifar10-11pct.cfg) we can see that the TensorFlow and cuda-convnet implementations start to differ after the second pooling layer. The TensorFlow implementation uses two fully connected layers and a softmax classifier; cuda-convnet uses two locally connected layers, one fully connected layer and a softmax classifier. The code snippet you included refers only to the softmax classifier and is in fact shared between the two implementations. To reproduce the cuda-convnet implementation in TensorFlow, we have to replace the existing fully connected layers with two locally connected layers and one fully connected layer. Since TensorFlow doesn't ship locally connected layers as part of the SDK, we have to figure out a way to implement them using the existing tools. Here is my attempt to implement the first locally connected layer: ``` with tf.variable_scope('local3') as scope: shape = pool2.get_shape() h = shape[1].value w = shape[2].value sz_local = 3 # kernel size sz_patch = (sz_local**2)*shape[3].value n_channels = 64 # Extract 3x3 tensor patches patches = tf.extract_image_patches(pool2, [1,sz_local,sz_local,1], [1,1,1,1], [1,1,1,1], 'SAME') weights = _variable_with_weight_decay('weights', shape=[1,h,w,sz_patch, n_channels], stddev=5e-2, wd=0.0) biases = _variable_on_cpu('biases', [h,w,n_channels], tf.constant_initializer(0.1)) # "Filter" each patch with its own kernel mul = tf.multiply(tf.expand_dims(patches, axis=-1), weights) ssum = tf.reduce_sum(mul, axis=3) pre_activation = tf.add(ssum, biases) local3 = tf.nn.relu(pre_activation, name=scope.name) ```
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
Of course. There is no need to create a base class or an interface in this case either, as everything is dynamic.
Yes, this is possible. There are typically no impediments to doing so: just keep a stable API and change how you implement it.
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
For people coming from a strongly typed language background, Python does not need a class interface. You can simulate it using a base class. ``` class BaseAccess: def open(self): raise NotImplementedError() class Pop3Access(BaseAccess): def open(self): ... class AlternateAccess(BaseAccess): def open(self): ... ``` But you can easily write the same code without using BaseAccess. Strongly typed languages need the interface for type checking at compile time. For Python this is not necessary, because everything is looked up dynamically at run time. [Google 'duck typing'](http://www.google.com/search?ie=UTF-8&q='duck+typing') for its philosophy. There is an Abstract Base Classes module added in Python 2.6, but I haven't used it yet.
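As a minimal runnable sketch of the point above (the `fetch` method and the return strings are invented for illustration, not a real POP3 API), two implementations that share the same method signature can be swapped freely without any declared interface:

```python
class Pop3Access:
    """Hypothetical wrapper around a commercial POP3 component."""
    def fetch(self, mailbox):
        return f"pop3: fetched {mailbox}"

class AlternateAccess:
    """Home-grown replacement exposing the same method signature."""
    def fetch(self, mailbox):
        return f"alternate: fetched {mailbox}"

def read_mail(access, mailbox):
    # Works with any object that has a fetch() method -- duck typing.
    return access.fetch(mailbox)

print(read_mail(Pop3Access(), "inbox"))       # pop3: fetched inbox
print(read_mail(AlternateAccess(), "inbox"))  # alternate: fetched inbox
```

The calling code (`read_mail`) never names a concrete class, so swapping components means changing only the line that constructs the access object.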
Yes, this is possible. There are typically no impediments to doing so: just keep a stable API and change how you implement it.
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
One option is to use [zope interfaces](http://docs.zope.org/zope2/zdgbook/ComponentsAndInterfaces.html#python-interfaces). However, as was stated by [Wai Yip Tung](https://stackoverflow.com/questions/2860106/creating-an-interface-and-swappable-implementations-in-python/2860419#2860419), you do not need to use interfaces to achieve the same results. The `zope.interface` package is really more a tool for discovering how to interact with objects (generally within large code bases with multiple developers).
Yes, this is possible. There are typically no impediments to doing so: just keep a stable API and change how you implement it.
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
For people coming from a strongly typed language background, Python does not need a class interface. You can simulate it using a base class. ``` class BaseAccess: def open(self): raise NotImplementedError() class Pop3Access(BaseAccess): def open(self): ... class AlternateAccess(BaseAccess): def open(self): ... ``` But you can easily write the same code without using BaseAccess. Strongly typed languages need the interface for type checking at compile time. For Python this is not necessary, because everything is looked up dynamically at run time. [Google 'duck typing'](http://www.google.com/search?ie=UTF-8&q='duck+typing') for its philosophy. There is an Abstract Base Classes module added in Python 2.6, but I haven't used it yet.
Of course. There is no need to create a base class or an interface in this case either, as everything is dynamic.
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
Of course. There is no need to create a base class or an interface in this case either, as everything is dynamic.
One option is to use [zope interfaces](http://docs.zope.org/zope2/zdgbook/ComponentsAndInterfaces.html#python-interfaces). However, as was stated by [Wai Yip Tung](https://stackoverflow.com/questions/2860106/creating-an-interface-and-swappable-implementations-in-python/2860419#2860419), you do not need to use interfaces to achieve the same results. The `zope.interface` package is really more a tool for discovering how to interact with objects (generally within large code bases with multiple developers).
2,860,106
Would it be possible to create a class interface in python and various implementations of the interface. Example: I want to create a class for pop3 access (and all methods etc.). If I go with a commercial component, I want to wrap it to adhere to a contract. In the future, if I want to use another component or code my own, I want to be able to swap things out and not have things very tightly coupled. Possible? I'm new to python.
2010/05/18
[ "https://Stackoverflow.com/questions/2860106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
For people coming from a strongly typed language background, Python does not need a class interface. You can simulate it using a base class. ``` class BaseAccess: def open(self): raise NotImplementedError() class Pop3Access(BaseAccess): def open(self): ... class AlternateAccess(BaseAccess): def open(self): ... ``` But you can easily write the same code without using BaseAccess. Strongly typed languages need the interface for type checking at compile time. For Python this is not necessary, because everything is looked up dynamically at run time. [Google 'duck typing'](http://www.google.com/search?ie=UTF-8&q='duck+typing') for its philosophy. There is an Abstract Base Classes module added in Python 2.6, but I haven't used it yet.
One option is to use [zope interfaces](http://docs.zope.org/zope2/zdgbook/ComponentsAndInterfaces.html#python-interfaces). However, as was stated by [Wai Yip Tung](https://stackoverflow.com/questions/2860106/creating-an-interface-and-swappable-implementations-in-python/2860419#2860419), you do not need to use interfaces to achieve the same results. The `zope.interface` package is really more a tool for discovering how to interact with objects (generally within large code bases with multiple developers).
6,256,369
I want to create a "full file name" variable from several other variables, but the string concatenation and string format operations aren't behaving the way I expect. My code is below: ``` file_date = str(input("Enter file date: ")) root_folder = "\\\\SERVER\\FOLDER\\" file_prefix = "sample_file_" file_extension = ".txt" print("") print("Full file name with concatenation: ") print(root_folder + file_prefix + file_date + file_extension) print("Full file name with concatenation, without file_extension: ") print(root_folder + file_prefix + file_date) print("") print("") print("Full file name with string formatting: ") print("%s%s%s%s" % (root_folder, file_prefix, file_date, file_extension)) print("Full file name with string formatting, without file_extension: ") print("%s%s%s" % (root_folder, file_prefix, file_date)) print("") ``` The output when I run the script is: ``` C:\Temp>python test.py Enter file date: QT1 Full file name with concatenation: .txtRVER\FOLDER\sample_file_QT1 Full file name with concatenation, without file_extension: \\SERVER\FOLDER\sample_file_QT1 Full file name with string formatting: .txtRVER\FOLDER\sample_file_QT1 Full file name with string formatting, without file_extension: \\SERVER\FOLDER\sample_file_QT1 ``` I was expecting it to concatenate the ".txt" at the very end, except it's replacing the first four characters of the string with it instead. How do I concatenate the extension variable to the end of the string instead of having it replace the first n characters of the string? In addition to how to solve this particular problem, I'd like to know why I ran into it in the first place. What did I do wrong/what Python 3.2 behavior am I not aware of?
2011/06/06
[ "https://Stackoverflow.com/questions/6256369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/467055/" ]
I think the method input used in your example, like so: ``` file_date = str(input("Enter file date: ")) ``` may be returning a carriage return character at the end. This causes the cursor to go back to the start of the line when you try to print it out. You may want to trim the return value of input().
Use this line instead to get rid of the line feed: ``` file_date = str(input("Enter file date: ")).rstrip() ```
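A minimal sketch of the fix, simulating what `input()` may have returned with a stray trailing carriage return (the `"QT1\r"` value is made up for illustration):

```python
raw = "QT1\r"             # what input() can return with a stray carriage return
file_date = raw.rstrip()  # strips the trailing "\r" (and any other whitespace)

root_folder = "\\\\SERVER\\FOLDER\\"
file_prefix = "sample_file_"
file_extension = ".txt"

full = root_folder + file_prefix + file_date + file_extension
print(full)  # \\SERVER\FOLDER\sample_file_QT1.txt
```

Without the `rstrip()`, the embedded `\r` moves the cursor back to the start of the line when printing, so `.txt` appears to overwrite the first characters even though the concatenation itself was correct.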
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
Worked for me on Ubuntu 18.04 as well: ``` $ sudo apt-get install graphviz ```
On mac, use Brew to install graphviz and not pip, see links: graphviz information: <http://www.graphviz.org/download/> brew installation: <https://brew.sh/> So typing the following in the terminal after you install brew should work: ``` brew install graphviz ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
For Windows users: 1. Install Graphviz. 2. Add the Graphviz path to the PATH environment variable. 3. Restart PyCharm or whatever IDE you use. As of version 2.31, the Visual Studio package no longer alters the PATH variable or accesses the registry at all. If you wish to use the command-line interface to Graphviz or are using some other program that calls a Graphviz program, you will need to set the PATH variable yourself.
I would suggest avoiding graphviz and using the following alternative approach instead: ``` from sklearn.tree import plot_tree import matplotlib.pyplot as plt plt.figure(figsize=(60, 30)) plot_tree(dt, filled=True)  # dt is your fitted decision tree estimator ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
``` brew install graphviz pip install -U pydotplus ``` ... worked for me on MacOSX
Worked for me on Ubuntu 18.04 as well: ``` $ sudo apt-get install graphviz ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
cel answered this in the comments: > > Graphviz is not a python tool. The python packages at pypi provide a > convenient way of using Graphviz in python code. You still have to > install the Graphviz executables, which are not pythonic, thus not > shipped with these packages. You can install those e.g. with a > general-purpose package manager such as homebrew > > > For me personally, on Ubuntu 14.04, all I had to do was: ``` sudo apt-get install graphviz ```
On Windows 8 this solved the same problem for me: ``` import os os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/' ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
On Windows 8 this solved the same problem for me: ``` import os os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/' ```
If you are on the Mac operating system, you might face this issue. I had installed graphviz with pip, but it didn't work, so I had to install it again with brew, which worked for me. **Use the following command:** > > brew install graphviz > > >
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
On Windows 8 this solved the same problem for me: ``` import os os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/' ```
I would suggest avoiding graphviz and using the following alternative approach instead: ``` from sklearn.tree import plot_tree import matplotlib.pyplot as plt plt.figure(figsize=(60, 30)) plot_tree(dt, filled=True)  # dt is your fitted decision tree estimator ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
``` brew install graphviz pip install -U pydotplus ``` ... worked for me on MacOSX
I had the same issue when installing pydot and graphviz with pip, then I found the answer [here](https://groups.google.com/forum/#!topic/caffe-users/4r5dxoFpWxk). In particular, I first uninstalled the pydot and graphviz packages I had installed separately with pip (using `sudo pip uninstall pydot` and the same for `graphviz`). Then I ran `sudo apt-get install python-pydot`, which fixed the issue.
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
I was facing the same issue; my problem got resolved as follows: 1. Run the command `sudo port install graphviz` 2. If the `port` command is missing, first install MacPorts from the link below, choosing the build that matches your OS version: <https://guide.macports.org/chunked/installing.macports.html> 3. After installing MacPorts, run `sudo port install graphviz` again. Restart the Python kernel if you are using IPython and run again.
I would suggest avoiding graphviz and using the following alternative approach instead: ``` from sklearn.tree import plot_tree import matplotlib.pyplot as plt plt.figure(figsize=(60, 30)) plot_tree(dt, filled=True)  # dt is your fitted decision tree estimator ```
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
I had the same issue when installing pydot and graphviz with pip, then I found the answer [here](https://groups.google.com/forum/#!topic/caffe-users/4r5dxoFpWxk). In particular, I first uninstalled the pydot and graphviz packages I had installed separately with pip (using `sudo pip uninstall pydot` and the same for `graphviz`). Then I ran `sudo apt-get install python-pydot`, which fixed the issue.
For Windows users: 1. Install Graphviz. 2. Add the Graphviz path to the PATH environment variable. 3. Restart PyCharm or whatever IDE you use. As of version 2.31, the Visual Studio package no longer alters the PATH variable or accesses the registry at all. If you wish to use the command-line interface to Graphviz or are using some other program that calls a Graphviz program, you will need to set the PATH variable yourself.
27,666,846
I try to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but get the following error message: > > File "coco.py", line 18, in > graph.write\_pdf("iris.pdf") File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1602, in > lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File > "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1696, in write > dot\_fd.write(self.create(prog, format)) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", > line 1727, in create > 'GraphViz\'s executables not found' ) pydot.InvocationException: GraphViz's executables not found > > > I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite). Any ideas? Would appreciate the help.
2014/12/27
[ "https://Stackoverflow.com/questions/27666846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4397694/" ]
Worked for me on Ubuntu 18.04 as well: ``` $ sudo apt-get install graphviz ```
I faced a similar issue, and it was corrected by changing the path. This is what I did: copy the Graphviz path from your computer into the Path environment variable via the Control Panel. Example Graphviz path: C:\Apps\Program Files\Continuum\Anaconda2\Library\bin\graphviz (I had installed it in the Apps folder; it could be in a different path for you). Setting the path in Environment Variables: go to Control Panel > System and Security > System, click Advanced System Settings, then Advanced. You'll find Environment Variables at the bottom right. Click Path to edit it and save. Close your IDE and reopen it. It worked for me.
3,076,928
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL. With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error: ``` Problem installing fixture 'dump.json': Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle obj.save() File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base rows = manager.filter(pk=pk_val)._update(values) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update return query.execute_sql(None) File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql cursor = super(UpdateQuery, self).execute_sql(result_type) File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" ``` Is this not the correct way to move data from one database to another? How should I switch database backend safely?
2010/06/19
[ "https://Stackoverflow.com/questions/3076928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/66107/" ]
The problem is simply that you're getting the content types defined twice - once when you do `syncdb`, and once from the exported data you're trying to import. Since you may well have other items in your database that depend on the original content type definitions, I would recommend keeping those. So, after running `syncdb`, do `manage.py dbshell` and in your database do `TRUNCATE django_content_type;` to remove all the newly-defined content types. Then you shouldn't get any conflicts - on that part of the process, in any case.
There is a big discussion about it on the [Django ticket 7052](http://code.djangoproject.com/ticket/7052). The right way now is to use the `--natural` parameter, example: `./manage.py dumpdata --natural --format=xml --indent=2 > fixture.xml` In order for `--natural` to work with your models, they must implement `natural_key` and `get_by_natural_key`, as described on [the Django documentation regarding natural keys](http://docs.djangoproject.com/en/1.3/topics/serialization/). Having said that, you might still need to edit the data before importing it with `./manage.py loaddata`. For instance, if your applications changed, `syncdb` will populate the table `django_content_type` and you might want to delete the respective entries from the xml-file before loading it.
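Django's real hooks live on the model and its manager, so the following plain-Python sketch (no Django required; the class, fields, and registry are invented for illustration) only shows the shape of the `natural_key` / `get_by_natural_key` pair that `dumpdata --natural` and `loaddata` rely on:

```python
# Idea: instead of serializing a row's auto-increment pk, serialize a
# tuple of fields that identifies it naturally, and look the existing
# row up by that tuple on load -- so pk collisions between databases
# no longer matter.

class ContentType:
    registry = {}  # stands in for the database table

    def __init__(self, app_label, model):
        self.app_label, self.model = app_label, model
        ContentType.registry[(app_label, model)] = self

    def natural_key(self):
        # What the serializer writes instead of the pk.
        return (self.app_label, self.model)

    @classmethod
    def get_by_natural_key(cls, app_label, model):
        # What loaddata uses to find the existing row.
        return cls.registry[(app_label, model)]

ct = ContentType("auth", "user")
assert ContentType.get_by_natural_key(*ct.natural_key()) is ct
```

In a real project, `natural_key()` goes on the model and `get_by_natural_key()` on its custom manager, as described in the Django serialization docs linked above.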
3,076,928
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL. With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error: ``` Problem installing fixture 'dump.json': Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle obj.save() File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base rows = manager.filter(pk=pk_val)._update(values) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update return query.execute_sql(None) File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql cursor = super(UpdateQuery, self).execute_sql(result_type) File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" ``` Is this not the correct way to move data from one database to another? How should I switch database backend safely?
2010/06/19
[ "https://Stackoverflow.com/questions/3076928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/66107/" ]
The problem is simply that you're getting the content types defined twice - once when you do `syncdb`, and once from the exported data you're trying to import. Since you may well have other items in your database that depend on the original content type definitions, I would recommend keeping those. So, after running `syncdb`, do `manage.py dbshell` and in your database do `TRUNCATE django_content_type;` to remove all the newly-defined content types. Then you shouldn't get any conflicts - on that part of the process, in any case.
This worked for me. You probably want to ensure the server is stopped so no new data is lost.

Dump it:

```
$ python manage.py dumpdata --exclude auth.permission --exclude contenttypes --natural > db.json
```

Make sure your models don't have signals (e.g. post\_save) or anything that creates models. If you do, comment it out momentarily.

Edit settings.py to point to the new database and set it up:

```
$ python manage.py syncdb
$ python manage.py migrate
```

Load the data:

```
./manage.py loaddata db.json
```
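Along the same lines, the excluded models can also be stripped from an existing dump after the fact with nothing but the stdlib. A sketch (file handling elided; the fixture content here is a made-up miniature of dump.json):

```python
import json

# A miniature stand-in for the contents of dump.json.
dump = json.loads("""[
  {"model": "contenttypes.contenttype", "pk": 1, "fields": {"app_label": "auth", "model": "user"}},
  {"model": "auth.permission", "pk": 1, "fields": {"name": "Can add user"}},
  {"model": "myapp.profile", "pk": 1, "fields": {"name": "x"}}
]""")

def strip_models(fixture, excluded=("contenttypes.contenttype", "auth.permission")):
    # Keep only the entries whose model label is not in the excluded set.
    return [entry for entry in fixture if entry.get("model") not in excluded]

cleaned = strip_models(dump)
```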
3,076,928
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL. With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error: ``` Problem installing fixture 'dump.json': Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle obj.save() File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base rows = manager.filter(pk=pk_val)._update(values) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update return query.execute_sql(None) File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql cursor = super(UpdateQuery, self).execute_sql(result_type) File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" ``` Is this not the correct way to move data from one database to another? How should I switch database backend safely?
2010/06/19
[ "https://Stackoverflow.com/questions/3076928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/66107/" ]
The problem is simply that you're getting the content types defined twice - once when you do `syncdb`, and once from the exported data you're trying to import. Since you may well have other items in your database that depend on the original content type definitions, I would recommend keeping those. So, after running `syncdb`, do `manage.py dbshell` and in your database do `TRUNCATE django_content_type;` to remove all the newly-defined content types. Then you shouldn't get any conflicts - on that part of the process, in any case.
I used pgloader; it took just a few seconds to migrate successfully:

```
$ pgloader project.load
```

with a project.load file containing:

```
load database
    from sqlite:////path/to/dev.db
    into postgresql://user:pwd@localhost/db_name
with include drop, create tables, create indexes, reset sequences
set work_mem to '16MB', maintenance_work_mem to '512 MB';
```
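For a sense of what pgloader is doing under the hood (reading every sqlite row and replaying it into the target), here is a deliberately simplified stdlib sketch that turns a table into INSERT statements. It uses an in-memory database and a made-up table; a real migration should use parameterized queries against the target connection and handle types properly:

```python
import sqlite3

# Build a throwaway source database standing in for dev.db.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE profile (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO profile (id, name) VALUES (?, ?)",
                [(1, "alice"), (2, "bob")])

def table_as_inserts(conn, table):
    # Emit one literal INSERT per row; repr() is a crude stand-in for
    # proper SQL quoting and is fine only for this sketch.
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
    rows = conn.execute(f"SELECT {', '.join(cols)} FROM {table}").fetchall()
    return [
        f"INSERT INTO {table} ({', '.join(cols)}) "
        f"VALUES ({', '.join(repr(v) for v in row)});"
        for row in rows
    ]

statements = table_as_inserts(src, "profile")
```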
3,076,928
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL. With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error: ``` Problem installing fixture 'dump.json': Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle obj.save() File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base rows = manager.filter(pk=pk_val)._update(values) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update return query.execute_sql(None) File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql cursor = super(UpdateQuery, self).execute_sql(result_type) File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" ``` Is this not the correct way to move data from one database to another? How should I switch database backend safely?
2010/06/19
[ "https://Stackoverflow.com/questions/3076928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/66107/" ]
There is a big discussion about it on the [Django ticket 7052](http://code.djangoproject.com/ticket/7052). The right way now is to use the `--natural` parameter, example: `./manage.py dumpdata --natural --format=xml --indent=2 > fixture.xml` In order for `--natural` to work with your models, they must implement `natural_key` and `get_by_natural_key`, as described on [the Django documentation regarding natural keys](http://docs.djangoproject.com/en/1.3/topics/serialization/). Having said that, you might still need to edit the data before importing it with `./manage.py loaddata`. For instance, if your applications changed, `syncdb` will populate the table `django_content_type` and you might want to delete the respective entries from the xml-file before loading it.
This worked for me. You probably want to ensure the server is stopped so no new data is lost. Dump it: ``` $ python manage.py dumpdata --exclude auth.permission --exclude contenttypes --natural > db.json ``` Make sure your models don't have signals (e.g. post\_save) or anything that creates models. If you do, comment it out momentarily. Edit settings.py to point to the new database and set it up: ``` $ python manage.py syncdb $ python manage.py migrate ``` Load the data: ``` ./manage.py loaddata db.json ```
3,076,928
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL. With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error: ``` Problem installing fixture 'dump.json': Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle obj.save() File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base rows = manager.filter(pk=pk_val)._update(values) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update return query.execute_sql(None) File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql cursor = super(UpdateQuery, self).execute_sql(result_type) File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key" ``` Is this not the correct way to move data from one database to another? How should I switch database backend safely?
2010/06/19
[ "https://Stackoverflow.com/questions/3076928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/66107/" ]
There is a big discussion about it on the [Django ticket 7052](http://code.djangoproject.com/ticket/7052). The right way now is to use the `--natural` parameter, example: `./manage.py dumpdata --natural --format=xml --indent=2 > fixture.xml` In order for `--natural` to work with your models, they must implement `natural_key` and `get_by_natural_key`, as described on [the Django documentation regarding natural keys](http://docs.djangoproject.com/en/1.3/topics/serialization/). Having said that, you might still need to edit the data before importing it with `./manage.py loaddata`. For instance, if your applications changed, `syncdb` will populate the table `django_content_type` and you might want to delete the respective entries from the xml-file before loading it.
I used pgloader, just take a few seconds to migrate successfully: ``` $ pgloader project.load ``` project.load file with: ``` load database from sqlite:////path/to/dev.db into postgresql://user:pwd@localhost/db_name with include drop, create tables, create indexes, reset sequences set work_mem to '16MB', maintenance_work_mem to '512 MB'; ```
74,451,481
I have built a Python script that uses Python sockets to establish a connection between my Python application and my Python server. I have encrypted the data sent between the two systems. I was wondering if I should think of any other things related to security against hackers. Can they do something that could possibly steal data from my computer? Thanks in advance for the effort.
2022/11/15
[ "https://Stackoverflow.com/questions/74451481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20513644/" ]
To make this slightly more extensible, you can convert it to an object:

```js
function process(input) {
  let data = input.split("\n\n"); // split by double new line
  data = data.map(i => i.split("\n")); // split each pair
  data = data.map(i => i.reduce((obj, cur) => {
    const [key, val] = cur.split(": "); // get the key and value
    obj[key.toLowerCase()] = val; // lowercase the key to make it a nice object
    return obj;
  }, {}));
  return data;
}

const input = `Package: apple
Settings: scim
Architecture: amd32
Size: 2312312312

Package: banana
Architecture: xsl64
Version: 94.3223.2
Size: 23232

Package: orange
Architecture: bbl64
Version: 14.3223.2
Description: Something descrip more description to orange

Package: friday
SHA215: d3d223d3f2ddf2323d3
Person: XCXCS
Size: 2312312312`;

const data = process(input);
const { version } = data.find(({ package }) => package === "banana"); // query data
console.log("Banana version:", version);
```
These kinds of text extraction are always pretty fragile, so let me know if this works for your real inputs... Anyways, if we split by empty lines (which are really just double line breaks, `\n\n`), and then split each "paragraph" by `\n`, we get chunks of lines we can work with. Then we can just find the chunk that has the banana package, and then inside that chunk, we find the line that contains the version. Finally, we slice off `Version:` to get the version text.

```js
const text = `\
Package: apple
Settings: scim
Architecture: amd32
Size: 2312312312

Package: banana
Architecture: xsl64
Version: 94.3223.2
Size: 23232

Package: orange
Architecture: bbl64
Version: 14.3223.2
Description: Something descrip more description to orange

SHA215: d3d223d3f2ddf2323d3
Person: XCXCS
Size: 2312312312
`;

const chunks = text.split("\n\n").map((p) => p.split("\n"));

const version = chunks
  .find((info) =>
    info.some((line) => line === "Package: banana")
  )
  .find((line) =>
    line.startsWith("Version: ")
  )
  .slice("Version: ".length);

console.log(version);
```
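For reference, the same blank-line record splitting is just as direct in Python. A sketch, with a sample mirroring the text above:

```python
text = """Package: apple
Settings: scim
Size: 2312312312

Package: banana
Version: 94.3223.2
Size: 23232"""

# Split into records on blank lines, then each record into key/value pairs.
records = [
    dict(line.split(": ", 1) for line in chunk.splitlines())
    for chunk in text.split("\n\n")
]

banana = next(r for r in records if r.get("Package") == "banana")
```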
73,430,007
I'm building a website using Python and Django, but when I look in the admin, the names of the model items aren't showing up. [![enter image description here](https://i.stack.imgur.com/bPvtX.png)](https://i.stack.imgur.com/bPvtX.png) [![enter image description here](https://i.stack.imgur.com/Hfm0y.png)](https://i.stack.imgur.com/Hfm0y.png) So, the objects that I am building in the admin aren't showing their names.

admin.py:

```
from .models import Article, Author

# Register your models here.
@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    list_display = ['title', 'main_txt', 'date_of_publication']
    list_display_links = None
    list_editable = ['title', 'main_txt']

    def __str__(self):
        return self.title


@admin.register(Author)
class AuthorAdmin(admin.ModelAdmin):
    list_display = ['first_name', 'last_name', 'join_date', 'email', 'phone_num']
    list_display_links = ['join_date']
    list_editable = ['email', 'phone_num', ]

    def __str__(self):
        return f"{self.first_name} {self.last_name[0]}"
```

models.py:

```
# Create your models here.
class Author(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    date_of_birth = models.DateField()
    email = models.CharField(max_length=300)
    phone_num = models.CharField(max_length=15)
    join_date = models.DateField()
    participated_art = models.ManyToManyField('Article', blank=True)


class Article(models.Model):
    title = models.CharField(max_length=500)
    date_of_publication = models.DateField()
    creaters = models.ManyToManyField('Author', blank=False)
    main_txt = models.TextField()
    notes = models.TextField()
```
2022/08/20
[ "https://Stackoverflow.com/questions/73430007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19771403/" ]
Add the [`__str__()`](https://docs.python.org/3/reference/datamodel.html#object.__str__) method in the model itself instead of admin.py, so:

```py
class Author(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    date_of_birth = models.DateField()
    email = models.CharField(max_length=300)
    phone_num = models.CharField(max_length=15)
    join_date = models.DateField()
    participated_art = models.ManyToManyField('Article', blank=True)

    def __str__(self):
        return f"{self.first_name} {self.last_name}"


class Article(models.Model):
    title = models.CharField(max_length=500)
    date_of_publication = models.DateField()
    creaters = models.ManyToManyField('Author', blank=False)
    main_txt = models.TextField()
    notes = models.TextField()

    def __str__(self):
        return self.title
```
You need to specify a `def __str__(self)` method. Example:

```
class Author(models.Model):
    # ...
    def __str__(self):
        return self.first_name + ' ' + self.last_name
```
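The behaviour the admin relies on is plain Python: `str()` falls back to the default object representation unless the class defines `__str__`. A minimal Django-free sketch:

```python
class Plain:
    def __init__(self, title):
        self.title = title

class Named:
    def __init__(self, title):
        self.title = title

    def __str__(self):
        # str() (and therefore any default display) uses this.
        return self.title

# Without __str__, str() falls back to the default "<... object at 0x...>" repr.
default_display = str(Plain("My Article"))
nice_display = str(Named("My Article"))
```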
57,124,038
I am a beginner with Python and I am learning how to treat images. Given a square image (NxN), I would like to make it into a (N+2)x(N+2) image with a new layer of zeros around it. I would prefer not to use numpy and only stick with basic Python programming. Any idea on how to do so? Right now, I used .extend to add zeros on the right side and on the bottom, but I can't do it on the top and on the left. Thank you for your help!
2019/07/20
[ "https://Stackoverflow.com/questions/57124038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10154400/" ]
We can create a padding function that adds layers of zeros around an image (padding it).

```
def pad(img, layers):
    # img should be rectangular; build each zero row separately
    # (repeating one row with * would alias the same list object)
    width = len(img[0]) + 2 * layers
    return [[0] * width for _ in range(layers)] + \
           [[0] * layers + r + [0] * layers for r in img] + \
           [[0] * width for _ in range(layers)]
```

We can test with a sample image, such as:

```
i = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
```

So,

```
pad(i, 2)
```

gives:

```
[[0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 1, 2, 3, 0, 0],
 [0, 0, 4, 5, 6, 0, 0],
 [0, 0, 7, 8, 9, 0, 0],
 [0, 0, 0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0, 0, 0]]
```
I'm assuming that by image we're talking about a matrix; in that case you could do this:

```
img = [[5, 5, 5],
       [5, 5, 5],
       [5, 5, 5]]

row_len = len(img)
col_len = len(img[0])

new_image = list()
for n in range(row_len + 2):  # the padded image has two more rows
    if n == 0 or n == row_len + 1:
        new_image.append([0] * (col_len + 2))  # first and last rows are all zeroes
    else:
        new_image.append([0] + img[n - 1] + [0])  # add a zero to the front and back of each row

print(new_image)
# [[0, 0, 0, 0, 0], [0, 5, 5, 5, 0], [0, 5, 5, 5, 0], [0, 5, 5, 5, 0], [0, 0, 0, 0, 0]]
```
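One pitfall to keep in mind with any pure-Python padding code: building border rows with list multiplication, as in `[[0] * width] * 2`, stores two references to the same list, so mutating one border cell later changes both rows. A quick demonstration:

```python
width = 5

# Aliased: both entries point at one underlying list.
aliased = [[0] * width] * 2
aliased[0][0] = 9  # also visible through aliased[1]

# Independent: a fresh list is built on every iteration.
independent = [[0] * width for _ in range(2)]
independent[0][0] = 9  # only the first row changes
```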
53,440,086
I am trying to right click with the mouse and click "save as Image" in Selenium with Python. I was able to perform the right click with the following method; however, the next action after the right click does not work any more. How can I solve this problem?

```
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver

driver.get(url)

# get the image source
img = driver.find_element_by_xpath('//img')

actionChains = ActionChains(driver)
actionChains.context_click(img).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
2018/11/23
[ "https://Stackoverflow.com/questions/53440086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8263870/" ]
You can do the same functionality using pyautogui. Assuming you are using Windows:

```
pyautogui.position()             # prints the current cursor position, e.g. (187, 567)
pyautogui.moveTo(100, 200)       # move to the location where the right click is required
pyautogui.click(button='right')
pyautogui.hotkey('ctrl', 'c')    # keyboard Ctrl+C (copy shortcut)
```

Refer to the link below for more: <https://pyautogui.readthedocs.io/en/latest/keyboard.html>
You have to first move to the element where you want to perform the context click:

```py
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver

driver.get(url)

# get the image source
img = driver.find_element_by_xpath('//img')

actionChains = ActionChains(driver)
actionChains.move_to_element(img).context_click().send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
53,440,086
I am trying to right click with the mouse and click "save as Image" in Selenium with Python. I was able to perform the right click with the following method; however, the next action after the right click does not work any more. How can I solve this problem?

```
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver

driver.get(url)

# get the image source
img = driver.find_element_by_xpath('//img')

actionChains = ActionChains(driver)
actionChains.context_click(img).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
2018/11/23
[ "https://Stackoverflow.com/questions/53440086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8263870/" ]
The problem is that the send\_keys() method, after having created the context menu, sends the keys to the window, and not to the menu. So, there is no way to access the menu items.

I had a similar problem with downloading a canvas created in a webpage. Finally, I was able to download the image by executing JavaScript. I created a download element in order to manage the image. As it was a canvas, I previously had to execute the toDataURL method.

Here is my python code:

```
script_js = 'var dataURL = document.getElementsByClassName("_cx6")[0].toDataURL("image/png");' \
            'var link = document.createElement("a"); ' \
            'link.download = "{}_{}_{}";' \
            'link.href = dataURL;' \
            'document.body.appendChild(link);' \
            'link.click();' \
            'document.body.removeChild(link);' \
            'delete link;'.format(
                n,
                prefijo_nombre_archivo,
                sufijo_nombre_archivo
            )

driver.execute_script(script_js)
```

I hope it may help!
You can do the same functionality using pyautogui. Assuming you are using Windows:

```
pyautogui.position()             # prints the current cursor position, e.g. (187, 567)
pyautogui.moveTo(100, 200)       # move to the location where the right click is required
pyautogui.click(button='right')
pyautogui.hotkey('ctrl', 'c')    # keyboard Ctrl+C (copy shortcut)
```

Refer to the link below for more: <https://pyautogui.readthedocs.io/en/latest/keyboard.html>
53,440,086
I am trying to right click with the mouse and click "save as Image" in Selenium with Python. I was able to perform the right click with the following method; however, the next action after the right click does not work any more. How can I solve this problem?

```
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver

driver.get(url)

# get the image source
img = driver.find_element_by_xpath('//img')

actionChains = ActionChains(driver)
actionChains.context_click(img).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
2018/11/23
[ "https://Stackoverflow.com/questions/53440086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8263870/" ]
The problem is that the send\_keys() method, after having created the context menu, sends the keys to the window, and not to the menu. So, there is no way to access the menu items.

I had a similar problem with downloading a canvas created in a webpage. Finally, I was able to download the image by executing JavaScript. I created a download element in order to manage the image. As it was a canvas, I previously had to execute the toDataURL method.

Here is my python code:

```
script_js = 'var dataURL = document.getElementsByClassName("_cx6")[0].toDataURL("image/png");' \
            'var link = document.createElement("a"); ' \
            'link.download = "{}_{}_{}";' \
            'link.href = dataURL;' \
            'document.body.appendChild(link);' \
            'link.click();' \
            'document.body.removeChild(link);' \
            'delete link;'.format(
                n,
                prefijo_nombre_archivo,
                sufijo_nombre_archivo
            )

driver.execute_script(script_js)
```

I hope it may help!
You have to first move to the element where you want to perform the context click:

```py
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver

driver.get(url)

# get the image source
img = driver.find_element_by_xpath('//img')

actionChains = ActionChains(driver)
actionChains.move_to_element(img).context_click().send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
32,406,711
I'm trying to write a python script using BeautifulSoup that crawls through a webpage <http://tbc-python.fossee.in/completed-books/> and collects necessary data from it. Basically it has to fetch all the `page loading errors, SyntaxErrors, NameErrors, AttributeErrors, etc` present in the chapters of all the books to a text file `errors.txt`. There are around 273 books. The script written is doing the task well. I am using bandwidth with good speed. But the code takes much time to scrape through all the books. Please help me to optimize the python script with necessary tweaks, maybe use of functions, etc. Thanks ``` import urllib2, urllib from bs4 import BeautifulSoup website = "http://tbc-python.fossee.in/completed-books/" soup = BeautifulSoup(urllib2.urlopen(website)) errors = open('errors.txt','w') # Completed books webpage has data stored in table format BookTable = soup.find('table', {'class': 'table table-bordered table-hover'}) for BookCount, BookRow in enumerate(BookTable.find_all('tr'), start = 1): # Grab book names BookCol = BookRow.find_all('td') BookName = BookCol[1].a.string.strip() print "%d: %s" % (BookCount, BookName) # Open each book BookSrc = BeautifulSoup(urllib2.urlopen('http://tbc-python.fossee.in%s' %(BookCol[1].a.get("href")))) ChapTable = BookSrc.find('table', {'class': 'table table-bordered table-hover'}) # Check if each chapter page opens, if not store book & chapter name in error.txt for ChapRow in ChapTable.find_all('tr'): ChapCol = ChapRow.find_all('td') ChapName = (ChapCol[0].a.string.strip()).encode('ascii', 'ignore') # ignores error : 'ascii' codec can't encode character u'\xef' ChapLink = 'http://tbc-python.fossee.in%s' %(ChapCol[0].a.get("href")) try: ChapSrc = BeautifulSoup(urllib2.urlopen(ChapLink)) except: print '\t%s\n\tPage error' %(ChapName) errors.write("Page; %s;%s;%s;%s" %(BookCount, BookName, ChapName, ChapLink)) continue # Check for errors in chapters and store the errors in error.txt EgError = ChapSrc.find_all('div', 
{'class': 'output_subarea output_text output_error'}) if EgError: for e, i in enumerate(EgError, start=1): errors.write("Example;%s;%s;%s;%s\n" %(BookCount,BookName,ChapName,ChapLink)) if 'ipython-input' or 'Error' in i.pre.get_text() else None print '\t%s\n\tExample errors: %d' %(ChapName, e) errors.close() ```
2015/09/04
[ "https://Stackoverflow.com/questions/32406711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5283513/" ]
Try to change your buttons to:

```
{
  display: "Hello There",
  action: functionA
}
```

And to invoke:

```
btn[i].action();
```

I changed the name `function` to `action` because `function` is a reserved word; although modern JavaScript engines accept reserved words as property names, older ones do not, and `action` avoids the problem entirely.
You can store references to the functions in your array, just lose the `"` signs around their names *(which currently makes them strings instead of function references)*, creating the array like this:

```
var btn = [{
    x: 50,
    y: 100,
    width: 80,
    height: 50,
    display: "Hello There",
    function: functionA
}, {
    x: 150,
    y: 100,
    width: 80,
    height: 50,
    display: "Why Not?",
    function: functionB
}]
```

Then you can call either by writing `btn[i].function()`.
32,406,711
I'm trying to write a python script using BeautifulSoup that crawls through a webpage <http://tbc-python.fossee.in/completed-books/> and collects necessary data from it. Basically it has to fetch all the `page loading errors, SyntaxErrors, NameErrors, AttributeErrors, etc` present in the chapters of all the books to a text file `errors.txt`. There are around 273 books. The script written is doing the task well. I am using bandwidth with good speed. But the code takes much time to scrape through all the books. Please help me to optimize the python script with necessary tweaks, maybe use of functions, etc. Thanks ``` import urllib2, urllib from bs4 import BeautifulSoup website = "http://tbc-python.fossee.in/completed-books/" soup = BeautifulSoup(urllib2.urlopen(website)) errors = open('errors.txt','w') # Completed books webpage has data stored in table format BookTable = soup.find('table', {'class': 'table table-bordered table-hover'}) for BookCount, BookRow in enumerate(BookTable.find_all('tr'), start = 1): # Grab book names BookCol = BookRow.find_all('td') BookName = BookCol[1].a.string.strip() print "%d: %s" % (BookCount, BookName) # Open each book BookSrc = BeautifulSoup(urllib2.urlopen('http://tbc-python.fossee.in%s' %(BookCol[1].a.get("href")))) ChapTable = BookSrc.find('table', {'class': 'table table-bordered table-hover'}) # Check if each chapter page opens, if not store book & chapter name in error.txt for ChapRow in ChapTable.find_all('tr'): ChapCol = ChapRow.find_all('td') ChapName = (ChapCol[0].a.string.strip()).encode('ascii', 'ignore') # ignores error : 'ascii' codec can't encode character u'\xef' ChapLink = 'http://tbc-python.fossee.in%s' %(ChapCol[0].a.get("href")) try: ChapSrc = BeautifulSoup(urllib2.urlopen(ChapLink)) except: print '\t%s\n\tPage error' %(ChapName) errors.write("Page; %s;%s;%s;%s" %(BookCount, BookName, ChapName, ChapLink)) continue # Check for errors in chapters and store the errors in error.txt EgError = ChapSrc.find_all('div', 
{'class': 'output_subarea output_text output_error'}) if EgError: for e, i in enumerate(EgError, start=1): errors.write("Example;%s;%s;%s;%s\n" %(BookCount,BookName,ChapName,ChapLink)) if 'ipython-input' or 'Error' in i.pre.get_text() else None print '\t%s\n\tExample errors: %d' %(ChapName, e) errors.close() ```
2015/09/04
[ "https://Stackoverflow.com/questions/32406711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5283513/" ]
Try to change your buttons to:

```
{
  display: "Hello There",
  action: functionA
}
```

And to invoke:

```
btn[i].action();
```

I changed the name `function` to `action` because `function` is a reserved word; although modern JavaScript engines accept reserved words as property names, older ones do not, and `action` avoids the problem entirely.
Don't put the name of the function in the array, put a reference to the function itself:

```
var btn = [{
    x: 50,
    y: 100,
    width: 80,
    height: 50,
    display: "Hello There",
    'function': functionA
}, {
    x: 150,
    y: 100,
    width: 80,
    height: 50,
    display: "Why Not?",
    'function': functionB
}, {
    x: 250,
    y: 100,
    width: 80,
    height: 50,
    display: "Let's Go!",
    'function': functionC
}];
```

To call the function, you do:

```
btn[i]['function']();
```

I've put `function` in quotes in the literal, and used array notation to access it, because it's a reserved keyword and older engines reject reserved words used as bare property names.
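The same lesson applies in Python as well: store a reference to the callable itself, not a string naming it. A small sketch of the button table with made-up handlers:

```python
def say_hello():
    return "Hello There"

def say_why_not():
    return "Why Not?"

# Each "button" keeps a reference to its handler, not a string naming it,
# so invoking it is a plain call with no name lookup.
buttons = [
    {"display": "Hello There", "action": say_hello},
    {"display": "Why Not?", "action": say_why_not},
]

results = [btn["action"]() for btn in buttons]
```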
44,412,844
I have been trying to use CloudFormation to deploy to API Gateway, however, I constantly run into the same issue with my method resources. The stack deployments keep failing with 'Invalid Resource identifier specified'. Here is my method resource from my CloudFormation template: ``` "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": "UsersResource", "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyLambdaFunc", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "StatusCode": 200 }] } } ``` Is anyone able to help me figure out why this keeps failing the stack deployment? UPDATE: I forgot to mention that I had also tried using references to add the resource ID, that also gave me the same error: ``` "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyLambdaFunc", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "StatusCode": 200 }] } } ``` Here is the full CloudFormation template: ``` { "AWSTemplateFormatVersion": "2010-09-09", "Resources": { "LambdaDynamoDBRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] }] }, "Path": "/", "Policies": [{ "PolicyName": "DynamoReadWritePolicy", "PolicyDocument": { "Version": "2012-10-17", "Statement": [{ "Sid": "1", "Action": [ 
"dynamodb:DeleteItem", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:Scan", "dynamodb:UpdateItem" ], "Effect": "Allow", "Resource": "*" }, { "Sid": "2", "Resource": "*", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Effect": "Allow" }] } }] } }, "MyFirstLambdaFn": { "Type": "AWS::Lambda::Function", "Properties": { "Code": { "S3Bucket": "myfirstlambdafn", "S3Key": "lambda_handler.py.zip" }, "Description": "", "FunctionName": "MyFirstLambdaFn", "Handler": "lambda_function.lambda_handler", "MemorySize": 512, "Role": { "Fn::GetAtt": [ "LambdaDynamoDBRole", "Arn" ] }, "Runtime": "python2.7", "Timeout": 3 }, "DependsOn": "LambdaDynamoDBRole" }, "MySecondLambdaFn": { "Type": "AWS::Lambda::Function", "Properties": { "Code": { "S3Bucket": "mysecondlambdafn", "S3Key": "lambda_handler.py.zip" }, "Description": "", "FunctionName": "MySecondLambdaFn", "Handler": "lambda_function.lambda_handler", "MemorySize": 512, "Role": { "Fn::GetAtt": [ "LambdaDynamoDBRole", "Arn" ] }, "Runtime": "python2.7", "Timeout": 3 }, "DependsOn": "LambdaDynamoDBRole" }, "MyApi": { "Type": "AWS::ApiGateway::RestApi", "Properties": { "Name": "Project Test API", "Description": "Project Test API", "FailOnWarnings": true } }, "FirstUserPropertyModel": { "Type": "AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "FirstUserPropertyModel", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "FirstUserPropertyModel", "type": "object", "properties": { "Email": { "type": "string" } } } } }, "SecondUserPropertyModel": { "Type": "AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "SecondUserPropertyModel", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "SecondUserPropertyModel", "type": "object", "properties": { "Name": { "type": "string" } } } } }, "ErrorCfn": { "Type": 
"AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "ErrorCfn", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "Error Schema", "type": "object", "properties": { "message": { "type": "string" } } } } }, "UsersResource": { "Type": "AWS::ApiGateway::Resource", "Properties": { "RestApiId": { "Ref": "MyApi" }, "ParentId": { "Fn::GetAtt": ["MyApi", "RootResourceId"] }, "PathPart": "users" } }, "UsersPost": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "POST", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyFirstLambdaFn", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "ResponseModels": { "application/json": { "Ref": "FirstUserPropertyModel" } }, "StatusCode": 200 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 404 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 500 }] } }, "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MySecondLambdaFn", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "ResponseModels": { "application/json": { "Ref": "SecondUserPropertyModel" } }, "StatusCode": 200 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 404 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 500 
}] } }, "RestApiDeployment": { "Type": "AWS::ApiGateway::Deployment", "Properties": { "RestApiId": { "Ref": "MyApi" }, "StageName": "Prod" }, "DependsOn": ["UsersPost", "UsersPut"] } }, "Description": "Project description" ``` }
2017/06/07
[ "https://Stackoverflow.com/questions/44412844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3067870/" ]
ResourceId must be a reference to a CloudFormation resource, not a plain string, e.g. ``` ResourceId: Ref: UsersResource ```
When you create an API resource(1), a default root resource(2) for the API is created for path /. In order to get the id for the MyApi resource(1) root resource(2) use: ``` "ResourceId": { "Fn::GetAtt": [ "MyApi", "RootResourceId" ] } ``` (1) The stack resource (2) The API resource
44,412,844
I have been trying to use CloudFormation to deploy to API Gateway, however, I constantly run into the same issue with my method resources. The stack deployments keep failing with 'Invalid Resource identifier specified'. Here is my method resource from my CloudFormation template: ``` "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": "UsersResource", "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyLambdaFunc", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "StatusCode": 200 }] } } ``` Is anyone able to help me figure out why this keeps failing the stack deployment? UPDATE: I forgot to mention that I had also tried using references to add the resource ID, that also gave me the same error: ``` "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyLambdaFunc", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "StatusCode": 200 }] } } ``` Here is the full CloudFormation template: ``` { "AWSTemplateFormatVersion": "2010-09-09", "Resources": { "LambdaDynamoDBRole": { "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com" ] }, "Action": [ "sts:AssumeRole" ] }] }, "Path": "/", "Policies": [{ "PolicyName": "DynamoReadWritePolicy", "PolicyDocument": { "Version": "2012-10-17", "Statement": [{ "Sid": "1", "Action": [ 
"dynamodb:DeleteItem", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query", "dynamodb:Scan", "dynamodb:UpdateItem" ], "Effect": "Allow", "Resource": "*" }, { "Sid": "2", "Resource": "*", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Effect": "Allow" }] } }] } }, "MyFirstLambdaFn": { "Type": "AWS::Lambda::Function", "Properties": { "Code": { "S3Bucket": "myfirstlambdafn", "S3Key": "lambda_handler.py.zip" }, "Description": "", "FunctionName": "MyFirstLambdaFn", "Handler": "lambda_function.lambda_handler", "MemorySize": 512, "Role": { "Fn::GetAtt": [ "LambdaDynamoDBRole", "Arn" ] }, "Runtime": "python2.7", "Timeout": 3 }, "DependsOn": "LambdaDynamoDBRole" }, "MySecondLambdaFn": { "Type": "AWS::Lambda::Function", "Properties": { "Code": { "S3Bucket": "mysecondlambdafn", "S3Key": "lambda_handler.py.zip" }, "Description": "", "FunctionName": "MySecondLambdaFn", "Handler": "lambda_function.lambda_handler", "MemorySize": 512, "Role": { "Fn::GetAtt": [ "LambdaDynamoDBRole", "Arn" ] }, "Runtime": "python2.7", "Timeout": 3 }, "DependsOn": "LambdaDynamoDBRole" }, "MyApi": { "Type": "AWS::ApiGateway::RestApi", "Properties": { "Name": "Project Test API", "Description": "Project Test API", "FailOnWarnings": true } }, "FirstUserPropertyModel": { "Type": "AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "FirstUserPropertyModel", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "FirstUserPropertyModel", "type": "object", "properties": { "Email": { "type": "string" } } } } }, "SecondUserPropertyModel": { "Type": "AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "SecondUserPropertyModel", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "SecondUserPropertyModel", "type": "object", "properties": { "Name": { "type": "string" } } } } }, "ErrorCfn": { "Type": 
"AWS::ApiGateway::Model", "Properties": { "ContentType": "application/json", "Name": "ErrorCfn", "RestApiId": { "Ref": "MyApi" }, "Schema": { "$schema": "http://json-schema.org/draft-04/schema#", "title": "Error Schema", "type": "object", "properties": { "message": { "type": "string" } } } } }, "UsersResource": { "Type": "AWS::ApiGateway::Resource", "Properties": { "RestApiId": { "Ref": "MyApi" }, "ParentId": { "Fn::GetAtt": ["MyApi", "RootResourceId"] }, "PathPart": "users" } }, "UsersPost": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "POST", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MyFirstLambdaFn", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "ResponseModels": { "application/json": { "Ref": "FirstUserPropertyModel" } }, "StatusCode": 200 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 404 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 500 }] } }, "UsersPut": { "Type": "AWS::ApiGateway::Method", "Properties": { "ResourceId": { "Ref": "UsersResource" }, "RestApiId": "MyApi", "ApiKeyRequired": true, "AuthorizationType": "NONE", "HttpMethod": "PUT", "Integration": { "Type": "AWS_PROXY", "IntegrationHttpMethod": "POST", "Uri": { "Fn::Join": ["", ["arn:aws:apigateway:", { "Ref": "AWS::Region" }, ":lambda:path/2015-03-31/functions/", { "Fn::GetAtt": ["MySecondLambdaFn", "Arn"] }, "/invocations"]] } }, "MethodResponses": [{ "ResponseModels": { "application/json": { "Ref": "SecondUserPropertyModel" } }, "StatusCode": 200 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 404 }, { "ResponseModels": { "application/json": { "Ref": "ErrorCfn" } }, "StatusCode": 500 
}] } }, "RestApiDeployment": { "Type": "AWS::ApiGateway::Deployment", "Properties": { "RestApiId": { "Ref": "MyApi" }, "StageName": "Prod" }, "DependsOn": ["UsersPost", "UsersPut"] } }, "Description": "Project description" ``` }
2017/06/07
[ "https://Stackoverflow.com/questions/44412844", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3067870/" ]
I figured out that it was actually the RestApiId which needed to be a reference too: ``` "RestApiId": { "Ref": "MyApi" }, ```
When you create an API resource(1), a default root resource(2) for the API is created for path /. In order to get the id for the MyApi resource(1) root resource(2) use: ``` "ResourceId": { "Fn::GetAtt": [ "MyApi", "RootResourceId" ] } ``` (1) The stack resource (2) The API resource
18,150,518
So I have a python script which generates an image, and saves over the old image which used to be the background image. I tried to make it run using `crontab`, but couldn't get that to work, so now I just have a bash script which runs once in my `.bashrc` when I first log in (I have an `if [ firstRun ]` kind of thing in there). The problem is that every now and then, when the background updates, it flashes black before it does - which is not very nice! I currently have it running once a second, but I don't think it's the python that is causing the black screens, and more the way the image is changed over... Is there a way I can prevent these ugly black screens between updates? Here's all the code to run it, if you want to try it out... ``` from PIL import Image, ImageDraw, ImageFilter import colorsys from random import gauss xSize, ySize = 1600,900 im = Image.new('RGBA', (xSize, ySize), (0, 0, 0, 0)) draw = ImageDraw.Draw(im) class Cube(object): def __init__(self): self.tl = (0,0) self.tm = (0,0) self.tr = (0,0) self.tb = (0,0) self.bl = (0,0) self.bm = (0,0) self.br = (0,0) def intify(self): for prop in [self.tl, self.tm, self.tr, self.tb, self.bl, self.bm, self.br]: prop = [int(i) for i in prop] def drawCube((x,y), size, colour=(255,0,0)): p = Cube() colours = [list(colorsys.rgb_to_hls(*[c/255.0 for c in colour])) for _ in range(3)] colours[0][1] -= 0 colours[1][1] -= 0.2 colours[2][1] -= 0.4 colours = [tuple([int(i*255) for i in colorsys.hls_to_rgb(*colour)]) for colour in colours] p.tl = x,y #Top Left p.tm = x+size/2, y-size/4 #Top Middle p.tr = x+size, y #Top Right p.tb = x+size/2, y+size/4 #Top Bottom p.bl = x, y+size/2 #Bottom Left p.bm = x+size/2, y+size*3/4 #Bottom Middle p.br = x+size, y+size/2 #Bottom Right p.intify() draw.polygon((p.tl, p.tm, p.tr, p.tb), fill=colours[0]) draw.polygon((p.tl, p.bl, p.bm, p.tb), fill=colours[1]) draw.polygon((p.tb, p.tr, p.br, p.bm), fill=colours[2]) lineColour = (0,0,0) lineThickness = 2 draw.line((p.tl, p.tm), 
fill=lineColour, width=lineThickness) draw.line((p.tl, p.tb), fill=lineColour, width=lineThickness) draw.line((p.tm, p.tr), fill=lineColour, width=lineThickness) draw.line((p.tb, p.tr), fill=lineColour, width=lineThickness) draw.line((p.tl, p.bl), fill=lineColour, width=lineThickness) draw.line((p.tb, p.bm), fill=lineColour, width=lineThickness) draw.line((p.tr, p.br), fill=lineColour, width=lineThickness) draw.line((p.bl, p.bm), fill=lineColour, width=lineThickness) draw.line((p.bm, p.br), fill=lineColour, width=lineThickness) # -------- Actually do the drawing size = 100 #Read in file of all colours, and random walk them with open("/home/will/Documents/python/cubeWall/oldColours.dat") as coloursFile: for line in coloursFile: oldColours = [int(i) for i in line.split()] oldColours = [int(round(c + gauss(0,1.5)))%255 for c in oldColours] colours = [[ int(c*255) for c in colorsys.hsv_to_rgb(i/255.0, 1, 1)] for i in oldColours] with open("/home/will/Documents/python/cubeWall/oldColours.dat", "w") as coloursFile: coloursFile.write(" ".join([str(i) for i in oldColours]) + "\n") for i in range(xSize/size+2): for j in range(2*ySize/size+2): if j%3 == 0: drawCube((i*size,j*size/2), size, colour=colours[(i+j)%3]) elif j%3 == 1: drawCube(((i-0.5)*size,(0.5*j+0.25)*size), size, colour=colours[(i+j)%3]) im2 = im.filter(ImageFilter.SMOOTH) im2.save("cubes.png") #im2.show() ``` and then just run this: ``` #!/bin/sh while [ 1 ] do python drawCubes.py sleep 1 done ``` And set the desktop image to be `cubes.png`
2013/08/09
[ "https://Stackoverflow.com/questions/18150518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/432913/" ]
Well, you can change the current wallpaper (in Gnome 3 compatible desktops) by running ``` import os os.system("gsettings set org.gnome.desktop.background picture-uri file://%(path)s" % {'path':absolute_path}) os.system("gsettings set org.gnome.desktop.background picture-options wallpaper") ```
If you're using MATE, you're using a fork of Gnome 2.x. The method I found for Gnome 2 is: `gconftool-2 --type string --set /desktop/gnome/background/picture_filename <absolute image path>`. The method we tried before would have worked in Gnome Shell.
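A likely cause of the black flash in the question is the desktop re-reading `cubes.png` while it is only half-written. Writing to a temporary file in the same directory and renaming it into place is atomic on POSIX, so the compositor never sees a partial image. A minimal sketch (`save_atomically` is a hypothetical helper, not part of the original script):

```python
import os
import tempfile

def save_atomically(image, dest="cubes.png"):
    """Save a PIL-style image to dest without the file ever being half-written."""
    # Create the temp file in the same directory so os.replace stays on one
    # filesystem (a cross-device rename would not be atomic).
    directory = os.path.dirname(os.path.abspath(dest))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".png")
    os.close(fd)
    image.save(tmp)       # write the full image to the temp path first
    os.replace(tmp, dest) # atomic rename: readers see old or new, never partial
```

Replacing `im2.save("cubes.png")` with a call like this should remove the window in which the background file is incomplete.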
64,963,033
I am using the below code to execute a Python script every 5 minutes, but each subsequent run does not execute at the exact time. For example, if I execute it at exactly 9:00:00 AM, the next run executes at 9:05:25 AM and the one after at 9:10:45 AM. As I run the Python script every 5 minutes for a long time, it is not able to record at the exact time. ``` import schedule import time from datetime import datetime # Functions setup def geeks(): print("Shaurya says Geeksforgeeks") now = datetime.now() current_time = now.strftime("%H:%M:%S") print("Current Time =", current_time) # Task scheduling # After every 2 minutes geeks() is called. schedule.every(2).minutes.do(geeks) # Loop so that the scheduling task # keeps on running all time. while True: # Checks whether a scheduled task # is pending to run or not schedule.run_pending() time.sleep(1) ``` Is there any easy fix for this so that the script runs at exactly 5-minute intervals? Please don't suggest crontab; I have tried crontab and it is not working for me, as I am running the Python script on different OSes.
2020/11/23
[ "https://Stackoverflow.com/questions/64963033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14642703/" ]
You can use the `ObservableList.contains` method to quickly check for an existing occurrence of the item: ``` public void diplaysubjects() { String item = select_subject.getSelectionModel().getSelectedItem(); ObservableList<String> courses = course_list.getItems(); // Only here to clarify the code if (!courses.contains(item)) courses.add(item); } ``` I don't know which pattern you used for your code, but if it is MVC-like, I suggest working directly with the model's observable list instead of calling `course_list.getItems()`.
I suggest using an `ObservableList` with the `ListView`, like this: ``` public class Controller implements Initializable { @FXML private ComboBox<String> select_subject; @FXML private ListView<String> course_list; private ObservableList<String> list = FXCollections.observableArrayList(); @Override public void initialize(URL url, ResourceBundle resourceBundle) { course_list.setItems(list); } public void diplaysubjects() { String course = select_subject.getSelectionModel().getSelectedItem(); // Instead of adding directly to the ListView, add to the ObservableList // course_list.getItems().add(course.toString()); list.add(course); } // To check whether the list contains the item private boolean doesExists(String element) { return list.contains(element); } } ```
64,963,033
I am using the below code to execute a Python script every 5 minutes, but each subsequent run does not execute at the exact time. For example, if I execute it at exactly 9:00:00 AM, the next run executes at 9:05:25 AM and the one after at 9:10:45 AM. As I run the Python script every 5 minutes for a long time, it is not able to record at the exact time. ``` import schedule import time from datetime import datetime # Functions setup def geeks(): print("Shaurya says Geeksforgeeks") now = datetime.now() current_time = now.strftime("%H:%M:%S") print("Current Time =", current_time) # Task scheduling # After every 2 minutes geeks() is called. schedule.every(2).minutes.do(geeks) # Loop so that the scheduling task # keeps on running all time. while True: # Checks whether a scheduled task # is pending to run or not schedule.run_pending() time.sleep(1) ``` Is there any easy fix for this so that the script runs at exactly 5-minute intervals? Please don't suggest crontab; I have tried crontab and it is not working for me, as I am running the Python script on different OSes.
2020/11/23
[ "https://Stackoverflow.com/questions/64963033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14642703/" ]
You can use the `ObservableList.contains` method to quickly check for an existing occurrence of the item: ``` public void diplaysubjects() { String item = select_subject.getSelectionModel().getSelectedItem(); ObservableList<String> courses = course_list.getItems(); // Only here to clarify the code if (!courses.contains(item)) courses.add(item); } ``` I don't know which pattern you used for your code, but if it is MVC-like, I suggest working directly with the model's observable list instead of calling `course_list.getItems()`.
You can keep an `ArrayList` of the strings already added and only add to the `ListView` when it does not yet contain the selection. Note that the list must be a field rather than a local variable, otherwise it is empty on every call and the check never fires: ```java @FXML private ComboBox<String> select_subject; @FXML private ListView<String> course_list; private final ArrayList<String> s = new ArrayList<>(); public void diplaysubjects() { String course = select_subject.getSelectionModel().getSelectedItem(); if (!s.contains(course)) { s.add(course); course_list.getItems().add(course); } } ```
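The timing drift described in the question above accumulates because each cycle sleeps a fixed interval *after* the work finishes. Sleeping until the next interval boundary instead keeps runs on exact 5-minute marks. A minimal stdlib sketch (an illustration, not part of the original answers or the `schedule` library's API):

```python
import time
from typing import Optional

INTERVAL = 300  # 5 minutes, in seconds

def seconds_until_boundary(interval: int = INTERVAL,
                           now: Optional[float] = None) -> float:
    """Seconds to sleep so the next run lands exactly on an interval boundary."""
    if now is None:
        now = time.time()
    # Distance from `now` to the next multiple of `interval`.
    return interval - (now % interval)

# Main-loop sketch: do the work, then sleep to the *next* boundary,
# so time spent in the work itself never shifts later runs.
# while True:
#     geeks()
#     time.sleep(seconds_until_boundary())
```

Because the sleep is recomputed from the clock each cycle, a run that starts late or takes a few seconds does not push the following runs off the 9:00:00 / 9:05:00 / 9:10:00 grid.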
58,040,556
I have a Pandas DataFrame with date columns. The data is imported from a csv file. When I try to fit the regression model, I get the error `ValueError: could not convert string to float: '2019-08-30 07:51:21`. . How can I get rid of it? Here is dataframe. **source.csv** ``` event_id tsm_id rssi_ts rssi batl batl_ts ts_diff 0 417736018 4317714 2019-09-05 20:00:07 140 100.0 2019-09-05 18:11:49 01:48:18 1 417735986 4317714 2019-09-05 20:00:07 132 100.0 2019-09-05 18:11:49 01:48:18 2 418039386 4317714 2019-09-06 01:00:08 142 100.0 2019-09-06 00:11:50 00:48:18 3 418039385 4317714 2019-09-06 01:00:08 122 100.0 2019-09-06 00:11:50 00:48:18 4 420388010 4317714 2019-09-07 15:31:07 143 100.0 2019-09-07 12:11:50 03:19:17 ``` Here is my code: ``` model = pd.read_csv("source.csv") model.describe() event_id tsm_id. rssi batl count 5.000000e+03 5.000000e+03 5000.000000 3784.000000 mean 3.982413e+08 4.313492e+06 168.417200 94.364429 std 2.200899e+07 2.143570e+03 35.319516 13.609917 min 3.443084e+08 4.310312e+06 0.000000 16.000000 25% 3.852882e+08 4.310315e+06 144.000000 97.000000 50% 4.007999e+08 4.314806e+06 170.000000 100.000000 75% 4.171803e+08 4.314815e+06 195.000000 100.000000 max 4.258451e+08 4.317714e+06 242.000000 100.000000 labels_b = np.array(model['batl']) features_r= model.drop('batl', axis = 1) features_r = np.array(features_r) from sklearn.model_selection import train_test_split train_features, test_features, train_labels, test_labels = train_test_split(features_r, labels_b, test_size = 0.25, random_state = 42) from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor(n_estimators = 1000, random_state = 42) rf.fit(train_features, train_labels); ``` **Here is error msg:** ``` ValueError Traceback (most recent call last) <ipython-input-28-bc774a9d8239> in <module> 4 rf = RandomForestRegressor(n_estimators = 1000, random_state = 42) 5 # Train the model on training data ----> 6 rf.fit(train_features, train_labels); 
~/ml/env/lib/python3.7/site-packages/sklearn/ensemble/forest.py in fit(self, X, y, sample_weight) 247 248 # Validate or convert input data --> 249 X = check_array(X, accept_sparse="csc", dtype=DTYPE) 250 y = check_array(y, accept_sparse='csc', ensure_2d=False, dtype=None) 251 if sample_weight is not None: ~/ml/env/lib/python3.7/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 494 try: 495 warnings.simplefilter('error', ComplexWarning) --> 496 array = np.asarray(array, dtype=dtype, order=order) 497 except ComplexWarning: 498 raise ValueError("Complex data not supported\n" ~/ml/env/lib/python3.7/site-packages/numpy/core/numeric.py in asarray(a, dtype, order) 536 537 """ --> 538 return array(a, dtype, copy=False, order=order) 539 540 ValueError: could not convert string to float: '2019-08-30 07:51:21' ```
2019/09/21
[ "https://Stackoverflow.com/questions/58040556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/451435/" ]
The last entry in your list is missing the item property which should be the URL for the page it references. I suspect the last one is the page itself, which is not needed in the list anyhow.
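Assuming the list in question is schema.org `BreadcrumbList` markup (the response does not name the format explicitly), a sketch of building a list where every `ListItem` carries the `item` URL, expressed in Python:

```python
import json

def breadcrumb_list(crumbs):
    """Build a schema.org BreadcrumbList dict; every ListItem gets an item URL."""
    return {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            # position is 1-based per the schema.org examples
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(crumbs, start=1)
        ],
    }

# Serialize to JSON-LD for embedding in a page (URLs here are placeholders)
markup = json.dumps(breadcrumb_list([("Home", "https://example.com/"),
                                     ("Blog", "https://example.com/blog")]))
```

As the response notes, the entry for the current page itself can simply be left out of `crumbs` rather than included without an `item`.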
Just for the record, we found the same error message which seemed to be because the referred URL was not available to the tool (it was inside our company network but not publicly available). Pointing to a public valid URL fixed the error message on our side.
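For the `ValueError` in the question itself, scikit-learn cannot consume datetime strings as features; the usual fix is to convert the timestamp columns to numbers (or drop them) before fitting. A minimal stdlib sketch (an illustration, not taken from the original answers):

```python
from datetime import datetime

def to_epoch(ts: str) -> float:
    """Convert a 'YYYY-MM-DD HH:MM:SS' string (as in rssi_ts/batl_ts) to a float."""
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").timestamp()

# Applied per column before building the feature array, e.g.
# model["rssi_ts"] = model["rssi_ts"].map(to_epoch)
```

Once every column of `features_r` is numeric, `np.array(features_r)` yields a float array that `RandomForestRegressor.fit` accepts.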
30,400,777
I'm trying to install Pylzma via pip on python 2.7.9 and I'm getting the following error: ``` C:\Python27\Scripts>pip.exe install pylzma Downloading/unpacking pylzma Running setup.py (path:c:\users\username\appdata\local\temp\pip_build_username\pylzma\setup.py) egg_info for package pylzma warning: no files found matching '*.py' under directory 'test' warning: no files found matching '*.7z' under directory 'test' no previously-included directories found matching 'src\sdk.orig' Installing collected packages: pylzma Running setup.py install for pylzma adding support for multithreaded compression building 'pylzma' extension C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' pylzma.c src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *' src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3 src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant' src/pylzma/pylzma.c(284) : error C2059: syntax error : ')' error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile: running install running build running 
build_py creating build creating build\lib.win32-2.7 copying py7zlib.py -> build\lib.win32-2.7 running build_ext adding support for multithreaded compression building 'pylzma' extension creating build\temp.win32-2.7 creating build\temp.win32-2.7\Release creating build\temp.win32-2.7\Release\src creating build\temp.win32-2.7\Release\src\pylzma creating build\temp.win32-2.7\Release\src\sdk creating build\temp.win32-2.7\Release\src\7zip creating build\temp.win32-2.7\Release\src\7zip\C creating build\temp.win32-2.7\Release\src\compat C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' pylzma.c src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *' src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3 src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant' src/pylzma/pylzma.c(284) : error C2059: syntax error : ')' error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 ---------------------------------------- Cleaning up... 
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma Storing debug log for failure in C:\Users\username\pip\pip.log ``` Here is the debug log: ``` ------------------------------------------------------------ C:\Python27\Scripts\pip run on 05/22/15 09:32:07 Downloading/unpacking pylzma Getting page https://pypi.python.org/simple/pylzma/ URLs to search for versions for pylzma: * https://pypi.python.org/simple/pylzma/ Analyzing links from page https://pypi.python.org/simple/pylzma/ Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.3.0-py2.3-win32.egg#md5=68b539bc322e44e5a087c79c25d82543 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.3.0.win32-py2.3.exe#md5=cbbaf0541e32c8d1394eea89ce3910b7 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.1-py2.3-win32.egg#md5=03829ce881b4627d6ded08c138cc8997 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.2-py2.3-win32.egg#md5=1ae4940ad183f220e5102e32a7f5b496 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.4-py2.3-win32.egg#md5=26849b5afede8a44117e6b6cb0e4fc4d (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link 
https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.3.0-py2.4-win32.egg#md5=221208a0e4e9bcbffbb2c0ce80eafc11 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.3.0.win32-py2.4.exe#md5=7152a76c28905ada5290c8e6c459d715 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.1-py2.4-win32.egg#md5=c773b74772799b8cc021ea8e7249db46 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.2-py2.4-win32.egg#md5=bf837af2374358f167008585c19d2f26 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.4-py2.4-win32.egg#md5=9a657211e107da0261ed7a2f029566c4 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.3.0-py2.5-win32.egg#md5=911d4e0b3cbf27c8e62abea1b6ded60e (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.3.0.win32-py2.5.exe#md5=bc1c3d4a402984056acf85a251ba347c (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.1-py2.5-win32.egg#md5=429f2087bf14390191faf6d85292186c (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.2-py2.5-win32.egg#md5=bf8036d15fd61d6a47bb1caf0df45e69 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.4-py2.5-win32.egg#md5=3c8f6361bee16292fdbfda70f1dc0006 (from https://pypi.python.org/simple/pylzma/); unknown archive 
format: .egg Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.1-py2.6-win32.egg#md5=4248c0e618532f137860b021e6915b32 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.2-py2.6-win32.egg#md5=2c5f136a75b3c114a042f5f61bdd5d8a (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.4-py2.6-win32.egg#md5=8c7ae08bafbfcfd9ecbdffe9e4c9c6c5 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Skipping link https://pypi.python.org/packages/2.7/p/pylzma/pylzma-0.4.4-py2.7-win32.egg#md5=caee91027d5c005b012e2132e434f425 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.3.0.tar.gz#md5=7ab1a1706cf3e19f2d10579d795babf7 (from https://pypi.python.org/simple/pylzma/), version: 0.3.0 Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.1.tar.gz#md5=b64557e8c4bcd0973f037bb4ddc413c6 (from https://pypi.python.org/simple/pylzma/), version: 0.4.1 Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.2.tar.gz#md5=ab37d6ce2374f4308447bff963ae25ef (from https://pypi.python.org/simple/pylzma/), version: 0.4.2 Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.3.tar.gz#md5=e53d40599ca2b039dedade6069724b7b (from https://pypi.python.org/simple/pylzma/), version: 0.4.3 Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.4.tar.gz#md5=a2be89cb2288174ebb18bec68fa559fb (from https://pypi.python.org/simple/pylzma/), version: 0.4.4 Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.5.tar.gz#md5=4fda4666c60faa9a092524fdda0e2f98 (from https://pypi.python.org/simple/pylzma/), version: 0.4.5 Found link 
https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.6.tar.gz#md5=140038c8c187770eecfe7041b34ec9b9 (from https://pypi.python.org/simple/pylzma/), version: 0.4.6 Using version 0.4.6 (newest of versions: 0.4.6, 0.4.5, 0.4.4, 0.4.3, 0.4.2, 0.4.1, 0.3.0) Downloading from URL https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.6.tar.gz#md5=140038c8c187770eecfe7041b34ec9b9 (from https://pypi.python.org/simple/pylzma/) Running setup.py (path:c:\users\username\appdata\local\temp\pip_build_username\pylzma\setup.py) egg_info for package pylzma running egg_info creating pip-egg-info\pylzma.egg-info writing requirements to pip-egg-info\pylzma.egg-info\requires.txt writing pip-egg-info\pylzma.egg-info\PKG-INFO writing top-level names to pip-egg-info\pylzma.egg-info\top_level.txt writing dependency_links to pip-egg-info\pylzma.egg-info\dependency_links.txt writing manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.py' under directory 'test' warning: no files found matching '*.7z' under directory 'test' no previously-included directories found matching 'src\sdk.orig' writing manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt' Source in c:\users\username\appdata\local\temp\pip_build_username\pylzma has version 0.4.6, which satisfies requirement pylzma Installing collected packages: pylzma Running setup.py install for pylzma Running command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile running install running build running 
build_py creating build creating build\lib.win32-2.7 copying py7zlib.py -> build\lib.win32-2.7 running build_ext adding support for multithreaded compression building 'pylzma' extension creating build\temp.win32-2.7 creating build\temp.win32-2.7\Release creating build\temp.win32-2.7\Release\src creating build\temp.win32-2.7\Release\src\pylzma creating build\temp.win32-2.7\Release\src\sdk creating build\temp.win32-2.7\Release\src\7zip creating build\temp.win32-2.7\Release\src\7zip\C creating build\temp.win32-2.7\Release\src\compat C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' pylzma.c src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *' src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3 src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant' src/pylzma/pylzma.c(284) : error C2059: syntax error : ')' error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib.win32-2.7 copying py7zlib.py -> 
build\lib.win32-2.7 running build_ext adding support for multithreaded compression building 'pylzma' extension creating build\temp.win32-2.7 creating build\temp.win32-2.7\Release creating build\temp.win32-2.7\Release\src creating build\temp.win32-2.7\Release\src\pylzma creating build\temp.win32-2.7\Release\src\sdk creating build\temp.win32-2.7\Release\src\7zip creating build\temp.win32-2.7\Release\src\7zip\C creating build\temp.win32-2.7\Release\src\compat C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT cl : Command line warning D9025 : overriding '/MD' with '/MT' pylzma.c src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *' src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3 src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant' src/pylzma/pylzma.c(284) : error C2059: syntax error : ')' error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 ---------------------------------------- Cleaning up... Removing temporary dir c:\users\username\appdata\local\temp\pip_build_username... 
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma Exception information: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\basecommand.py", line 122, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\commands\install.py", line 283, in run requirement_set.install(install_options, global_options, root=options.root_path) File "C:\Python27\lib\site-packages\pip\req.py", line 1435, in install requirement.install(install_options, global_options, *args, **kwargs) File "C:\Python27\lib\site-packages\pip\req.py", line 706, in install cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File "C:\Python27\lib\site-packages\pip\util.py", line 697, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma ``` I've made sure I'm running the prompt as admin, I've rebooted and I've googled and I can't find anything. Any suggestions?
2015/05/22
[ "https://Stackoverflow.com/questions/30400777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4929649/" ]
The first step is to find out where your python actually is. You can do it with the `which` or `where` command (`which` for Unix, `where` for Windows). Once you have this information you will know what is actually executed as the "python" command. Then you need to change it: on Windows (I believe) you need to change the PATH variable in such a way that your python 3.4 will be found earlier than 2.6. On Unix you need to either do the same or re-link it through your package manager.
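The lookup described above can be sketched in a few lines — a minimal emulation of what `which`/`where` do, assuming a POSIX-style executable check (the function name `first_on_path` is just for illustration):

```python
import os

def first_on_path(cmd, path=None):
    """Return the first PATH entry containing an executable `cmd`, or None.

    This mimics `which` (Unix) / `where` (Windows): directories are scanned
    in PATH order, so whichever python's directory appears first wins.
    """
    dirs = path if path is not None else os.environ.get("PATH", "")
    for d in dirs.split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

Moving the directory of your Python 3.4 install in front of the 2.6 one in PATH changes which candidate this scan — and the shell's own lookup — finds first.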
You need to use `python3` to use python 3.4. For example, to know version of Python use: ``` python3 -V ``` This will use python 3.4 to interpret your program or you can use the [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) to make it executable. The first line of your program should be: ``` #!/usr/bin/env python3 ``` If you want python3 to be used when you type `python` on the terminal, you can use an alias. To add a new alias, open your `~/.bash_aliases` file using `gedit ~/.bash_aliases` and type the following: ``` alias python=python3 ``` and then save and exit and type ``` source ~/.bash_aliases ``` and then you can type ``` python -V ``` to use python3 as your default python interpreter.
34,643,747
**Is there a way to use and plot with opencv2 with ipython notebook?** I am fairly new to python image analysis. I decided to go with the notebook workflow to make a nice record as I process, and it has been working out quite well using matplotlib/pylab to plot things. An initial hurdle I had was how to plot things within the notebook. Easy, just use magic: ``` %matplotlib inline ``` Later, I wanted to perform manipulations with interactive plots but plotting in a dedicated window would always freeze. Fine, I learnt again that you need to use magic. Instead of just importing the modules: ``` %pylab ``` Now I have moved on to working with opencv. I am now back to the same problem, where I either want to plot inline or use dedicated, interactive windows depending on the task at hand. Is there similar magic to use? Is there another way to get things working? Or am I stuck and need to just go back to running a program from IDLE? As a side note: I know that opencv has installed correctly. Firstly, because I got no errors either installing or importing the cv2 module. Secondly, because I can read in images with cv2 and then plot them with something else.
2016/01/06
[ "https://Stackoverflow.com/questions/34643747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5754595/" ]
This is my empty template: ``` import cv2 import matplotlib.pyplot as plt import numpy as np import sys %matplotlib inline im = cv2.imread('IMG_FILENAME',0) h,w = im.shape[:2] print(im.shape) plt.imshow(im,cmap='gray') plt.show() ``` [See online sample](https://colab.research.google.com/drive/1WbOfcIwShtxaw7-Ig5YppNINWgLucQ4i#scrollTo=vy7Be3RMreWG)
There is also that little function that was used into the Google Deepdream Notebook: ```python import cv2 import numpy as np from IPython.display import clear_output, Image, display from cStringIO import StringIO import PIL.Image def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) ``` Then you can do : ```python img = cv2.imread("an_image.jpg") ``` And simply : ```python showarray(img) ``` Each time you need to render the image in a cell
34,643,747
**Is there a way to use and plot with opencv2 with ipython notebook?** I am fairly new to python image analysis. I decided to go with the notebook workflow to make a nice record as I process, and it has been working out quite well using matplotlib/pylab to plot things. An initial hurdle I had was how to plot things within the notebook. Easy, just use magic: ``` %matplotlib inline ``` Later, I wanted to perform manipulations with interactive plots but plotting in a dedicated window would always freeze. Fine, I learnt again that you need to use magic. Instead of just importing the modules: ``` %pylab ``` Now I have moved on to working with opencv. I am now back to the same problem, where I either want to plot inline or use dedicated, interactive windows depending on the task at hand. Is there similar magic to use? Is there another way to get things working? Or am I stuck and need to just go back to running a program from IDLE? As a side note: I know that opencv has installed correctly. Firstly, because I got no errors either installing or importing the cv2 module. Secondly, because I can read in images with cv2 and then plot them with something else.
2016/01/06
[ "https://Stackoverflow.com/questions/34643747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5754595/" ]
This is my empty template: ``` import cv2 import matplotlib.pyplot as plt import numpy as np import sys %matplotlib inline im = cv2.imread('IMG_FILENAME',0) h,w = im.shape[:2] print(im.shape) plt.imshow(im,cmap='gray') plt.show() ``` [See online sample](https://colab.research.google.com/drive/1WbOfcIwShtxaw7-Ig5YppNINWgLucQ4i#scrollTo=vy7Be3RMreWG)
For a Jupyter notebook running on Python 3.5 I had to modify this to: ``` import io import cv2 import numpy as np from IPython.display import clear_output, Image, display import PIL.Image def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = io.BytesIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) ```
34,643,747
**Is there a way to use and plot with opencv2 with ipython notebook?** I am fairly new to python image analysis. I decided to go with the notebook workflow to make a nice record as I process, and it has been working out quite well using matplotlib/pylab to plot things. An initial hurdle I had was how to plot things within the notebook. Easy, just use magic: ``` %matplotlib inline ``` Later, I wanted to perform manipulations with interactive plots but plotting in a dedicated window would always freeze. Fine, I learnt again that you need to use magic. Instead of just importing the modules: ``` %pylab ``` Now I have moved on to working with opencv. I am now back to the same problem, where I either want to plot inline or use dedicated, interactive windows depending on the task at hand. Is there similar magic to use? Is there another way to get things working? Or am I stuck and need to just go back to running a program from IDLE? As a side note: I know that opencv has installed correctly. Firstly, because I got no errors either installing or importing the cv2 module. Secondly, because I can read in images with cv2 and then plot them with something else.
2016/01/06
[ "https://Stackoverflow.com/questions/34643747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5754595/" ]
For a Jupyter notebook running on Python 3.5 I had to modify this to: ``` import io import cv2 import numpy as np from IPython.display import clear_output, Image, display import PIL.Image def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = io.BytesIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) ```
There is also that little function that was used into the Google Deepdream Notebook: ```python import cv2 import numpy as np from IPython.display import clear_output, Image, display from cStringIO import StringIO import PIL.Image def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) ``` Then you can do : ```python img = cv2.imread("an_image.jpg") ``` And simply : ```python showarray(img) ``` Each time you need to render the image in a cell
68,003,878
I am new to python and would like to ask for a solution related to dividing 2 rows in a data set that contains 25000 rows. It is easier to understand by looking at my screenshot. Thanks for the help! [![enter image description here](https://i.stack.imgur.com/upP3d.png)](https://i.stack.imgur.com/upP3d.png)
2021/06/16
[ "https://Stackoverflow.com/questions/68003878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16243841/" ]
Looks like your dataframe has a [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html). Let's take the first four rows as an example. It could be problematic to let one of the row index levels have the same name (`loan_default`) as the column, so I'd change the column name to `count`: ```py import pandas as pd df = pd.DataFrame({(1954, 0): [9], (1954, 1): [1], (1955, 0): [91], (1955, 1): [15]}).T df.columns = ['count'] print(df) ``` ``` count 1954 0 9 1 1 1955 0 91 1 15 ``` You can select all rows where a certain level of the MultiIndex has a certain value with [`df.xs()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html). This will give you two sub-series that you can divide by each other, which will be done element-wise: ```py defaulted = df.xs(1, level=1) not_defaulted = df.xs(0, level=1) odds = defaulted / not_defaulted odds.columns = ['defaulting_odds'] print(odds) ``` ``` defaulting_odds 1954 0.111111 1955 0.164835 ``` Note that this produces the odds of a loan defaulting for each year. If you would rather have the probabilities, you have to change the denominator. To get the percentage, just multiply by 100: ```py prob = defaulted / (defaulted + not_defaulted) prob.columns = ['defaulting_probability'] prob['defaulting_percent'] = prob.defaulting_probability * 100 print(prob) ``` ``` defaulting_probability defaulting_percent 1954 0.100000 10.000000 1955 0.141509 14.150943 ```
You can try dividing with `shift` and groups. ``` import pandas as pd df = pd.DataFrame() df['year_of_birth'] = ['1954','1954','1955','1955', '1956', '1956'] df['loan_default'] = [9, 1, 91, 15, 194, 32] ``` (The `loan_default` values need to be numeric, not strings, for the division to work.) Calculate the ratio: ``` df['percentage'] = df['loan_default'].div(df.groupby('year_of_birth')['loan_default'].shift(1)) ``` Drop the NaNs: ``` df = df.dropna(subset=['percentage']) ``` Convert ratio into a percentage: ``` df['percentage'] = df['percentage']*100 ```
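Putting the steps above together into one runnable sketch (the sample numbers are taken from the question's screenshot; `percentage` is an assumed column name):

```python
import pandas as pd

# Build the sample frame with numeric loan_default so the division works.
df = pd.DataFrame({
    "year_of_birth": ["1954", "1954", "1955", "1955", "1956", "1956"],
    "loan_default": [9, 1, 91, 15, 194, 32],
})

# Divide each row by the previous row within the same year, then scale to %.
df["percentage"] = df["loan_default"].div(
    df.groupby("year_of_birth")["loan_default"].shift(1)
) * 100

# The first row of each year has no predecessor, so it becomes NaN and is dropped.
df = df.dropna(subset=["percentage"])
print(df)  # 1954 -> ~11.11, 1955 -> ~16.48, 1956 -> ~16.49
```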
45,194,587
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here. Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question: ``` In [370]: x = 1.2 In [371]: divmod(x, 1) Out[371]: (1.0, 0.19999999999999996) In [372]: math.modf(x) Out[372]: (0.19999999999999996, 1.0) In [373]: x - int(x) Out[373]: 0.19999999999999996 In [374]: x - int(str(x).split('.')[0]) Out[374]: 0.19999999999999996 ``` Nothing I try gives me exactly `1` and `0.2`. Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation? I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this. Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
2017/07/19
[ "https://Stackoverflow.com/questions/45194587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4909087/" ]
Solution -------- It may seem like a hack, but you could separate the string form (actually repr) and convert it back to ints and floats: ``` In [1]: x = 1.2 In [2]: s = repr(x) In [3]: p, q = s.split('.') In [4]: int(p) Out[4]: 1 In [5]: float('.' + q) Out[5]: 0.2 ``` How it works ------------ The reason for approaching it this way is that the [internal algorithm](https://bugs.python.org/issue1580) for displaying `1.2` is very sophisticated (a fast variant of [David Gay's algorithm](http://www.ampl.com/REFS/abstracts.html#rounding)). It works hard to show the shortest of the possible representations of numbers that cannot be represented exactly. By splitting the *repr* form, you're taking advantage of that algorithm. Internally, the value entered as `1.2` is stored as the binary fraction, `5404319552844595 / 4503599627370496` which is actually equal to `1.1999999999999999555910790149937383830547332763671875`. The Gay algorithm is used to display this as the string `1.2`. The *split* then reliably extracts the integer portion. ``` In [6]: from decimal import Decimal In [7]: Decimal(1.2) Out[7]: Decimal('1.1999999999999999555910790149937383830547332763671875') In [8]: (1.2).as_integer_ratio() Out[8]: (5404319552844595, 4503599627370496) ``` Rationale and problem analysis ------------------------------ As stated, your problem roughly translates to "I want to split the integral and fractional parts of the number as it appears visually rather than according to how it is actually stored". Framed that way, it is clear that the solution involves parsing how it is displayed visually. While it may feel like a hack, this is the most direct way to take advantage of the very sophisticated display algorithms and actually match what you see. This way may be the only *reliable* way to match what you see unless you manually reproduce the internal display algorithms. 
Failure of alternatives ----------------------- If you want to stay in the realm of integers, you could try rounding and subtraction but that would give you an unexpected value for the floating point portion: ``` In [9]: round(x) Out[9]: 1.0 In [10]: x - round(x) Out[10]: 0.19999999999999996 ```
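As a sketch, the repr-splitting above can be wrapped into a small helper (the name `split_visual` is made up here; note it assumes a non-negative float whose repr contains a plain `'.'` — negative numbers and exponent forms like `1e-07` would need extra handling):

```python
def split_visual(x):
    """Split a float into (integer part, fractional part) as displayed.

    Relies on repr() producing the shortest decimal form, e.g. '1.2',
    so the pieces match what you see rather than the stored binary value.
    Assumes a non-negative float whose repr contains a '.' (no exponent).
    """
    p, q = repr(float(x)).split('.')
    return int(p), float('.' + q)
```

`split_visual(1.2)` then returns `(1, 0.2)` — the same two pieces shown in the session above.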
You could try converting 1.2 to string, splitting on the '.' and then converting the two strings ("1" and "2") back to the format you want. Additionally padding the second portion with a '0.' will give you a nice format.
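A minimal sketch of that idea, returning both pieces as strings (the function name is illustrative, and it assumes the number's `str()` form contains a single `'.'`):

```python
def split_as_strings(x):
    """Split a number's displayed form into ('1', '0.2')-style string parts."""
    whole, frac = str(x).split('.')
    return whole, '0.' + frac  # pad the fractional part with '0.'
```

From here you can convert either piece back to `int`/`float` as needed, e.g. `split_as_strings(1.2)` gives `('1', '0.2')`.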
45,194,587
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here. Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question: ``` In [370]: x = 1.2 In [371]: divmod(x, 1) Out[371]: (1.0, 0.19999999999999996) In [372]: math.modf(x) Out[372]: (0.19999999999999996, 1.0) In [373]: x - int(x) Out[373]: 0.19999999999999996 In [374]: x - int(str(x).split('.')[0]) Out[374]: 0.19999999999999996 ``` Nothing I try gives me exactly `1` and `0.2`. Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation? I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this. Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
2017/07/19
[ "https://Stackoverflow.com/questions/45194587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4909087/" ]
Solution -------- It may seem like a hack, but you could separate the string form (actually repr) and convert it back to ints and floats: ``` In [1]: x = 1.2 In [2]: s = repr(x) In [3]: p, q = s.split('.') In [4]: int(p) Out[4]: 1 In [5]: float('.' + q) Out[5]: 0.2 ``` How it works ------------ The reason for approaching it this way is that the [internal algorithm](https://bugs.python.org/issue1580) for displaying `1.2` is very sophisticated (a fast variant of [David Gay's algorithm](http://www.ampl.com/REFS/abstracts.html#rounding)). It works hard to show the shortest of the possible representations of numbers that cannot be represented exactly. By splitting the *repr* form, you're taking advantage of that algorithm. Internally, the value entered as `1.2` is stored as the binary fraction, `5404319552844595 / 4503599627370496` which is actually equal to `1.1999999999999999555910790149937383830547332763671875`. The Gay algorithm is used to display this as the string `1.2`. The *split* then reliably extracts the integer portion. ``` In [6]: from decimal import Decimal In [7]: Decimal(1.2) Out[7]: Decimal('1.1999999999999999555910790149937383830547332763671875') In [8]: (1.2).as_integer_ratio() Out[8]: (5404319552844595, 4503599627370496) ``` Rationale and problem analysis ------------------------------ As stated, your problem roughly translates to "I want to split the integral and fractional parts of the number as it appears visually rather than according to how it is actually stored". Framed that way, it is clear that the solution involves parsing how it is displayed visually. While it may feel like a hack, this is the most direct way to take advantage of the very sophisticated display algorithms and actually match what you see. This way may be the only *reliable* way to match what you see unless you manually reproduce the internal display algorithms. 
Failure of alternatives ----------------------- If you want to stay in the realm of integers, you could try rounding and subtraction but that would give you an unexpected value for the floating point portion: ``` In [9]: round(x) Out[9]: 1.0 In [10]: x - round(x) Out[10]: 0.19999999999999996 ```
So I just did the following in a python terminal and it seemed to work properly... ``` x=1.2 s=str(x).split('.') i=int(s[0]) d=int(s[1])/10**len(s[1]) ``` (Dividing by `10**len(s[1])` instead of a fixed `10` keeps this correct when there is more than one decimal digit.)
45,194,587
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here. Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question: ``` In [370]: x = 1.2 In [371]: divmod(x, 1) Out[371]: (1.0, 0.19999999999999996) In [372]: math.modf(x) Out[372]: (0.19999999999999996, 1.0) In [373]: x - int(x) Out[373]: 0.19999999999999996 In [374]: x - int(str(x).split('.')[0]) Out[374]: 0.19999999999999996 ``` Nothing I try gives me exactly `1` and `0.2`. Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation? I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this. Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
2017/07/19
[ "https://Stackoverflow.com/questions/45194587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4909087/" ]
Solution -------- It may seem like a hack, but you could separate the string form (actually repr) and convert it back to ints and floats: ``` In [1]: x = 1.2 In [2]: s = repr(x) In [3]: p, q = s.split('.') In [4]: int(p) Out[4]: 1 In [5]: float('.' + q) Out[5]: 0.2 ``` How it works ------------ The reason for approaching it this way is that the [internal algorithm](https://bugs.python.org/issue1580) for displaying `1.2` is very sophisticated (a fast variant of [David Gay's algorithm](http://www.ampl.com/REFS/abstracts.html#rounding)). It works hard to show the shortest of the possible representations of numbers that cannot be represented exactly. By splitting the *repr* form, you're taking advantage of that algorithm. Internally, the value entered as `1.2` is stored as the binary fraction, `5404319552844595 / 4503599627370496` which is actually equal to `1.1999999999999999555910790149937383830547332763671875`. The Gay algorithm is used to display this as the string `1.2`. The *split* then reliably extracts the integer portion. ``` In [6]: from decimal import Decimal In [7]: Decimal(1.2) Out[7]: Decimal('1.1999999999999999555910790149937383830547332763671875') In [8]: (1.2).as_integer_ratio() Out[8]: (5404319552844595, 4503599627370496) ``` Rationale and problem analysis ------------------------------ As stated, your problem roughly translates to "I want to split the integral and fractional parts of the number as it appears visually rather than according to how it is actually stored". Framed that way, it is clear that the solution involves parsing how it is displayed visually. While it may feel like a hack, this is the most direct way to take advantage of the very sophisticated display algorithms and actually match what you see. This way may be the only *reliable* way to match what you see unless you manually reproduce the internal display algorithms. 
Failure of alternatives ----------------------- If you want to stay in the realm of integers, you could try rounding and subtraction but that would give you an unexpected value for the floating point portion: ``` In [9]: round(x) Out[9]: 1.0 In [10]: x - round(x) Out[10]: 0.19999999999999996 ```
Here is a solution without string manipulation (`frac_digits` is the count of decimal digits that you can guarantee the fractional part of your numbers will fit into): ``` >>> def integer_and_fraction(x, frac_digits=3): ... i = int(x) ... c = 10**frac_digits ... f = round(x*c-i*c)/c ... return (i, f) ... >>> integer_and_fraction(1.2) (1, 0.2) >>> integer_and_fraction(1.2, 1) (1, 0.2) >>> integer_and_fraction(1.2, 2) (1, 0.2) >>> integer_and_fraction(1.2, 5) (1, 0.2) >>> ```
45,194,587
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here. Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question: ``` In [370]: x = 1.2 In [371]: divmod(x, 1) Out[371]: (1.0, 0.19999999999999996) In [372]: math.modf(x) Out[372]: (0.19999999999999996, 1.0) In [373]: x - int(x) Out[373]: 0.19999999999999996 In [374]: x - int(str(x).split('.')[0]) Out[374]: 0.19999999999999996 ``` Nothing I try gives me exactly `1` and `0.2`. Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation? I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this. Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
2017/07/19
[ "https://Stackoverflow.com/questions/45194587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4909087/" ]
Here is a solution without string manipulation (`frac_digits` is the count of decimal digits that you can guarantee the fractional part of your numbers will fit into): ``` >>> def integer_and_fraction(x, frac_digits=3): ... i = int(x) ... c = 10**frac_digits ... f = round(x*c-i*c)/c ... return (i, f) ... >>> integer_and_fraction(1.2) (1, 0.2) >>> integer_and_fraction(1.2, 1) (1, 0.2) >>> integer_and_fraction(1.2, 2) (1, 0.2) >>> integer_and_fraction(1.2, 5) (1, 0.2) >>> ```
You could try converting 1.2 to string, splitting on the '.' and then converting the two strings ("1" and "2") back to the format you want. Additionally padding the second portion with a '0.' will give you a nice format.
45,194,587
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here. Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question: ``` In [370]: x = 1.2 In [371]: divmod(x, 1) Out[371]: (1.0, 0.19999999999999996) In [372]: math.modf(x) Out[372]: (0.19999999999999996, 1.0) In [373]: x - int(x) Out[373]: 0.19999999999999996 In [374]: x - int(str(x).split('.')[0]) Out[374]: 0.19999999999999996 ``` Nothing I try gives me exactly `1` and `0.2`. Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation? I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this. Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
2017/07/19
[ "https://Stackoverflow.com/questions/45194587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4909087/" ]
Here is a solution without string manipulation (`frac_digits` is the count of decimal digits that you can guarantee the fractional part of your numbers will fit into): ``` >>> def integer_and_fraction(x, frac_digits=3): ... i = int(x) ... c = 10**frac_digits ... f = round(x*c-i*c)/c ... return (i, f) ... >>> integer_and_fraction(1.2) (1, 0.2) >>> integer_and_fraction(1.2, 1) (1, 0.2) >>> integer_and_fraction(1.2, 2) (1, 0.2) >>> integer_and_fraction(1.2, 5) (1, 0.2) >>> ```
So I just did the following in a python terminal and it seemed to work properly... ``` x=1.2 s=str(x).split('.') i=int(s[0]) d=int(s[1])/10**len(s[1]) ``` (Dividing by `10**len(s[1])` instead of a fixed `10` keeps this correct when there is more than one decimal digit.)
9,509,096
How do you load a Django fixture so that models referenced via natural keys don't conflict with pre-existing records? I'm trying to load such a fixture, but I'm getting IntegrityErrors from my MySQL backend, complaining about Django trying to insert duplicate records, which doesn't make any sense. As I understand Django's natural key feature, in order to fully support dumpdata and loaddata usage, you need to define a `natural_key` method in the model, and a `get_by_natural_key` method in the model's manager. So, for example, I have two models: ``` class PersonManager(models.Manager): def get_by_natural_key(self, name): return self.get(name=name) class Person(models.Model): objects = PersonManager() name = models.CharField(max_length=255, unique=True) def natural_key(self): return (self.name,) class BookManager(models.Manager): def get_by_natural_key(self, title, *person_key): person = Person.objects.get_by_natural_key(*person_key) return self.get(title=title, person=person) class Book(models.Model): objects = BookManager() author = models.ForeignKey(Person) title = models.CharField(max_length=255) def natural_key(self): return (self.title,) + self.author.natural_key() natural_key.dependencies = ['myapp.Person'] ``` My test database already contains a sample Person and Book record, which I used to generate the fixture: ``` [ { "pk": null, "model": "myapp.person", "fields": { "name": "bob" } }, { "pk": null, "model": "myapp.book", "fields": { "author": [ "bob" ], "title": "bob's book", } } ] ``` I want to be able to take this fixture and load it into any instance of my database to recreate the records, regardless of whether or not they already exist in the database. However, when I run `python manage.py loaddata myfixture.json` I get the error: ``` IntegrityError: (1062, "Duplicate entry '1-1' for key 'myapp_person_name_uniq'") ``` Why is Django attempting to re-create the Person record instead of reusing the one that's already there?
2012/03/01
[ "https://Stackoverflow.com/questions/9509096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247542/" ]
Turns out the solution requires a very minor patch to Django's `loaddata` command. Since it's unlikely the Django devs would accept such a patch, I've [forked it](https://raw.githubusercontent.com/chrisspen/django-admin-steroids/master/admin_steroids/management/commands/loaddatanaturally.py) in my package of various Django admin related enhancements. The key code change (lines 189-201 of `loaddatanaturally.py`) simply involves calling `get_natural_key()` to find any existing pk inside the loop that iterates over the deserialized objects.
Actually, loaddata is not supposed to work with existing data in the database; it is normally used for the initial load of models. Look at this question for another way of doing it: [Import data into Django model with existing data?](https://stackoverflow.com/questions/5940294/import-data-into-django-model-with-existing-data)
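Both answers above hinge on the same idea: look records up by their natural key and reuse them instead of re-inserting duplicates. A Django-free sketch of that upsert logic, using a hypothetical in-memory "table" of dicts (the helper name and structure are illustrative, not Django API):

```python
def upsert_by_natural_key(table, record, key_fields):
    # `table` is a list of dicts; reuse an existing row whose natural
    # key matches instead of inserting a duplicate.
    key = tuple(record[f] for f in key_fields)
    for row in table:
        if tuple(row[f] for f in key_fields) == key:
            row.update(record)   # refresh the existing row
            return row
    table.append(record)         # no match: insert as new
    return record

people = [{"name": "bob"}]
upsert_by_natural_key(people, {"name": "bob"}, ["name"])
print(len(people))  # -> 1 (no duplicate inserted)
```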
2,217,258
I'm looking for an easy, cross-platform way to join path, directory, and file names into a complete path in C++. I know Python has `os.path.join()` and MATLAB has `fullfile()`. Does Qt have something similar? `QFileInfo` doesn't seem to be able to do this.
2010/02/07
[ "https://Stackoverflow.com/questions/2217258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9611/" ]
[QDir](http://qt-project.org/doc/qt-5.0/qtcore/qdir.html) has `absoluteFilePath` and `relativeFilePath` to combine a path with a file name.
Offhand, I'm not sure about Qt, but Boost has a `filesystem` class that handles things like this. This has the advantage that it has been accepted as a proposal for TR2. That means it has a pretty good chance of becoming part of the C++ standard library (though probably with some minor modifications here or there).
3,108,951
I need to write a Python script that launches a shell script and imports the environment variables AFTER the script is completed. Imagine you have a shell script "a.sh": ``` export MYVAR="test" ``` In Python I would like to do something like: ``` import os env={} os.spawnlpe(os.P_WAIT,'sh', 'sh', 'a.sh',env) print env ``` and get: ``` {'MYVAR': 'test'} ``` Is that possible?
2010/06/24
[ "https://Stackoverflow.com/questions/3108951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/375112/" ]
Nope, any changes made to environment variables in a subprocess stay in that subprocess. (As far as I know, that is) When the subprocess terminates, its environment is lost. I'd suggest getting the shell script to print its environment, or at least the variables you care about, to its standard output (or standard error, or it could write them to a file), and you can read that output from Python.
I agree with David's post. Perl has a [Shell::Source](http://search.cpan.org/~pjcj/Shell-Source-0.01/Source.pm) module which does this. It works by running the script you want in a subprocess appended with an `env` which produces a list of variable value pairs separated by an `=` symbol. You can parse this and "import" the environment into your process. The module is worth looking at if you need this kind of behaviour.
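Both answers above suggest the same workaround: have the shell print its environment and parse it from Python. A minimal sketch of that idea (assuming a POSIX `sh` and simple single-line `VAR=value` output; multi-line values would need a more robust parser):

```python
import subprocess

def source_env(script_path):
    # Source the script in a subshell so its exports take effect there,
    # then print the resulting environment on stdout with `env`.
    out = subprocess.check_output(['sh', '-c', '. "%s"; env' % script_path])
    env = {}
    for line in out.decode().splitlines():
        key, sep, value = line.partition('=')
        if sep:  # skip any line without an '=' (e.g. wrapped values)
            env[key] = value
    return env
```

For the `a.sh` above, `source_env('a.sh').get('MYVAR')` should come back as `'test'`.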
29,548,982
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame` and got this error: ``` Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame (downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, content-type: text/html; charset=utf-8); cannot detect archive format Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build ``` If anyone has any solutions, please feel free to share them! I also tried `pip install --allow-unverified`, but that gave me an error as well.
2015/04/09
[ "https://Stackoverflow.com/questions/29548982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4529330/" ]
This is the only method that works for me. ``` pip install pygame==1.9.1release --allow-external pygame --allow-unverified pygame ``` -- These are the steps that led me to this command (I put them so people find it easily): ``` $ pip install pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pygame to allow). No distributions at all found for pygame ``` Then, as suggested, I allow external: ``` $ pip install pygame --allow-external pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some insecure and unverifiable files were ignored (use --allow-unverified pygame to allow). No distributions at all found for pygame ``` So I also allow unverifiable: ``` $ pip install pygame --allow-external pygame --allow-unverified pygame Collecting pygame pygame is potentially insecure and unverifiable. HTTP error 400 while getting http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) Could not install requirement pygame because of error 400 Client Error: Bad Request Could not install requirement pygame because of HTTP error 400 Client Error: Bad Request for URL http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) ``` So, after a visit to <http://www.pygame.org/download.shtml>, I thought about adding the version number (1.9.1release is the currently stable one). -- Hope it helps.
I realized that the compatible Pygame version was simply corrupted or broken. Therefore I had to install a previous version of Python to run Pygame. That is actually fine, as most modules aren't updated to be compatible with Python 3.4 yet, so it only gives me more options.
29,548,982
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame` and got this error: ``` Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame (downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, content-type: text/html; charset=utf-8); cannot detect archive format Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build ``` If anyone has any solutions, please feel free to share them! I also tried `pip install --allow-unverified`, but that gave me an error as well.
2015/04/09
[ "https://Stackoverflow.com/questions/29548982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4529330/" ]
This is the only method that works for me. ``` pip install pygame==1.9.1release --allow-external pygame --allow-unverified pygame ``` -- These are the steps that led me to this command (I put them so people find it easily): ``` $ pip install pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pygame to allow). No distributions at all found for pygame ``` Then, as suggested, I allow external: ``` $ pip install pygame --allow-external pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some insecure and unverifiable files were ignored (use --allow-unverified pygame to allow). No distributions at all found for pygame ``` So I also allow unverifiable: ``` $ pip install pygame --allow-external pygame --allow-unverified pygame Collecting pygame pygame is potentially insecure and unverifiable. HTTP error 400 while getting http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) Could not install requirement pygame because of error 400 Client Error: Bad Request Could not install requirement pygame because of HTTP error 400 Client Error: Bad Request for URL http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) ``` So, after a visit to <http://www.pygame.org/download.shtml>, I thought about adding the version number (1.9.1release is the currently stable one). -- Hope it helps.
If you run into trouble when installing pygame with an error about missing Visual Studio 10+, I have the answer: The problem is not about having Visual Studio or not, because I tried many versions and it did not work. The problem is the file format: `tar.gz` versus `.whl`. So, this is the solution: 1) Download the file: > > <http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame> > > > Go here and download your pygame version; note whether you need x64 or x86 and your Python version. My system is x64 and Python is 3.4, so I choose: `pygame-1.9.2a0-cp34-none-win_amd64.whl` 2) Put it somewhere to install from: I put it in "C:", so open cmd and type: cd C:\ (this changes the location to C:) 3) Install ``` pip install C:\pygame-1.9.2a0-cp34-none-win_amd64.whl ``` Done!
29,548,982
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame` and got this error: ``` Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame (downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, content-type: text/html; charset=utf-8); cannot detect archive format Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build ``` If anyone has any solutions, please feel free to share them! I also tried `pip install --allow-unverified`, but that gave me an error as well.
2015/04/09
[ "https://Stackoverflow.com/questions/29548982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4529330/" ]
This is the only method that works for me. ``` pip install pygame==1.9.1release --allow-external pygame --allow-unverified pygame ``` -- These are the steps that led me to this command (I put them so people find it easily): ``` $ pip install pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pygame to allow). No distributions at all found for pygame ``` Then, as suggested, I allow external: ``` $ pip install pygame --allow-external pygame Collecting pygame Could not find any downloads that satisfy the requirement pygame Some insecure and unverifiable files were ignored (use --allow-unverified pygame to allow). No distributions at all found for pygame ``` So I also allow unverifiable: ``` $ pip install pygame --allow-external pygame --allow-unverified pygame Collecting pygame pygame is potentially insecure and unverifiable. HTTP error 400 while getting http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) Could not install requirement pygame because of error 400 Client Error: Bad Request Could not install requirement pygame because of HTTP error 400 Client Error: Bad Request for URL http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml) ``` So, after a visit to <http://www.pygame.org/download.shtml>, I thought about adding the version number (1.9.1release is the currently stable one). -- Hope it helps.
I myself got this error (while using v3.10.0), then I uninstalled the latest version and installed an older version of Python (v3.9.7), and it fixed the issue. Hope that works for you too. Honestly, there are not many changes between releases unless they jump from Python 3 to Python 4, so you don't need to update Python just for the sake of having the latest version.
29,548,982
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame` and got this error: ``` Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame (downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, content-type: text/html; charset=utf-8); cannot detect archive format Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build ``` If anyone has any solutions, please feel free to share them! I also tried `pip install --allow-unverified`, but that gave me an error as well.
2015/04/09
[ "https://Stackoverflow.com/questions/29548982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4529330/" ]
If you run into trouble when installing pygame with an error about missing Visual Studio 10+, I have the answer: The problem is not about having Visual Studio or not, because I tried many versions and it did not work. The problem is the file format: `tar.gz` versus `.whl`. So, this is the solution: 1) Download the file: > > <http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame> > > > Go here and download your pygame version; note whether you need x64 or x86 and your Python version. My system is x64 and Python is 3.4, so I choose: `pygame-1.9.2a0-cp34-none-win_amd64.whl` 2) Put it somewhere to install from: I put it in "C:", so open cmd and type: cd C:\ (this changes the location to C:) 3) Install ``` pip install C:\pygame-1.9.2a0-cp34-none-win_amd64.whl ``` Done!
I realized that the compatible Pygame version was simply corrupted or broken. Therefore I had to install a previous version of Python to run Pygame. That is actually fine, as most modules aren't updated to be compatible with Python 3.4 yet, so it only gives me more options.
29,548,982
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame` and got this error: ``` Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame (downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, content-type: text/html; charset=utf-8); cannot detect archive format Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build ``` If anyone has any solutions, please feel free to share them! I also tried `pip install --allow-unverified`, but that gave me an error as well.
2015/04/09
[ "https://Stackoverflow.com/questions/29548982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4529330/" ]
If you run into trouble when installing pygame with an error about missing Visual Studio 10+, I have the answer: The problem is not about having Visual Studio or not, because I tried many versions and it did not work. The problem is the file format: `tar.gz` versus `.whl`. So, this is the solution: 1) Download the file: > > <http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame> > > > Go here and download your pygame version; note whether you need x64 or x86 and your Python version. My system is x64 and Python is 3.4, so I choose: `pygame-1.9.2a0-cp34-none-win_amd64.whl` 2) Put it somewhere to install from: I put it in "C:", so open cmd and type: cd C:\ (this changes the location to C:) 3) Install ``` pip install C:\pygame-1.9.2a0-cp34-none-win_amd64.whl ``` Done!
I myself got this error (while using v3.10.0), then I uninstalled the latest version and installed an older version of Python (v3.9.7), and it fixed the issue. Hope that works for you too. Honestly, there are not many changes between releases unless they jump from Python 3 to Python 4, so you don't need to update Python just for the sake of having the latest version.
29,205,752
I'm trying to produce a simple Fibonacci algorithm with Cython. I have fib.pyx: ``` def fib(int n): cdef int i cdef double a=0.0, b=1.0 for i in range(n): a, b = a + b, a return a ``` and setup.py in the same folder: ``` from distutils.core import setup from Cython.Build import cythonize setup(ext_modules=cythonize('fib.pyx')) ``` Then I open cmd and cd my way to this folder and build the code with (I have [this compiler](http://www.microsoft.com/en-us/download/details.aspx?id=44266)): ``` python setup.py build ``` Which produces this result: ``` C:\Users\MyUserName\Documents\Python Scripts\Cython>python setup.py build Compiling fib.pyx because it changed. Cythonizing fib.pyx running build running build_ext building 'fib' extension creating build creating build\temp.win-amd64-2.7 creating build\temp.win-amd64-2.7\Release C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -mdll -O -Wall -IC:\Anaconda\include -IC:\Anaconda\PC -c fib.c -o build\temp.win-amd64-2.7\Release\fib.o writing build\temp.win-amd64-2.7\Release\fib.def creating build\lib.win-amd64-2.7 C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -shared -s build\temp.win-amd64-2.7\Release\fib.o build\temp.win-amd64-2.7\Release\fib.def -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lpython27 -lmsvcr90 -o build\lib.win-amd64-2.7\fib.pyd ``` So it seems the compiling worked and I should be able to import this module with ``` import fib ImportError: No module named fib ``` Where did I go wrong? Edit: ``` os.getcwd() Out[6]: 'C:\\Users\\MyUserName\\Documents\\Python Scripts\\Cython\\build\\temp.win-amd64-2.7\\Release' In [7]: import fib Traceback (most recent call last): File "<ipython-input-7-6c0ab2f0a4e0>", line 1, in <module> import fib ImportError: No module named fib ```
2015/03/23
[ "https://Stackoverflow.com/questions/29205752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4657326/" ]
Java implementation to remove all change notification registrations from the database ``` Statement stmt= conn.createStatement(); ResultSet rs = stmt.executeQuery("select regid,callback from USER_CHANGE_NOTIFICATION_REGS"); while(rs.next()) { long regid = rs.getLong(1); String callback = rs.getString(2); ((OracleConnection)conn).unregisterDatabaseChangeNotification(regid,callback); } rs.close(); stmt.close(); ``` You need to have ojdbc6/7.jar in class path to execute this code. Original post:<https://community.oracle.com/message/9315024#9315024>
You can just revoke the change notification privilege from the current user and grant it again. I know this isn't the best solution, but it works.
29,205,752
I'm trying to produce a simple Fibonacci algorithm with Cython. I have fib.pyx: ``` def fib(int n): cdef int i cdef double a=0.0, b=1.0 for i in range(n): a, b = a + b, a return a ``` and setup.py in the same folder: ``` from distutils.core import setup from Cython.Build import cythonize setup(ext_modules=cythonize('fib.pyx')) ``` Then I open cmd and cd my way to this folder and build the code with (I have [this compiler](http://www.microsoft.com/en-us/download/details.aspx?id=44266)): ``` python setup.py build ``` Which produces this result: ``` C:\Users\MyUserName\Documents\Python Scripts\Cython>python setup.py build Compiling fib.pyx because it changed. Cythonizing fib.pyx running build running build_ext building 'fib' extension creating build creating build\temp.win-amd64-2.7 creating build\temp.win-amd64-2.7\Release C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -mdll -O -Wall -IC:\Anaconda\include -IC:\Anaconda\PC -c fib.c -o build\temp.win-amd64-2.7\Release\fib.o writing build\temp.win-amd64-2.7\Release\fib.def creating build\lib.win-amd64-2.7 C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -shared -s build\temp.win-amd64-2.7\Release\fib.o build\temp.win-amd64-2.7\Release\fib.def -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lpython27 -lmsvcr90 -o build\lib.win-amd64-2.7\fib.pyd ``` So it seems the compiling worked and I should be able to import this module with ``` import fib ImportError: No module named fib ``` Where did I go wrong? Edit: ``` os.getcwd() Out[6]: 'C:\\Users\\MyUserName\\Documents\\Python Scripts\\Cython\\build\\temp.win-amd64-2.7\\Release' In [7]: import fib Traceback (most recent call last): File "<ipython-input-7-6c0ab2f0a4e0>", line 1, in <module> import fib ImportError: No module named fib ```
2015/03/23
[ "https://Stackoverflow.com/questions/29205752", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4657326/" ]
Although this is a rather old question, I will describe my experience with Oracle CQN in case it helps someone. The feature works better with Java, where it is easy not only to register but also to unregister the notification. In .NET, if the application crashes, there is in my experience no way to unregister the notification with code. Revoking change notification does not take effect immediately: until a database restart, the registration survived the revoke. It seems that Oracle removes the registration when there is a problem in communication with the notification receiver. I was able to unregister notifications using this behavior, for example by turning on the firewall! Another solution I use to unregister the notifications for a particular Oracle user is a tool I wrote in Java named NotificationRegistrationsCleaner.jar. It can be downloaded from the following link. We call it passing 4 parameters like this: java -jar NotificationRegistrationsCleaner.jar [oracle ip] [oracle service] [oracle user] [oracle password] The tool displays the removed registrations. Far from perfect, but it does the job. The Java code is very similar to the @TMtech code described above. [NotificationRegistrationsCleaner.jar](https://drive.google.com/file/d/1ClM-dWhsaEfNScDWXstp2WnhskEQL8Xy/view?usp=sharing)
You can just revoke the change notification privilege from the current user and grant it again. I know this isn't the best solution, but it works.
70,161,899
I'm trying to build drake from source on Ubuntu 20.04 by following instructions from [here](https://drake.mit.edu/from_source.html). I already checked that my system meets all the requirements, and was able to successfully run the mandatory platform-specific setup script (and it completed saying: 'install\_prereqs: success'). However, when I try to run cmake to build the python bindings, I'm confronted with the following error: ``` CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message): Could NOT find Python (missing: Python_NumPy_INCLUDE_DIRS NumPy) (found suitable exact version "3.8.10") Call Stack (most recent call first): /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:393 (_FPHSA_FAILURE_MESSAGE) /usr/share/cmake-3.16/Modules/FindPython/Support.cmake:2214 (find_package_handle_standard_args) /usr/share/cmake-3.16/Modules/FindPython.cmake:304 (include) CMakeLists.txt:240 (find_package) -- Configuring incomplete, errors occurred! ``` I can't seem to think of any reason why this is happening (I made sure to remove conda from my PATH variable following the note [here](https://drake.mit.edu/python_bindings.html#installation)). Any help around this issue is much appreciated! EDIT: Want to mention that I'm trying to install Drake from [this PR](https://github.com/RobotLocomotion/drake/pull/16147) that includes a feature I need access to.
2021/11/29
[ "https://Stackoverflow.com/questions/70161899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3598807/" ]
On another tack, you could try to temporarily work around the problem by doing (in Drake) a `bazel run //:install -- /path/to/somewhere` to install Drake, and thus skipping the CMake stuff that seems to be the problem here.
Here is some diagnostic output from my Ubuntu 20.04 system. Can you run the same, and check to see if anything looks different? ``` jwnimmer@call-cps:~$ which python /usr/bin/python jwnimmer@call-cps:~$ which python3 /usr/bin/python3 jwnimmer@call-cps:~$ file /usr/bin/python3 /usr/bin/python3: symbolic link to python3.8 jwnimmer@call-cps:~$ dpkg -l python3-numpy Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-==============-=================-============-============================================ ii python3-numpy 1:1.17.4-5ubuntu3 amd64 Fast array facility to the Python 3 language jwnimmer@call-cps:~$ ls -l /usr/include/python3.8/numpy lrwxrwxrwx 1 root root 56 Feb 18 2020 /usr/include/python3.8/numpy -> ../../lib/python3/dist-packages/numpy/core/include/numpy jwnimmer@call-cps:~$ ls -l /usr/lib/python3/dist-packages/numpy/core/include/numpy | head total 388 -rw-r--r-- 1 root root 164 Jun 15 2019 arrayobject.h -rw-r--r-- 1 root root 3509 Jun 15 2019 arrayscalars.h -rw-r--r-- 1 root root 1878 Aug 30 2019 halffloat.h -rw-r--r-- 1 root root 61098 Feb 18 2020 __multiarray_api.h -rw-r--r-- 1 root root 56456 Feb 18 2020 multiarray_api.txt -rw-r--r-- 1 root root 11496 Oct 15 2019 ndarrayobject.h -rw-r--r-- 1 root root 65018 Nov 8 2019 ndarraytypes.h -rw-r--r-- 1 root root 1861 Jun 15 2019 _neighborhood_iterator_imp.h -rw-r--r-- 1 root root 6786 Aug 30 2019 noprefix.h ```
70,332,071
I'm trying to get a preprocessing function to work with the Dataset map, but I get the following error (full stack trace at the bottom): ``` ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you. ``` Below is a full snippet that reproduces the issue. My question is: why does it work in one use case (crop only), but fail when RandomFlip is used? How can this be fixed? ```py import functools import numpy as np import tensorflow as tf def data_gen(): for i in range(10): x = np.random.random(size=(80, 80, 3)) * 255 # rgb image x = x.astype('uint8') y = np.random.random(size=(40, 40, 1)) * 255 # downsized mono image y = y.astype('uint8') yield x, y def preprocess(image, label, cropped_image_size, cropped_label_size, skip_augmentations=False): x = image y = label x_size = cropped_image_size y_size = cropped_label_size if not skip_augmentations: x = tf.keras.layers.RandomFlip(mode="horizontal")(x) y = tf.keras.layers.RandomFlip(mode="horizontal")(y) x = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(x) y = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(y) x = tf.keras.layers.CenterCrop(x_size, x_size)(x) y = tf.keras.layers.CenterCrop(y_size, y_size)(y) return x, y print(tf.__version__) # 2.6.0 dataset = tf.data.Dataset.from_generator(data_gen, output_signature=( tf.TensorSpec(shape=(80, 80, 3), dtype='uint8'), tf.TensorSpec(shape=(40, 40, 1), dtype='uint8') )) crop_only_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=True) train_preprocess_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=False) # This works crop_dataset = dataset.map(crop_only_fn) # This fails: ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable train_dataset = dataset.map(train_preprocess_fn) ``` Full-stack trace: ``` Traceback (most recent call last): File "./issue_dataaug.py", line 50, in <module> train_dataset = dataset.map(train_preprocess_fn) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1861, in map return MapDataset(self, map_func, preserve_cardinality=True) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4985, in __init__ use_legacy_function=use_legacy_function) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4218, in __init__ self._function = fn_factory() File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3151, in get_concrete_function *args, **kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3116, in _get_concrete_function_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3308, in _create_graph_function capture_by_value=self._capture_by_value), File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4195, in wrapped_fn ret = wrapper_helper(*args) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4125, in wrapper_helper ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper raise e.ag_error_metadata.to_exception(e) ValueError: in user code: ./issue_dataaug.py:25 preprocess * x = tf.keras.layers.RandomFlip(mode="horizontal")(x) /...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:414 __init__ ** self._rng = make_generator(self.seed) /...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:1375 make_generator return tf.random.Generator.from_non_deterministic_state() /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:396 from_non_deterministic_state return cls(state=state, alg=alg) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:476 __init__ trainable=False) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:489 _create_variable return variables.Variable(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:268 __call__ return cls._variable_v2_call(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:262 _variable_v2_call shape=shape) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:243 <lambda> previous_getter = lambda **kws: default_variable_creator_v2(None, **kws) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py:2675 default_variable_creator_v2 shape=shape) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:270 __call__ return super(VariableMetaclass, cls).__call__(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1613 __init__ distribute_strategy=distribute_strategy) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1695 _init_from_args raise ValueError("Tensor-typed variable initializers must either be " ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you. ```
2021/12/13
[ "https://Stackoverflow.com/questions/70332071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1213694/" ]
found the solution. ``` private var isLoading = true override fun onCreate(savedInstanceState: Bundle?) { val splashScreen = installSplashScreen() splashScreen.setKeepVisibleCondition { isLoading } } private fun doApiCalls(){ ... isLoading = false } ```
@sujith's answer did not work for me for some reason. I added a method in my viewModel like this: ``` fun isDataReady(): Boolean { return isDataReady.value?:false } ``` and used ``` splashScreen.setKeepVisibleCondition { !viewModel.isDataReady() } ``` This worked for me. Maybe someone can explain to me why sujith's answer was not working for me (it was hiding the splash screen after some time), because I know both of us are essentially doing the same thing.
70,332,071
I'm trying to get a preprocessing function to work with the Dataset map, but I get the following error (full stack trace at the bottom): ``` ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you. ``` Below is a full snippet that reproduces the issue. My question is, why in one use case (crop only) it works, and when RandomFlip is used it doesn't? How can this be fixed? ```py import functools import numpy as np import tensorflow as tf def data_gen(): for i in range(10): x = np.random.random(size=(80, 80, 3)) * 255 # rgb image x = x.astype('uint8') y = np.random.random(size=(40, 40, 1)) * 255 # downsized mono image y = y.astype('uint8') yield x, y def preprocess(image, label, cropped_image_size, cropped_label_size, skip_augmentations=False): x = image y = label x_size = cropped_image_size y_size = cropped_label_size if not skip_augmentations: x = tf.keras.layers.RandomFlip(mode="horizontal")(x) y = tf.keras.layers.RandomFlip(mode="horizontal")(y) x = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(x) y = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(y) x = tf.keras.layers.CenterCrop(x_size, x_size)(x) y = tf.keras.layers.CenterCrop(y_size, y_size)(y) return x, y print(tf.__version__) # 2.6.0 dataset = tf.data.Dataset.from_generator(data_gen, output_signature=( tf.TensorSpec(shape=(80, 80, 3), dtype='uint8'), tf.TensorSpec(shape=(40, 40, 1), dtype='uint8') )) crop_only_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=True) train_preprocess_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=False) # This works crop_dataset = dataset.map(crop_only_fn) # This fails: ValueError: Tensor-typed variable initializers must either be wrapped in an 
init_scope or callable train_dataset = dataset.map(train_preprocess_fn) ``` Full-stack trace: ``` Traceback (most recent call last): File "./issue_dataaug.py", line 50, in <module> train_dataset = dataset.map(train_preprocess_fn) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1861, in map return MapDataset(self, map_func, preserve_cardinality=True) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4985, in __init__ use_legacy_function=use_legacy_function) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4218, in __init__ self._function = fn_factory() File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3151, in get_concrete_function *args, **kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3116, in _get_concrete_function_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3308, in _create_graph_function capture_by_value=self._capture_by_value), File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4195, in wrapped_fn ret = wrapper_helper(*args) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4125, in wrapper_helper ret = autograph.tf_convert(self._func, 
ag_ctx)(*nested_args) File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper raise e.ag_error_metadata.to_exception(e) ValueError: in user code: ./issue_dataaug.py:25 preprocess * x = tf.keras.layers.RandomFlip(mode="horizontal")(x) /...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:414 __init__ ** self._rng = make_generator(self.seed) /...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:1375 make_generator return tf.random.Generator.from_non_deterministic_state() /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:396 from_non_deterministic_state return cls(state=state, alg=alg) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:476 __init__ trainable=False) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:489 _create_variable return variables.Variable(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:268 __call__ return cls._variable_v2_call(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:262 _variable_v2_call shape=shape) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:243 <lambda> previous_getter = lambda **kws: default_variable_creator_v2(None, **kws) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py:2675 default_variable_creator_v2 shape=shape) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:270 __call__ return super(VariableMetaclass, cls).__call__(*args, **kwargs) /...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1613 __init__ distribute_strategy=distribute_strategy) 
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1695 _init_from_args raise ValueError("Tensor-typed variable initializers must either be " ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you. ```
2021/12/13
[ "https://Stackoverflow.com/questions/70332071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1213694/" ]
found the solution. ``` private var isLoading = true override fun onCreate(savedInstanceState: Bundle?) { val splashScreen = installSplashScreen() splashScreen.setKeepVisibleCondition { isLoading } } private fun doApiCalls(){ ... isLoading = false } ```
Expanding on @hushed\_voice's answer, *setKeepVisibleCondition()* will keep the Splash Screen on as long as it's returning true. Once it evaluates to false, the Splash will finish and your app will proceed forward. Here is a short function I wrote up to handle my Splash Screen logic in my Main Activity: ``` private fun splashScreen() { val splash = installSplashScreen() splash.setKeepVisibleCondition{ viewModel.initialize() } } ``` Inside my ViewModel, the *initialize()* function does some asynchronous work, after which it returns false. ``` fun initialize(): Boolean { return !isDataReady } ``` Until then my Splash Screen is present, after which it goes away. You should be able to throw in your API calls in this block and use a Reactive library to wait for them to complete before returning false. This is working perfectly for me.
73,879,190
A is an m*n matrix, B is an n*n matrix. I want to return a matrix C of size m\*n such that: [![$C_{ij} = \sum_{k=1}^{n} max(0, a_{ij} - b_{jk}) $](https://i.stack.imgur.com/Uq3SK.png)](https://i.stack.imgur.com/Uq3SK.png) In Python it could be like below ``` for i in range(m): for j in range(n): C[i][j] = 0 for k in range(n): C[i][j] += max(0, A[i][j] - B[j][k]) ``` This runs in O(m\*n^2). If `A[i][j] - B[j][k]` is always > 0 it could easily be improved as ``` C[i][j] = n*A[i][j] - sum(B[j]) ``` But is it also possible to improve it when there are cases of `A[i][j] - B[j][k] < 0`? I think some divide and conquer algorithms might help here but I am not familiar with them.
2022/09/28
[ "https://Stackoverflow.com/questions/73879190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12968928/" ]
I would look at a much simpler construct and go from there. Let's say the max between 0 and the difference wasn't there; then the answer would just be: a(i,j)*n - sum(b(j,)) For that you could just go linearly, summing each vector of b and subtracting it from a(i,j)*n, and because you need to sum each vector of b only once per j it can be done in max(m*n, n*n). Now think about a simple solution for the max problem... if you could find which elements in b(j,) are bigger than a(i,j), you could just ignore their sum and subtract their count from the multiplier of a(i,j). All of that can be done by sorting each vector b(j,) and building a running-sum array over it (this takes n*n*log(n), because you sort each b(j,) vector once). Then you only need a binary search to find where a(i,j) falls in the sorted vector, and subtract the running sum you already built from a(i,j) \* the position found by the binary search. Eventually you'll get O( max( m*n*log(n), n*n*log(n) ) ) I also wrote up an implementation for you: ```py import numpy as np M = 4 N = 7 array = np.random.randint(100, size=(M,N)) array2 = np.random.randint(100, size=(N,N)) def matrixMacossoOperation(a,b, N, M): cSlow = np.empty((M,N)) for i in range(M): for j in range(N): cSlow[i][j] = 0 for k in range(N): cSlow[i][j] += max(0, a[i][j] - b[j][k]) for i in range(N): b[i].sort() sumArr = np.copy(b) for j in range(N): for i in range(N - 1): sumArr[j][i + 1] += sumArr[j][i] c = np.empty((M,N)) for i in range(M): for j in range(N): sumIndex = np.searchsorted(b[j],a[i][j]) if sumIndex == 0: c[i][j] = 0; else: c[i][j] = ((sumIndex) * a[i][j]) - sumArr[j][sumIndex - 1] print(c) assert(np.array_equal(cSlow,c)) matrixMacossoOperation(array,array2,N,M) ```
For each `j`, you can sort the row `B[j][:]` and compute cumulative sums. Then for a given `A[i][j]` you can find the sum of the `B[j][k]` that are smaller than `A[i][j]` in O(log n) time using binary search. If there are `x` elements of `B[j][:]` that are smaller than `A[i][j]` and their sum is S, then `C[i][j] = A[i][j] * x - S` (the remaining terms are clipped to 0 by the max). This gives you an overall O((m+n)n log n) time algorithm.
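A minimal NumPy sketch of the sort-plus-prefix-sums approach described in the answers above (function and variable names are illustrative; a brute-force check is the natural way to validate it):

```python
import numpy as np

def clipped_diff_sums(A, B):
    """C[i][j] = sum_k max(0, A[i][j] - B[j][k]) in O((m+n) n log n)."""
    m, n = A.shape
    sorted_B = np.sort(B, axis=1)         # sort each row of B once: O(n^2 log n)
    prefix = np.cumsum(sorted_B, axis=1)  # running sums per sorted row
    C = np.zeros((m, n), dtype=np.int64)
    for i in range(m):
        for j in range(n):
            # x = how many B[j][k] are strictly smaller than A[i][j];
            # only those contribute a positive A[i][j] - B[j][k] term.
            x = int(np.searchsorted(sorted_B[j], A[i][j], side="left"))
            S = int(prefix[j][x - 1]) if x > 0 else 0
            C[i][j] = int(A[i][j]) * x - S
    return C
```

Elements of `B[j]` equal to `A[i][j]` contribute zero either way, so `side="left"` versus `side="right"` does not change the result.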
62,667,225
For the last 3 days, I have been trying to set up a virtual env in VS Code for Python with some luck, but I have a few questions that I can't seem to find the answer to. 1. Does VS Code have to run in WSL for me to use venv? 2. When I create a venv on my device it doesn't seem to install a Scripts folder inside the venv folder. Is this outdated information or am I installing it incorrectly? I am installing into a Documents folder inside my D: drive using python3 -m venv venv. The folder does install and does run in WSL mode, but I am trying to run it in plain VS Code so I can use other add-ons such as AREPL that don't seem to like being run in WSL. For extra context, I have oh-my-zsh set up and am using the Ubuntu command line on my Windows device. Any information will be helpful at this point because I am losing my mind. [venv folder inside D: drive](https://i.stack.imgur.com/0FHoW.png) [result](https://i.stack.imgur.com/ecbDq.png)
2020/06/30
[ "https://Stackoverflow.com/questions/62667225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13843817/" ]
If you have the python extension installed you should be able to select your python interpreter at the bottom. [![python interpreter selection at the bottom of vscode](https://i.stack.imgur.com/dsdfm.png)](https://i.stack.imgur.com/dsdfm.png) You should then be able to select the appropriate path [![selecting the python interpreter](https://i.stack.imgur.com/fJW40.png)](https://i.stack.imgur.com/fJW40.png)
You don't have to create a virtual environment under WSL, it will work anywhere. But the reason you don't have a `Scripts/` directory is because (I bet) you're running VS Code with git bash and that makes Python think you're running under Unix. In that case it creates a `bin/` directory. That will also confuse VS Code because the extension thinks you're running under Windows. I would either create a virtual environment using a Windows terminal like PowerShell or Command Prompt or use WSL2.
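The `bin/` vs `Scripts/` split the answer describes can be checked directly with the standard-library `venv` module (a sketch; `with_pip=False` is only there to keep environment creation fast and offline):

```python
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "demo_venv")
# Build a bare environment; skipping pip means no ensurepip/network step.
venv.EnvBuilder(with_pip=False).create(target)

# A POSIX-flavoured Python (including one running under git bash or WSL)
# creates bin/, while a native Windows Python creates Scripts/ -- which is
# why a Unix-style interpreter and the VS Code extension can disagree.
scripts_dir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(target, scripts_dir)))
```

Pointing VS Code's interpreter picker at the created environment then means selecting the python executable inside whichever of those two directories exists.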
62,667,225
For the last 3 days, I have been trying to set up a virtual env in VS Code for Python with some luck, but I have a few questions that I can't seem to find the answer to. 1. Does VS Code have to run in WSL for me to use venv? 2. When I create a venv on my device it doesn't seem to install a Scripts folder inside the venv folder. Is this outdated information or am I installing it incorrectly? I am installing into a Documents folder inside my D: drive using python3 -m venv venv. The folder does install and does run in WSL mode, but I am trying to run it in plain VS Code so I can use other add-ons such as AREPL that don't seem to like being run in WSL. For extra context, I have oh-my-zsh set up and am using the Ubuntu command line on my Windows device. Any information will be helpful at this point because I am losing my mind. [venv folder inside D: drive](https://i.stack.imgur.com/0FHoW.png) [result](https://i.stack.imgur.com/ecbDq.png)
2020/06/30
[ "https://Stackoverflow.com/questions/62667225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13843817/" ]
Run **`Set-ExecutionPolicy Unrestricted -scope process`** before activating virtual environment. All the best
You don't have to create a virtual environment under WSL, it will work anywhere. But the reason you don't have a `Scripts/` directory is because (I bet) you're running VS Code with git bash and that makes Python think you're running under Unix. In that case it creates a `bin/` directory. That will also confuse VS Code because the extension thinks you're running under Windows. I would either create a virtual environment using a Windows terminal like PowerShell or Command Prompt or use WSL2.
10,283,067
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play). I created a function that does the following: ``` def playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() ``` I added that function as a command to the button play. I also made a different function to stop music: ``` def stopsound (): pyglet.app.exit ``` I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
2012/04/23
[ "https://Stackoverflow.com/questions/10283067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947933/" ]
You are mixing two UI libraries together - that is not intrinsically bad, but there are some problems. Notably, both of them need a main loop of their own to process their events. TKinter uses it to communicate with the desktop and user-generated events, and in this case, pyglet uses it to play your music. Each of these loops prevents a normal "top down" program flow, as we are used to when we learn non-GUI programming, and the program should proceed basically with callbacks from the main loops. In this case, in the middle of a Tkinter callback, you put the pyglet mainloop (calling `pyglet.app.run`) in motion, and the control never returns to the Tkinter library. Sometimes loops of different libraries can coexist on the same process, with no conflicts -- but of course you will be either running one of them or the other. If so, it may be possible to run each library's mainloop in a different Python thread. If they can not exist together, you will have to deal with each library in a different process. So, one way to make the music player to start in another thread could be: ``` from threading import Thread def real_playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() def playsound(): global player_thread player_thread = Thread(target=real_playsound) player_thread.start() ``` If Tkinter and pyglet can coexist, that should be enough to get your music to start. To be able to control it, however, you will need to implement a couple more things. My suggestion is to have a callback on the pyglet thread that is called by pyglet every second or so -- this callback checks the state of some global variables, and based on them chooses to stop the music, change the file being played, and so on.
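The suggestion at the end of that answer — a playback thread that periodically checks shared state set from the GUI thread — can be sketched without pyglet or Tkinter using plain `threading` (all names here are illustrative, not pyglet API):

```python
import threading
import time

stop_requested = threading.Event()
played = []

def player_loop():
    # Stand-in for the pyglet thread: do a chunk of "playback" per tick,
    # then check whether the GUI thread asked us to stop.
    while not stop_requested.is_set():
        played.append("chunk")
        time.sleep(0.01)

player_thread = threading.Thread(target=player_loop)
player_thread.start()

time.sleep(0.05)      # the GUI thread runs its own event loop meanwhile
stop_requested.set()  # e.g. wired to the Tkinter stop button's command
player_thread.join(timeout=1)
print(player_thread.is_alive())
```

In the real application the stop button's callback would just call `stop_requested.set()`, and the pyglet-side periodic callback would stop the player and exit its loop when it sees the flag.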
There is a media player implementation in the pyglet documentation: <http://www.pyglet.org/doc/programming_guide/playing_sounds_and_music.html> The script you should look at is [media\_player.py](http://www.pyglet.org/doc/programming_guide/media_player.py) Hopefully this will get you started
10,283,067
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play). I created a function that does the following: ``` def playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() ``` I added that function as a command to the button play. I also made a different function to stop music: ``` def stopsound (): pyglet.app.exit ``` I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
2012/04/23
[ "https://Stackoverflow.com/questions/10283067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947933/" ]
I would do something like: ``` import pyglet from pyglet.gl import * class main (pyglet.window.Window): def __init__ (self): super(main, self).__init__(800, 600, fullscreen = False) self.button_texture = pyglet.image.load('button.png') self.button = pyglet.sprite.Sprite(self.button_texture) self.sound = pyglet.media.load('music.mp3') self.sound.play() self.alive = 1 def on_draw(self): self.render() def on_close(self): self.alive = 0 def on_mouse_press(self, x, y, button, modifiers): if x > self.button.x and x < (self.button.x + self.button_texture.width): if y > self.button.y and y < (self.button.y + self.button_texture.height): self.alive = 0 def on_key_press(self, symbol, modifiers): if symbol == 65307: # [ESC] self.alive = 0 def render(self): self.clear() self.button.draw() self.flip() def run(self): while self.alive == 1: self.render() # -----------> This is key <---------- # This is what replaces pyglet.app.run() # but is required for the GUI to not freeze # event = self.dispatch_events() x = main() x.run() ```
There is a media player implementation in the pyglet documentation: <http://www.pyglet.org/doc/programming_guide/playing_sounds_and_music.html> The script you should look at is [media\_player.py](http://www.pyglet.org/doc/programming_guide/media_player.py) Hopefully this will get you started
10,283,067
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play). I created a function that does the following: ``` def playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() ``` I added that function as a command to the button play. I also made a different function to stop music: ``` def stopsound (): pyglet.app.exit ``` I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
2012/04/23
[ "https://Stackoverflow.com/questions/10283067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947933/" ]
This solution is the easiest one: ``` import pyglet foo=pyglet.media.load("/data/Me/Music/Goo Goo Dolls/[1998] Dizzy Up The Girl/11 - Iris.mp3") foo.play() def exiter(dt): pyglet.app.exit() print "Song length is: %f" % foo.duration # foo.duration is the song length pyglet.clock.schedule_once(exiter, foo.duration) pyglet.app.run() ``` source: <http://ubuntuforums.org/showthread.php?t=1651906>
There is a media player implementation in the pyglet documentation: <http://www.pyglet.org/doc/programming_guide/playing_sounds_and_music.html> The script you should look at is [media\_player.py](http://www.pyglet.org/doc/programming_guide/media_player.py) Hopefully this will get you started
10,283,067
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play). I created a function that does the following: ``` def playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() ``` I added that function as a command to the button play. I also made a different function to stop music: ``` def stopsound (): pyglet.app.exit ``` I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
2012/04/23
[ "https://Stackoverflow.com/questions/10283067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947933/" ]
You are mixing two UI libraries together - that is not intrinsically bad, but there are some problems. Notably, both of them need a main loop of their own to process their events. TKinter uses it to communicate with the desktop and user-generated events, and in this case, pyglet uses it to play your music. Each of these loops prevents a normal "top down" program flow, as we are used to when we learn non-GUI programming, and the program should proceed basically with callbacks from the main loops. In this case, in the middle of a Tkinter callback, you put the pyglet mainloop (calling `pyglet.app.run`) in motion, and the control never returns to the Tkinter library. Sometimes loops of different libraries can coexist on the same process, with no conflicts -- but of course you will be either running one of them or the other. If so, it may be possible to run each library's mainloop in a different Python thread. If they can not exist together, you will have to deal with each library in a different process. So, one way to make the music player to start in another thread could be: ``` from threading import Thread def real_playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() def playsound(): global player_thread player_thread = Thread(target=real_playsound) player_thread.start() ``` If Tkinter and pyglet can coexist, that should be enough to get your music to start. To be able to control it, however, you will need to implement a couple more things. My suggestion is to have a callback on the pyglet thread that is called by pyglet every second or so -- this callback checks the state of some global variables, and based on them chooses to stop the music, change the file being played, and so on.
I would do something like: ``` import pyglet from pyglet.gl import * class main (pyglet.window.Window): def __init__ (self): super(main, self).__init__(800, 600, fullscreen = False) self.button_texture = pyglet.image.load('button.png') self.button = pyglet.sprite.Sprite(self.button_texture) self.sound = pyglet.media.load('music.mp3') self.sound.play() self.alive = 1 def on_draw(self): self.render() def on_close(self): self.alive = 0 def on_mouse_press(self, x, y, button, modifiers): if x > self.button.x and x < (self.button.x + self.button_texture.width): if y > self.button.y and y < (self.button.y + self.button_texture.height): self.alive = 0 def on_key_press(self, symbol, modifiers): if symbol == 65307: # [ESC] self.alive = 0 def render(self): self.clear() self.button.draw() self.flip() def run(self): while self.alive == 1: self.render() # -----------> This is key <---------- # This is what replaces pyglet.app.run() # but is required for the GUI to not freeze # event = self.dispatch_events() x = main() x.run() ```
10,283,067
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play). I created a function that does the following: ``` def playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() ``` I added that function as a command to the button play. I also made a different function to stop music: ``` def stopsound (): pyglet.app.exit ``` I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
2012/04/23
[ "https://Stackoverflow.com/questions/10283067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947933/" ]
You are mixing two UI libraries together - that is not intrinsically bad, but there are some problems. Notably, both of them need a main loop of their own to process their events. TKinter uses it to communicate with the desktop and user-generated events, and in this case, pyglet uses it to play your music. Each of these loops prevents a normal "top down" program flow, as we are used to when we learn non-GUI programming, and the program should proceed basically with callbacks from the main loops. In this case, in the middle of a Tkinter callback, you put the pyglet mainloop (calling `pyglet.app.run`) in motion, and the control never returns to the Tkinter library. Sometimes loops of different libraries can coexist on the same process, with no conflicts -- but of course you will be either running one of them or the other. If so, it may be possible to run each library's mainloop in a different Python thread. If they can not exist together, you will have to deal with each library in a different process. So, one way to make the music player to start in another thread could be: ``` from threading import Thread def real_playsound () : sound = pyglet.media.load('music.mp3') sound.play() pyglet.app.run() def playsound(): global player_thread player_thread = Thread(target=real_playsound) player_thread.start() ``` If Tkinter and pyglet can coexist, that should be enough to get your music to start. To be able to control it, however, you will need to implement a couple more things. My suggestion is to have a callback on the pyglet thread that is called by pyglet every second or so -- this callback checks the state of some global variables, and based on them chooses to stop the music, change the file being played, and so on.
This solution is the easiest one: ``` import pyglet foo=pyglet.media.load("/data/Me/Music/Goo Goo Dolls/[1998] Dizzy Up The Girl/11 - Iris.mp3") foo.play() def exiter(dt): pyglet.app.exit() print "Song length is: %f" % foo.duration # foo.duration is the song length pyglet.clock.schedule_once(exiter, foo.duration) pyglet.app.run() ``` source: <http://ubuntuforums.org/showthread.php?t=1651906>
26,943,578
I'm making a simple guessing game using tkinter for my python class and was wondering if there was a way to loop it so that the player would have a maximum number of guesses before the program tells the player what the number was and changes the number, or kill the program after it tells them the answer. Heres my code so far: ``` # This program is a number guessing game using tkinter gui. # Import all the necessary libraries. import tkinter import tkinter.messagebox import random # Set the variables. number = random.randint(1,80) attempts = 0 # Start coding the GUI class numbergameGUI: def __init__(self): # Create the main window. self.main_window = tkinter.Tk() # Create four frames to group widgets. self.top_frame = tkinter.Frame() self.mid_frame1 = tkinter.Frame() self.mid_frame2 = tkinter.Frame() self.bottom_frame = tkinter.Frame() # Create the widget for the top frame. self.top_label = tkinter.Label(self.top_frame, \ text='The number guessing game!') # Pack the widget for the top frame. self.top_label.pack(side='left') # Create the widgets for the upper middle frame self.prompt_label = tkinter.Label(self.mid_frame1, \ text='Guess the number I\'m thinking of:') self.guess_entry = tkinter.Entry(self.mid_frame1, \ width=10) # Pack the widgets for the upper middle frame. self.prompt_label.pack(side='left') self.guess_entry.pack(side='left') # Create the widget for the bottom middle frame. self.descr_label = tkinter.Label(self.mid_frame2, \ text='Your Guess is:') self.value = tkinter.StringVar() # This tells user if guess was too high or low. self.guess_label = tkinter.Label(self.mid_frame2, \ textvariable=self.value) # Pack the middle frame's widgets. self.descr_label.pack(side='left') self.guess_label.pack(side='left') # Create the button widgets for the bottom frame. 
self.guess_button = tkinter.Button(self.bottom_frame, \ text='Guess', \ command=self.guess,) self.quit_button = tkinter.Button(self.bottom_frame, \ text='Quit', \ command=self.main_window.destroy) # Pack the buttons. self.guess_button.pack(side='left') self.quit_button.pack(side='left') # Pack the frames self.top_frame.pack() self.mid_frame1.pack() self.mid_frame2.pack() self.bottom_frame.pack() # Enter the tkinter main loop. tkinter.mainloop() # Define guess def guess(self): # Get the number they guessed. guess1 = int(self.guess_entry.get()) # sattempts +=1 # Tell player too low if their guess was too low. if guess1 < number: self.value.set('too low') # Tell player too high if their guess was too high. elif guess1 > number: self.value.set('too high') # End the loop if the player attempts the correct number. if guess1 == number: tkinter.messagebox.showinfo('Result', 'Congratulations! You guessed right!') start = numbergameGUI() ``` I tried to put a while loop inside of the guess function because I did that before the program was using tkinter but I haven't been able to get it to work yet.
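A possible way to add the attempt limit the question asks about is to keep the counting logic separate from the Tkinter widgets so it is easy to test on its own (class and method names below are illustrative, not from the original code):

```python
class GuessLimiter:
    """Tracks guesses against a secret number with a maximum attempt count."""

    def __init__(self, number, max_attempts=5):
        self.number = number
        self.max_attempts = max_attempts
        self.attempts = 0

    def guess(self, value):
        if self.attempts >= self.max_attempts:
            # Reveal the answer once the player is out of guesses.
            return "out of attempts, the number was %d" % self.number
        self.attempts += 1
        if value < self.number:
            return "too low"
        if value > self.number:
            return "too high"
        return "correct"

limiter = GuessLimiter(number=42, max_attempts=3)
print(limiter.guess(10), limiter.guess(90), limiter.guess(50), limiter.guess(42))
```

Inside the Tkinter `guess` callback you would call `limiter.guess(...)`, put the returned string into `self.value`, and on the "out of attempts" result show the message box and replace `limiter` with a fresh instance built around a new `random.randint` number.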
2014/11/15
[ "https://Stackoverflow.com/questions/26943578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4255017/" ]
You need to set the visibility of `$secret` ``` private $secret = ""; ``` Then just remove that casting on the base64 and use `$this->secret` to access the property: ``` return base64_encode($this->secret); ``` So finally: ``` class mySimpleClass { // public $secret = ""; private $secret = ''; public function __construct($s) { $this->secret = $s; } public function getSecret() { return base64_encode($this->secret); } } ```
I suggest declaring `$secret` as `public` or `private` and accessing it using `$this->`. Example: ``` class mySimpleClass { public $secret = ""; public function __construct($s) { $this->secret = $s; } public function getSecret() { return base64_encode($this->secret); } } ```
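Returning to the guessing-game question at the top of this entry: in tkinter no loop is needed at all, because the button callback fires once per guess, so it is enough to keep a counter and check it there. The sketch below factors that logic out of the GUI so it can be reasoned about on its own; the names (`MAX_ATTEMPTS`, `check_guess`) are illustrative and not part of the original program.

```python
MAX_ATTEMPTS = 5  # illustrative limit; the original question leaves this open

def check_guess(guess, number, attempts):
    """Evaluate one guess; returns (message, game_over, new_attempt_count)."""
    attempts += 1
    if guess == number:
        return "Congratulations! You guessed right!", True, attempts
    if attempts >= MAX_ATTEMPTS:
        # Reveal the answer once the player runs out of guesses.
        return "Out of guesses! The number was %d" % number, True, attempts
    return ("too low" if guess < number else "too high"), False, attempts
```

In the tkinter version, `guess()` would call `check_guess`, store the returned attempt count on `self`, and when `game_over` is true either pick a fresh `random.randint(1, 80)` and reset the counter, or call `self.main_window.destroy()` to end the program.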
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
You could create e.g. a `utils` file, export your helpers from there, and import them when needed: ``` // utils.js export function romanize(str) { // ... } export function getDocumentType(doc) { // ... } // App.js import { romanize } from './utils'; ```
The "react way" is to structure these files in the way that makes most sense for your application. Let me give you some examples of what react applications tend to look like to help you out. React has a declarative tree structure for the view, and other related concepts have a tendency to fall into this declarative tree structure form as-well. Let's look at two examples, one where the paradigm relates to the view hierarchy and one where it does not. For one where it does not, we can think about your domain model. You may need to structure local state in stores that resemble your business model. You business model will usually look different from your view hierarchy, so we would have a separate hierarchy for this. But what about the places where the business model needs to connect to the view layer. Since we are specifying data on a per component bases. Even though it isn't the view or styles or how the component behaves, this is still colocated in the same folder hierarchy as the react component because it fits into the same conceptual structure. Now, there is your question of utilities. There are many approaches to this. 1. If they are all small and specific to your application but not any part, you can put them in the root under utils. 2. If there are a lot of utils and they fit into a structure separate from any of your existing hierarchies, make a new hierarchy. 3. If they are independent from your application, either of the above approaches could become an npm package. 4. If they relate to certain parts of your app, you can put them at the highest point in the hierarchy such that everything that uses the utility is beneath the directory where the utility lives.
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
There are some situations where you will need helper functions like these, and setting those up in a util or helpers folder is a great way to handle that. However, to take full advantage of React, I'd suggest thinking about whether there is a way you could make a shared component instead. For functions such as your romanize function, you can make a React component that formats the number you pass it and displays it in a span. This is the same approach React libraries use; for example, the `react-intl` library recommends using their `<FormattedMessage />` component instead of their `formatMessage` helper function. For example, ``` const RomanNumeral = ({ number }) => { // romanize logic here return <span>{result}</span> } ``` Then you can use it like so: ``` <RomanNumeral number={5} /> ```
The "react way" is to structure these files in the way that makes most sense for your application. Let me give you some examples of what react applications tend to look like to help you out. React has a declarative tree structure for the view, and other related concepts have a tendency to fall into this declarative tree structure form as-well. Let's look at two examples, one where the paradigm relates to the view hierarchy and one where it does not. For one where it does not, we can think about your domain model. You may need to structure local state in stores that resemble your business model. You business model will usually look different from your view hierarchy, so we would have a separate hierarchy for this. But what about the places where the business model needs to connect to the view layer. Since we are specifying data on a per component bases. Even though it isn't the view or styles or how the component behaves, this is still colocated in the same folder hierarchy as the react component because it fits into the same conceptual structure. Now, there is your question of utilities. There are many approaches to this. 1. If they are all small and specific to your application but not any part, you can put them in the root under utils. 2. If there are a lot of utils and they fit into a structure separate from any of your existing hierarchies, make a new hierarchy. 3. If they are independent from your application, either of the above approaches could become an npm package. 4. If they relate to certain parts of your app, you can put them at the highest point in the hierarchy such that everything that uses the utility is beneath the directory where the utility lives.
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
The "react way" is to structure these files in the way that makes most sense for your application. Let me give you some examples of what react applications tend to look like to help you out. React has a declarative tree structure for the view, and other related concepts have a tendency to fall into this declarative tree structure form as-well. Let's look at two examples, one where the paradigm relates to the view hierarchy and one where it does not. For one where it does not, we can think about your domain model. You may need to structure local state in stores that resemble your business model. You business model will usually look different from your view hierarchy, so we would have a separate hierarchy for this. But what about the places where the business model needs to connect to the view layer. Since we are specifying data on a per component bases. Even though it isn't the view or styles or how the component behaves, this is still colocated in the same folder hierarchy as the react component because it fits into the same conceptual structure. Now, there is your question of utilities. There are many approaches to this. 1. If they are all small and specific to your application but not any part, you can put them in the root under utils. 2. If there are a lot of utils and they fit into a structure separate from any of your existing hierarchies, make a new hierarchy. 3. If they are independent from your application, either of the above approaches could become an npm package. 4. If they relate to certain parts of your app, you can put them at the highest point in the hierarchy such that everything that uses the utility is beneath the directory where the utility lives.
Shared components are definitely the React way, but having reusable functions in a utility/helper folder is always handy. Here is how I would do it. You could create a `utility` folder inside the `src` folder where you export all the reusable functions: ``` --| src ----| utility -------| formatDate.js -------| formatCurrency.js -------| romanize.js ----| components ----| hooks ----| api ``` Then you can import the functions inside your components.
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
You could create e.g. a `utils` file, export your helpers from there, and import them when needed: ``` // utils.js export function romanize(str) { // ... } export function getDocumentType(doc) { // ... } // App.js import { romanize } from './utils'; ```
There are some situations where you will need helper functions like these, and setting those up in a util or helpers folder is a great way to handle that. However, to take full advantage of React, I'd suggest thinking about whether there is a way you could make a shared component instead. For functions such as your romanize function, you can make a React component that formats the number you pass it and displays it in a span. This is the same approach React libraries use; for example, the `react-intl` library recommends using their `<FormattedMessage />` component instead of their `formatMessage` helper function. For example, ``` const RomanNumeral = ({ number }) => { // romanize logic here return <span>{result}</span> } ``` Then you can use it like so: ``` <RomanNumeral number={5} /> ```
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
You could create e.g. a `utils` file, export your helpers from there, and import them when needed: ``` // utils.js export function romanize(str) { // ... } export function getDocumentType(doc) { // ... } // App.js import { romanize } from './utils'; ```
Shared components are definitely the React way, but having reusable functions in a utility/helper folder is always handy. Here is how I would do it. You could create a `utility` folder inside the `src` folder where you export all the reusable functions: ``` --| src ----| utility -------| formatDate.js -------| formatCurrency.js -------| romanize.js ----| components ----| hooks ----| api ``` Then you can import the functions inside your components.
51,309,341
So I'm trying to send a saved wave file from a client to a server with a socket; however, every attempt at doing it fails. The closest I've got is this: ``` #Server.py requests = 0 while True: wavfile = open(str(requests)+str(addr)+".wav", "wb") while True: data = clientsocket.recv(1024) if not data: break requests = requests+1 wavefile.write(data) #Client.py bytes = open("senddata", "rb") networkmanager.send(bytes.encode()) ``` The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
2018/07/12
[ "https://Stackoverflow.com/questions/51309341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9995176/" ]
There are some situations where you will need helper functions like these, and setting those up in a util or helpers folder is a great way to handle that. However, to take full advantage of React, I'd suggest thinking about whether there is a way you could make a shared component instead. For functions such as your romanize function, you can make a React component that formats the number you pass it and displays it in a span. This is the same approach React libraries use; for example, the `react-intl` library recommends using their `<FormattedMessage />` component instead of their `formatMessage` helper function. For example, ``` const RomanNumeral = ({ number }) => { // romanize logic here return <span>{result}</span> } ``` Then you can use it like so: ``` <RomanNumeral number={5} /> ```
Shared components are definitely the React way, but having reusable functions in a utility/helper folder is always handy. Here is how I would do it. You could create a `utility` folder inside the `src` folder where you export all the reusable functions: ``` --| src ----| utility -------| formatDate.js -------| formatCurrency.js -------| romanize.js ----| components ----| hooks ----| api ``` Then you can import the functions inside your components.
64,708,781
I am trying to use multiprocessing in order to run a CPU-intensive job in the background. I'd like this process to be able to use peewee ORM to write its results to the SQLite database. In order to do so, I am trying to override the Meta.database of my model class after thread creation so that I can have a separate db connection for my new process. ``` def get_db(): db = SqliteExtDatabase(path) return db class BaseModel(Model): class Meta: database = get_db() # Many other models class Batch(BaseModel): def multi(): def background_proc(): # trying to override Meta's db connection. BaseModel._meta.database = get_db() job = Job.get_by_id(1) print("working in the background") process = multiprocessing.Process(target=background_proc) process.start() ``` Error when executing `my_batch.multi()` ``` Process Process-1: Traceback (most recent call last): File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3099, in execute_sql cursor.execute(sql, params or ()) sqlite3.OperationalError: disk I/O error During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/layne/.pyenv/versions/3.7.6/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "/Users/layne/.pyenv/versions/3.7.6/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "/Users/layne/Desktop/pydatasci/pydatasci/aidb/__init__.py", line 1249, in background_proc job = Job.get_by_id(1) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6395, in get_by_id return cls.get(cls._meta.primary_key == pk) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6384, in get return sq.get() File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6807, in get return clone.execute(database)[0] File 
"/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 1886, in inner return method(self, database, *args, **kwargs) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 1957, in execute return self._execute(database) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 2129, in _execute cursor = database.execute(self) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3112, in execute return self.execute_sql(sql, params, commit=commit) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3106, in execute_sql self.commit() File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 2873, in __exit__ reraise(new_type, new_type(exc_value, *exc_args), traceback) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 183, in reraise raise value.with_traceback(tb) File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3099, in execute_sql cursor.execute(sql, params or ()) peewee.OperationalError: disk I/O error ``` I got this working using threads instead, but it's hard to actually terminate a thread (not just break from a loop) and CPU-intensive (not io delayed) jobs should be multiprocessed. UPDATE: looking into peewee proxy <http://docs.peewee-orm.com/en/latest/peewee/database.html#dynamically-defining-a-database>
2020/11/06
[ "https://Stackoverflow.com/questions/64708781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5739514/" ]
Haskell doesn't allow this because it would be ambiguous. The value constructor `Const` is effectively a function, which may be clearer if you ask GHCi about its type: ``` > :t Const Const :: Bool -> Prop ``` If you attempt to add one more `Const` constructor in the same module, you'd have two 'functions' called `Const` in the same module. You can't have that.
This is somewhat horrible, but will basically let you do what you want: ```hs {-# LANGUAGE PatternSynonyms, TypeFamilies, ViewPatterns #-} data Prop = PropConst Bool | PropVar Char | PropNot Prop | PropOr Prop Prop | PropAnd Prop Prop | PropImply Prop Prop data Formula = FormulaConst Bool | FormulaVar Prop | FormulaNot Formula | FormulaAnd Formula Formula | FormulaOr Formula Formula | FormulaImply Formula Formula class PropOrFormula t where type Var t constructConst :: Bool -> t deconstructConst :: t -> Maybe Bool constructVar :: Var t -> t deconstructVar :: t -> Maybe (Var t) constructNot :: t -> t deconstructNot :: t -> Maybe t constructOr :: t -> t -> t deconstructOr :: t -> Maybe (t, t) constructAnd :: t -> t -> t deconstructAnd :: t -> Maybe (t, t) constructImply :: t -> t -> t deconstructImply :: t -> Maybe (t, t) instance PropOrFormula Prop where type Var Prop = Char constructConst = PropConst deconstructConst (PropConst x) = Just x deconstructConst _ = Nothing constructVar = PropVar deconstructVar (PropVar x) = Just x deconstructVar _ = Nothing constructNot = PropNot deconstructNot (PropNot x) = Just x deconstructNot _ = Nothing constructOr = PropOr deconstructOr (PropOr x y) = Just (x, y) deconstructOr _ = Nothing constructAnd = PropAnd deconstructAnd (PropAnd x y) = Just (x, y) deconstructAnd _ = Nothing constructImply = PropImply deconstructImply (PropImply x y) = Just (x, y) deconstructImply _ = Nothing instance PropOrFormula Formula where type Var Formula = Prop constructConst = FormulaConst deconstructConst (FormulaConst x) = Just x deconstructConst _ = Nothing constructVar = FormulaVar deconstructVar (FormulaVar x) = Just x deconstructVar _ = Nothing constructNot = FormulaNot deconstructNot (FormulaNot x) = Just x deconstructNot _ = Nothing constructOr = FormulaOr deconstructOr (FormulaOr x y) = Just (x, y) deconstructOr _ = Nothing constructAnd = FormulaAnd deconstructAnd (FormulaAnd x y) = Just (x, y) deconstructAnd _ = Nothing constructImply = 
FormulaImply deconstructImply (FormulaImply x y) = Just (x, y) deconstructImply _ = Nothing pattern Const x <- (deconstructConst -> Just x) where Const x = constructConst x pattern Var x <- (deconstructVar -> Just x) where Var x = constructVar x pattern Not x <- (deconstructNot -> Just x) where Not x = constructNot x pattern Or x y <- (deconstructOr -> Just (x, y)) where Or x y = constructOr x y pattern And x y <- (deconstructAnd -> Just (x, y)) where And x y = constructAnd x y pattern Imply x y <- (deconstructImply -> Just (x, y)) where Imply x y = constructImply x y {-# COMPLETE Const, Var, Not, Or, And, Imply :: Prop #-} {-# COMPLETE Const, Var, Not, Or, And, Imply :: Formula #-} ``` If <https://gitlab.haskell.org/ghc/ghc/-/issues/8583> were ever done, then this could be substantially cleaned up.
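Coming back to the peewee/multiprocessing question above: the underlying rule with SQLite is that each process must open its own connection rather than reuse one inherited across `fork`, which is what overriding `Meta.database` after process start is trying to achieve. The sketch below demonstrates that principle with only the standard library, so it makes no claims about peewee's API; the table, file, and function names are illustrative. With peewee itself, the `Proxy` pattern mentioned in the question's update serves the same purpose: bind the models to a placeholder and call `initialize()` with a fresh database object inside each process.

```python
import multiprocessing
import os
import sqlite3
import tempfile

def worker(db_path, value):
    # Open a *fresh* connection inside the child process; reusing a
    # connection object inherited from the parent is what typically
    # produces "disk I/O error"-style failures with SQLite.
    conn = sqlite3.connect(db_path)
    conn.execute("INSERT INTO job (result) VALUES (?)", (value,))
    conn.commit()
    conn.close()

def run_demo():
    db_path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
    # Parent sets up the schema with its own short-lived connection.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE job (result INTEGER)")
    conn.commit()
    conn.close()
    proc = multiprocessing.Process(target=worker, args=(db_path, 42))
    proc.start()
    proc.join()
    # Read back what the child wrote, again over a new connection.
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT result FROM job").fetchall()
    conn.close()
    return rows
```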
46,700,236
When I use TensorFlow, I get this error: ``` [W 09:27:49.213 NotebookApp] 404 GET /api/kernels/4e889506-2258-481c-b18e-d6a8e920b606/channels?session_id=0665F3F07C004BBAA7CDF6601B6E2BA1 (127.0.0.1): Kernel does not exist: 4e889506-2258-481c-b18e-d6a8e920b606 [W 09:27:49.266 NotebookApp] 404 GET /api/kernels/4e889506-2258-481c-b18e-d6a8e920b606/channels?session_id=0665F3F07C004BBAA7CDF6601B6E2BA1 (127.0.0.1) 340.85ms referer=None [W 09:27:50.337 NotebookApp] /home/dxq/g++ doesn't exist [W 09:27:50.514 NotebookApp] /home/dxq/gcc doesn't exist [I 09:28:03.159 NotebookApp] Kernel started: aa5e56b4-df58-4e74-8dc1-96a4cee847aa [I 09:28:04.032 NotebookApp] Adapting to protocol v5.1 for kernel aa5e56b4-df58-4e74-8dc1-96a4cee847aa I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally E tensorflow/core/common_runtime/direct_session.cc:132] Internal: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuCtxCreate: CUDA_ERROR_OUT_OF_MEMORY; total memory reported: 18446744071514750976 ``` What's wrong here? Here is the full spec: ``` ubuntu 16.04 cuda:8.0 python 2.7 ```
2017/10/12
[ "https://Stackoverflow.com/questions/46700236", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7786383/" ]
Run the following command in a terminal: ``` nvidia-smi ``` You will get an output like this. [![enter image description here](https://i.stack.imgur.com/FtqZs.png)](https://i.stack.imgur.com/FtqZs.png) It shows a summary of the processes occupying your GPU's memory. With notebooks, even if no cell is currently running, a kernel that ran previously and whose local server is still up will keep its memory allocated. You will have to stop whichever process is occupying the most memory to free up enough for your current process to run.
Check the cuDNN version. It should be 5.1
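If the memory is held by TensorFlow itself rather than a stale kernel, a common complementary step for the 1.x-era versions shown in the question's log is to stop TensorFlow from grabbing the whole GPU up front. This is a configuration sketch only (it assumes a TF 1.x install with a working CUDA setup), not a guaranteed fix for this particular traceback:

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand
# or cap the fraction of the card a single process may take:
# config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
```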
23,790,460
I am new to Python and I installed the [`speech`](https://pypi.python.org/pypi/speech) library. But whenever I import `speech` from the Python shell, it gives the error ``` >>> import speech Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import speech File "C:\Python34\lib\site-packages\speech-0.5.2-py3.4.egg\speech.py", line 55, in <module> from win32com.client import constants as _constants File "C:\Python34\lib\site-packages\win32com\__init__.py", line 5, in <module> import win32api, sys, os ImportError: DLL load failed: The specified module could not be found. ```
2014/05/21
[ "https://Stackoverflow.com/questions/23790460", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3661976/" ]
So I have contacted the author of R2jags and he has added an additional argument to jags.parallel that lets you pass envir, which is then passed on to clusterExport. This works well, except that it allows clashes between the names of my data and variables inside the jags.parallel function.
If you use JAGS intensively in parallel, I can suggest looking at the package `rjags` combined with the package `dclone`. I think `dclone` is really powerful because the syntax is exactly the same as `rjags`. I have never seen your problem with this package. If you want to use `R2jags`, I think you need to pass your variables and your init function to the workers with the function: `clusterExport(cl, list("jags.data", "jags.params", "jags.inits"))`
23,790,460
I am new to Python and I installed the [`speech`](https://pypi.python.org/pypi/speech) library. But whenever I import `speech` from the Python shell, it gives the error ``` >>> import speech Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import speech File "C:\Python34\lib\site-packages\speech-0.5.2-py3.4.egg\speech.py", line 55, in <module> from win32com.client import constants as _constants File "C:\Python34\lib\site-packages\win32com\__init__.py", line 5, in <module> import win32api, sys, os ImportError: DLL load failed: The specified module could not be found. ```
2014/05/21
[ "https://Stackoverflow.com/questions/23790460", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3661976/" ]
So I have contacted the author of R2jags and he has added an additional argument to jags.parallel that lets you pass envir, which is then passed on to clusterExport. This works well, except that it allows clashes between the names of my data and variables inside the jags.parallel function.
Without changing the code of `R2jags`, you can still assign those data variables to the global environment in an easier way by using `list2env`. Obviously, there is is a concern that those variable names could be overwritten in the global environment, but you probably can control for that. Below is the same code as the example given in the original post except I put the data into a list and sent that list's data into the global environment using the `list2env` function. (Also I took out the unused "out" variable in the function.) This currently runs fine for me; you may have to add more chains and/or add more iterations to see the parallelism in action, though. ``` testparallel <- function(){ library(R2jags) model.file <- system.file(package="R2jags", "model", "schools.txt") # Make a list of the data with named items. jags.data.v2 <- list( J=8.0, y=c(28.4,7.9,-2.8,6.8,-0.6,0.6,18.0,12.2), sd=c(14.9,10.2,16.3,11.0,9.4,11.4,10.4,17.6) ) # Store all that data explicitly in the globalenv() as # was previosly suggesting using the assign(...) function. # This will do that for you. # Now R2jags will have access to the data without you having # to explicitly "assign" each to the globalenv. list2env( jags.data.v2, envir=globalenv() ) jags.params <- c("mu","sigma","theta") jags.inits <- function(){ list("mu"=rnorm(1),"sigma"=runif(1),"theta"=rnorm(J)) } jagsfit <- jags.parallel( data=names(jags.data.v2), inits=jags.inits, jags.params, n.iter=5000, model.file=model.file) return(jagsfit) } ```
61,163,289
I am pretty new to Python and I am trying to swap the values of some variables in my code below: ``` def MutationPop(LocalBestInd,clmns,VNSdata): import random MutPop = [] for i in range(0,VNSdata[1]): tmpMutPop = LocalBestInd #generation of random numbers RandomNums = [] while len(RandomNums) < 2: r = random.randint(0,clmns-1) if r not in RandomNums: RandomNums.append(r) RandomNums = sorted(RandomNums) #apply swap to berths tmpMutPop[0][RandomNums[0]] = LocalBestInd[0][RandomNums[1]] tmpMutPop[0][RandomNums[1]] = LocalBestInd[0][RandomNums[0]] #generation of random numbers RandomNums = [] while len(RandomNums) < 2: r = random.randint(0,clmns-1) if r not in RandomNums: RandomNums.append(r) RandomNums = sorted(RandomNums) #apply swap to vessels tmpMutPop[1][RandomNums[0]] = LocalBestInd[1][RandomNums[1]] tmpMutPop[1][RandomNums[1]] = LocalBestInd[1][RandomNums[0]] MutPop.append(tmpMutPop) Neighborhood = MutPop return(Neighborhood) ``` My problem is that I do not want to change the variable "`LocalBestInd`"; I want to use it only as a reference to generate new "tmpMutPop"s in the loop. However, the code modifies "`LocalBestInd`" every time the loop assigns to "`tmpMutPop`". The same problem happens for other assignments (e.g., `tmpMutPop[1][RandomNums[1]] = LocalBestInd[1][RandomNums[0]]`) in this code. Would you please help me solve this problem? Thank you Masoud
2020/04/11
[ "https://Stackoverflow.com/questions/61163289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13271276/" ]
Based on the tutorial, it looks like you've missed a crucial step: you need to install the `google-maps-react` dependency in your project. In your console, navigate to your project root directory and run the following: ``` npm install --save google-maps-react ``` Another troubleshooting step for those who are stuck is to DELETE your `node_modules` folder and then run `npm install` in the console. This will reinstall all the required dependencies for your project. --- **Note:** Considering you've accidentally installed `google-map-react` instead of `google-maps-react`, I recommend uninstalling `google-map-react` since it's not being used. Do that by running the following in your console: ``` npm uninstall --save google-map-react ```
I had the same issue. I fixed it by adding `declare module 'google-map-react';` in the file `react-app-env.d.ts`. Try it out and give feedback. By the way, I am using TS with React.
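On the swap question above: the root cause is that `tmpMutPop = LocalBestInd` copies a reference, not the data, so every in-place change to `tmpMutPop` also shows up in `LocalBestInd`. A sketch of the fix using `copy.deepcopy` follows; the function and variable names are illustrative simplifications, not the original code.

```python
import copy
import random

def mutate_population(local_best, n_neighbours, n_cols):
    """Generate neighbours by swapping two columns in an independent copy."""
    neighbourhood = []
    for _ in range(n_neighbours):
        # deepcopy duplicates the nested lists; plain assignment would
        # merely create a second name for the same list objects.
        candidate = copy.deepcopy(local_best)
        i, j = random.sample(range(n_cols), 2)  # two distinct positions
        for row in candidate:
            row[i], row[j] = row[j], row[i]  # tuple swap, no temporaries
        neighbourhood.append(candidate)
    return neighbourhood
```

Once the copy is independent, reading from `local_best` while writing to `candidate` (as the original two-assignment swap does) is also safe; the tuple swap shown here is simply the more idiomatic form.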
36,531,404
I have a scenario, where I have to call a certain Python script multiple times in another python script. script1: ``` import sys path=sys.argv print "I am a test" print "see! I do nothing productive." print "path:",path[1] ``` script2: ``` import subprocess l=list() l.append('root') l.append('root1') l.append('root2') for i in l: cmd="python script1.py i" subprocess.Popen(cmd,shell=True) ``` Here, my issue is that in script 2, I am not able to replace the value of "i" in the for loop. Can you help with that?
2016/04/10
[ "https://Stackoverflow.com/questions/36531404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5820814/" ]
To substitute the value of i into the string, you can concatenate it: ``` cmd="python script1.py "+i ``` or format it into the string: ``` cmd="python script1.py %s"%i ``` Either way, you need to use the variable i instead of the literal string "i".
I think you are looking for this: ``` cmd="python script1.py %s" % i ```
36,531,404
I have a scenario, where I have to call a certain Python script multiple times in another python script. script1: ``` import sys path=sys.argv print "I am a test" print "see! I do nothing productive." print "path:",path[1] ``` script2: ``` import subprocess l=list() l.append('root') l.append('root1') l.append('root2') for i in l: cmd="python script1.py i" subprocess.Popen(cmd,shell=True) ``` Here, my issue is that in script 2, I am not able to replace the value of "i" in the for loop. Can you help with that?
2016/04/10
[ "https://Stackoverflow.com/questions/36531404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5820814/" ]
To substitute the value of `i` into the string, you can concatenate it: ``` cmd="python script1.py "+i ``` or format it into the string: ``` cmd="python script1.py %s"%i ``` Either way, you need to use the variable `i` rather than putting the letter i inside the string literal.
Use `subprocess.Popen` with a list: ``` import subprocess paths = ['root', 'root1', 'root2'] for path in paths: subprocess.Popen(['python', 'script1.py', path]) ```
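A short, runnable sketch of the list-based call above. It uses `sys.executable` (so it works regardless of how Python is installed) and a trivial inline script in place of the asker's `script1.py`, and `subprocess.run` (Python 3.7+) instead of `Popen` so the output can be captured easily:

```python
import subprocess
import sys

# Each list element becomes one argv entry -- no shell quoting or
# string substitution is needed when passing arguments this way.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "root1"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # root1
```

In the Python 2 context of the original question, `subprocess.Popen(['python', 'script1.py', i])` passes arguments the same way.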
3,400,144
All, I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected through the emulator. What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update. I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode: ``` while true: if position update generated / received open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554" ``` This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates). edit: An alternative method, perhaps cleaner, is to use the telnetlib library of Python. ``` import telnetlib tn = telnetlib.Telnet("localhost",5554) while True: if position update generated / received tn.write("geo fix longitude latitude altitude\r\n") ```
2010/08/03
[ "https://Stackoverflow.com/questions/3400144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/155392/" ]
The response you're seeing is an empty response, which doesn't necessarily mean there's no metric data available. A few ideas about what might cause this: * Are you using a user access token? If yes, does the user own the page? Is the 'read\_insights' extended permission granted for the user / access token? How about 'offline\_access'? * end\_time should be specified as midnight, Pacific Time. * Valid periods are 86400, 604800, 2592000 (day, week, month) * Does querying the 'page\_fan\_adds' metric yield meaningful results for a given period? While I haven't worked with the insights table, working with Facebook's FQL taught me not to expect error messages or error codes, but to try to follow the documentation (if available) and then experiment with it... As for the date, use the following ruby snippet for midnight, today: ``` Date.new(2010,9,14).to_time.to_i ``` --- I also found the following on the Graph API documentation page: > > **Impersonation** > > > You can impersonate pages administrated by your users by requesting the "manage\_pages" extended permission. > > > Once a user has granted your application the "manage\_pages" permission, the "accounts" connection will yield an additional access\_token property for every page administrated by the current user. These access\_tokens can be used to make calls on behalf of a page. The permissions granted by a user to your application will now also be applicable to their pages. ([source](http://developers.facebook.com/docs/api)) > > > Have you tried requesting this permission and using &metadata=1 in a Graph API query to get the access token for each account?
If you want to know the number of fans a Facebook page has, use something like: ``` https://graph.facebook.com/cocacola ``` The response contains a `fan_count` property.
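The Graph API call above returns JSON; here is a small offline sketch of pulling out the `fan_count` property. The payload below is a made-up sample for illustration, not live API data:

```python
import json

# Hypothetical sample of what https://graph.facebook.com/cocacola might return
sample = '{"name": "Coca-Cola", "fan_count": 1000000}'

# Parse the JSON payload and read the fan count field
data = json.loads(sample)
print(data["fan_count"])  # 1000000
```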
3,400,144
All, I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected through the emulator. What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update. I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode: ``` while true: if position update generated / received open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554" ``` This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates). edit: An alternative method, perhaps cleaner, is to use the telnetlib library of Python. ``` import telnetlib tn = telnetlib.Telnet("localhost",5554) while True: if position update generated / received tn.write("geo fix longitude latitude altitude\r\n") ```
2010/08/03
[ "https://Stackoverflow.com/questions/3400144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/155392/" ]
We have confirmed this bug. This is due to the end\_time having to be aligned with day delimiters in PST in order for the Insights table to return any data. To address this issue, we introduced two custom functions you can use to query the insights table: 1. end\_time\_date() : accepts a DATE in string form (e.g. '2010-08-01') 2. period() : accepts 'lifetime', 'day', 'week' and 'month' For example, you can now query the insights table using: ``` SELECT metric, value FROM insights WHERE object_id = YOUR_APP_ID AND metric = 'application_active_users' AND end_time = end_time_date('2010-09-01') AND period = period('day'); ``` We will document these functions soon; sorry for the inconvenience! P.S. If you don't want to use the end\_time\_date() function, please make sure the end\_time timestamp in your query is aligned with day delimiters in PST. Thanks! Facebook Insights Team
If you want to know the number of fans a Facebook page has, use something like: ``` https://graph.facebook.com/cocacola ``` The response contains a `fan_count` property.
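The day-alignment requirement described above can be illustrated in Python. This sketch hardcodes the Pacific offset (UTC-7 during daylight saving time) purely for demonstration; real code should use a proper timezone library such as `pytz` rather than a fixed offset:

```python
import calendar
import datetime

# Midnight Pacific on 2010-09-01 is 07:00 UTC during daylight saving time
midnight_pacific = datetime.datetime(2010, 9, 1, 7, 0, 0)

# Interpret the datetime as UTC and convert it to a Unix timestamp,
# suitable for use as an end_time value aligned to a PST day boundary
end_time = calendar.timegm(midnight_pacific.timetuple())
print(end_time)  # 1283324400
```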
3,400,144
All, I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected through the emulator. What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update. I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode: ``` while true: if position update generated / received open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554" ``` This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real time simulation, but as it's running on an emulator there are no position updates. Another system is responsible for calculating these position updates). edit: An alternative method, perhaps cleaner, is to use the telnetlib library of Python. ``` import telnetlib tn = telnetlib.Telnet("localhost",5554) while True: if position update generated / received tn.write("geo fix longitude latitude altitude\r\n") ```
2010/08/03
[ "https://Stackoverflow.com/questions/3400144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/155392/" ]
The response you're seeing is an empty response, which doesn't necessarily mean there's no metric data available. A few ideas about what might cause this: * Are you using a user access token? If yes, does the user own the page? Is the 'read\_insights' extended permission granted for the user / access token? How about 'offline\_access'? * end\_time should be specified as midnight, Pacific Time. * Valid periods are 86400, 604800, 2592000 (day, week, month) * Does querying the 'page\_fan\_adds' metric yield meaningful results for a given period? While I haven't worked with the insights table, working with Facebook's FQL taught me not to expect error messages or error codes, but to try to follow the documentation (if available) and then experiment with it... As for the date, use the following ruby snippet for midnight, today: ``` Date.new(2010,9,14).to_time.to_i ``` --- I also found the following on the Graph API documentation page: > > **Impersonation** > > > You can impersonate pages administrated by your users by requesting the "manage\_pages" extended permission. > > > Once a user has granted your application the "manage\_pages" permission, the "accounts" connection will yield an additional access\_token property for every page administrated by the current user. These access\_tokens can be used to make calls on behalf of a page. The permissions granted by a user to your application will now also be applicable to their pages. ([source](http://developers.facebook.com/docs/api)) > > > Have you tried requesting this permission and using &metadata=1 in a Graph API query to get the access token for each account?
I'm not sure the date is correct. Do you really want the date as an integer? Usually SQL takes dates in db-format, so to format it you'd use: ``` Date.new(2010,9,14).to_s(:db) (Time.now - 5.days).to_s(:db) # or even better: 5.days.ago.to_s(:db) ```
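For readers working in Python rather than Rails, a rough equivalent of the db-format snippet above, assuming the desired output is a 'YYYY-MM-DD HH:MM:SS' string:

```python
import datetime

# Same 'db' format that Rails produces with to_s(:db)
date = datetime.datetime(2010, 9, 14)
print(date.strftime("%Y-%m-%d %H:%M:%S"))  # 2010-09-14 00:00:00

# Rough equivalent of 5.days.ago, relative to the current time
five_days_ago = datetime.datetime.now() - datetime.timedelta(days=5)
print(five_days_ago.strftime("%Y-%m-%d %H:%M:%S"))
```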