18,950,409
I have a two dimensional associative array (dictionary). I'd like to iterate over the first dimension using a for loop, and extract the second dimension's dictionary at each iteration. For example: ``` #!/usr/bin/python doubleDict = dict() doubleDict['one'] = dict() doubleDict['one']['type'] = 'animal' doubleDict['one']['name'] = 'joe' doubleDict['one']['species'] = 'monkey' doubleDict['two'] = dict() doubleDict['two']['type'] = 'plant' doubleDict['two']['name'] = 'moe' doubleDict['two']['species'] = 'oak' for thing in doubleDict: print thing print thing['type'] print thing['name'] print thing['species'] ``` My desired output: ``` {'type': 'plant', 'name': 'moe', 'species': 'oak'} plant moe oak ``` My actual output: ``` two Traceback (most recent call last): File "./test.py", line 16, in <module> print thing['type'] TypeError: string indices must be integers, not str ``` What am I missing? PS I'm aware I can do a `for k,v in doubleDict`, but I'm *really* trying to avoid having to do a long `if k == 'type': ... elif k == 'name': ...` statement. I'm looking to be able to call `thing['type']` directly.
2013/09/23
[ "https://Stackoverflow.com/questions/18950409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1174102/" ]
For loops over `dict`s iterate over the keys, not the values. To iterate over the values do: ``` for thing in doubleDict.itervalues(): print thing print thing['type'] print thing['name'] print thing['species'] ``` I used your exact same code, but added `.itervalues()` at the end, which means: "I want to iterate over the values".
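For reference, the same fix on Python 3 - a sketch assuming the `doubleDict` from the question - uses `values()`, since `itervalues()` was removed and `print` became a function:

```py
# Python 3: values() returns a view over the dict's values
for thing in doubleDict.values():
    print(thing)
    print(thing['type'])
    print(thing['name'])
    print(thing['species'])
```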
These all work... but looking at your code, why not use a named tuple instead? ``` from collections import namedtuple LivingThing = namedtuple('LivingThing', 'type name species') doubleDict['one'] = LivingThing(type='animal', name='joe', species='monkey') doubleDict['one'].name doubleDict['one']._asdict()['name'] ```
18,950,409
I have a two dimensional associative array (dictionary). I'd like to iterate over the first dimension using a for loop, and extract the second dimension's dictionary at each iteration. For example: ``` #!/usr/bin/python doubleDict = dict() doubleDict['one'] = dict() doubleDict['one']['type'] = 'animal' doubleDict['one']['name'] = 'joe' doubleDict['one']['species'] = 'monkey' doubleDict['two'] = dict() doubleDict['two']['type'] = 'plant' doubleDict['two']['name'] = 'moe' doubleDict['two']['species'] = 'oak' for thing in doubleDict: print thing print thing['type'] print thing['name'] print thing['species'] ``` My desired output: ``` {'type': 'plant', 'name': 'moe', 'species': 'oak'} plant moe oak ``` My actual output: ``` two Traceback (most recent call last): File "./test.py", line 16, in <module> print thing['type'] TypeError: string indices must be integers, not str ``` What am I missing? PS I'm aware I can do a `for k,v in doubleDict`, but I'm *really* trying to avoid having to do a long `if k == 'type': ... elif k == 'name': ...` statement. I'm looking to be able to call `thing['type']` directly.
2013/09/23
[ "https://Stackoverflow.com/questions/18950409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1174102/" ]
A generic way to get to the nested results: ``` for thing in doubleDict.values(): print(thing) for vals in thing.values(): print(vals) ``` or ``` for thing in doubleDict.values(): print(thing) print('\n'.join(thing.values())) ```
You could use @Haidro's answer but make it more generic with a double loop: ``` for key1 in doubleDict: print(doubleDict[key1]) for key2 in doubleDict[key1]: print(doubleDict[key1][key2]) ``` Output: ``` {'type': 'plant', 'name': 'moe', 'species': 'oak'} plant moe oak {'type': 'animal', 'name': 'joe', 'species': 'monkey'} animal joe monkey ```
18,950,409
I have a two dimensional associative array (dictionary). I'd like to iterate over the first dimension using a for loop, and extract the second dimension's dictionary at each iteration. For example: ``` #!/usr/bin/python doubleDict = dict() doubleDict['one'] = dict() doubleDict['one']['type'] = 'animal' doubleDict['one']['name'] = 'joe' doubleDict['one']['species'] = 'monkey' doubleDict['two'] = dict() doubleDict['two']['type'] = 'plant' doubleDict['two']['name'] = 'moe' doubleDict['two']['species'] = 'oak' for thing in doubleDict: print thing print thing['type'] print thing['name'] print thing['species'] ``` My desired output: ``` {'type': 'plant', 'name': 'moe', 'species': 'oak'} plant moe oak ``` My actual output: ``` two Traceback (most recent call last): File "./test.py", line 16, in <module> print thing['type'] TypeError: string indices must be integers, not str ``` What am I missing? PS I'm aware I can do a `for k,v in doubleDict`, but I'm *really* trying to avoid having to do a long `if k == 'type': ... elif k == 'name': ...` statement. I'm looking to be able to call `thing['type']` directly.
2013/09/23
[ "https://Stackoverflow.com/questions/18950409", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1174102/" ]
A generic way to get to the nested results: ``` for thing in doubleDict.values(): print(thing) for vals in thing.values(): print(vals) ``` or ``` for thing in doubleDict.values(): print(thing) print('\n'.join(thing.values())) ```
These all work... but looking at your code, why not use a named tuple instead? ``` from collections import namedtuple LivingThing = namedtuple('LivingThing', 'type name species') doubleDict['one'] = LivingThing(type='animal', name='joe', species='monkey') doubleDict['one'].name doubleDict['one']._asdict()['name'] ```
48,000,225
I have two dataframes as follows: `leader`: ```none 0 11 1 8 2 5 3 9 4 8 5 6 [6065 rows x 2 columns] ``` `DatasetLabel`: ```none 0 1 .... 7 8 9 10 11 12 0 A J .... 1 2 5 NaN NaN NaN 1 B K .... 3 4 NaN NaN NaN NaN [4095 rows x 14 columns] ``` Columns 0 to 6 of `DatasetLabel` contain information about the data, and columns 7 to 12 are indexes that refer to the first column of the `leader` DataFrame. I want to create a dataset where, instead of the indexes in the `DatasetLabel` dataframe, I have the value of each index from the `leader` dataframe, which is `leader.iloc[index, 1]`. How can I do it using Python features? The output should look like: `DatasetLabel`: ```none 0 1 .... 7 8 9 10 11 12 0 A J .... 8 5 6 NaN NaN NaN 1 B K .... 9 8 NaN NaN NaN NaN ``` I have come up with the following, but I get an error: ```py for column in DatasetLabel.ix[:, 8:13]: DatasetLabel[DatasetLabel[column].notnull()] = leader.iloc[DatasetLabel[DatasetLabel[column].notnull()][column].values, 1] ``` Error: ```none ValueError: Must have equal len keys and value when setting with an iterable ```
2017/12/28
[ "https://Stackoverflow.com/questions/48000225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3806649/" ]
You can use `apply` to index into `leader` and exchange values with `DatasetLabel`, although it's not very pretty. One issue is that Pandas won't let us index with `NaN`. Converting to `str` provides a workaround. But that creates a second issue, namely, column `9` is of type `float` (because `NaN` is `float`), so `5` becomes `5.0`. Once it's a string, that's `"5.0"`, which will fail to match the index values in `leader`. We can remove the `.0`, and then this solution will work - but it's a bit of a hack. With `DatasetLabel` as: ``` Unnamed:0 0 1 7 8 9 10 11 12 0 0 A J 1 2 5.0 NaN NaN NaN 1 1 B K 3 4 NaN NaN NaN NaN ``` And `leader` as: ``` 0 1 0 0 11 1 1 8 2 2 5 3 3 9 4 4 8 5 5 6 ``` Then: ``` cols = ["7","8","9","10","11","12"] updated = DatasetLabel[cols].apply( lambda x: leader.loc[x.astype(str).str.split(".").str[0], 1].values, axis=1) updated 7 8 9 10 11 12 0 8.0 5.0 6.0 NaN NaN NaN 1 9.0 8.0 NaN NaN NaN NaN ``` Now we can `concat` the unmodified columns (which we'll call `original`) with `updated`: ``` original_cols = DatasetLabel.columns[~DatasetLabel.columns.isin(cols)] original = DatasetLabel[original_cols] pd.concat([original, updated], axis=1) ``` Output: ``` Unnamed:0 0 1 7 8 9 10 11 12 0 0 A J 8.0 5.0 6.0 NaN NaN NaN 1 1 B K 9.0 8.0 NaN NaN NaN NaN ``` Note: It may be clearer to use `concat` here, but here's another, cleaner way of merging `original` and `updated`, using `assign`: ``` DatasetLabel.assign(**updated) ```
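As a possible alternative sketch (assuming a plain value lookup is all that's required, with `leader`'s columns labeled `0` and `1` as shown above), `Series.map` sidesteps the string conversion entirely: float cells like `2.0` still match integer keys because they hash and compare equal, and anything missing from the mapping - including `NaN` - simply comes back as `NaN`:

```py
# build a key -> value lookup from leader's two columns
mapping = leader.set_index(0)[1].to_dict()

cols = ["7", "8", "9", "10", "11", "12"]
# map each index column through the lookup, column by column
DatasetLabel[cols] = DatasetLabel[cols].apply(lambda c: c.map(mapping))
```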
The [source code](https://github.com/pandas-dev/pandas/blob/v1.5.2/pandas/core/indexing.py#L1828-L1882) shows that this error occurs when you try to broadcast a list-like object (numpy array, list, set, tuple etc.) to multiple columns or rows but didn't specify the index correctly. Of course, list-like objects don't have custom indices like pandas objects, so it usually causes this error. Solutions to common cases: 1. **You want to assign the same values across multiple columns at once.** In other words, you want to change the values of certain columns using a list-like object whose (a) length doesn't match the number of columns or rows and (b) dtype doesn't match the dtype of the columns they are being assigned to.1 An illustration may make it clearer. If you try to make the transformation below: [![first](https://i.stack.imgur.com/kAyhC.png)](https://i.stack.imgur.com/kAyhC.png) using a code similar to the one below, this error occurs: ```py df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]}) df.loc[:2, ['C','D']] = [100, 200.2, 300] ``` **Solution:** Duplicate the list/array/tuple, transpose it (either using `T` or `zip()`) and assign to the relevant rows/columns.2 ```py df.loc[:2, ['C','D']] = np.tile([100, 200.2, 300], (len(['C','D']), 1)).T # if you don't fancy numpy, use zip() on a list # df.loc[:2, ['C','D']] = list(zip(*[[100, 200.2, 300]]*len(['C','D']))) ``` 2. **You want to assign the same values to multiple rows at once.** If you try to make the following transformation [![second](https://i.stack.imgur.com/p56EI.png)](https://i.stack.imgur.com/p56EI.png) using a code similar to the following: ```py df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]}) df.loc[[0, 1], ['A', 'B', 'C']] = [100, 200.2] ``` **Solution:** To make it work as expected, we must convert the list/array into a Series with the correct index: ```py df.loc[[0, 1], ['A', 'B', 'C']] = pd.Series([100, 200.2], index=[0, 1]) ``` A common sub-case is if the row indices come from using a boolean mask. N.B. This is the case in the OP. In that case, just use the mask to filter `df.index`: ```py msk = df.index < 2 df.loc[msk, ['A', 'B', 'C']] = [100, 200.2] # <--- error df.loc[msk, ['A', 'B', 'C']] = pd.Series([100, 200.2], index=df.index[msk]) # <--- OK ``` 3. **You want to store the same list in some rows of a column.** An illustration of this case is: [![third](https://i.stack.imgur.com/Jk8zJ.png)](https://i.stack.imgur.com/Jk8zJ.png) **Solution:** Explicitly construct a Series with the correct indices. ```py # for the case on the left in the image above df['D'] = pd.Series([[100, 200.2]]*len(df), index=df.index) # latter case df.loc[[1], 'D'] = pd.Series([[100, 200.2]], index=df.index[[1]]) ``` --- 1: Here, we tried to assign a list containing a float to int dtype columns, which contributed to this error being raised. If we tried to assign a list of ints (so that the dtypes match), we'd get a different error: `ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (2,3)` which can also be solved by the same method as above. 2: An error related to this one `ValueError: Must have equal len keys and value when setting with an ndarray` occurs if the object being assigned is a numpy array and there's a shape mismatch. That one is often solved either using `np.tile` or simply transposing the array.
7,007,400
I have a small Python application which uses pyttsx for some text to speech. How it works: simply say whatever is on the clipboard. The program works as expected inside Eclipse, but when run from cmd.exe it only partly works if the text on the clipboard is too large (a few paragraphs). Why? When run from cmd, it prints the statements, but the actual 'talking' doesn't work (if the clipboard text is too large). Here is a snippet of the part of the program which actually does the talking; as can be seen, the 'talking' part is handled inside a thread. ``` def saythread(queue , text , pauselocation, startingPoint): saythread.pauselocation = pauselocation saythread.pause = 0 saythread.engine = pyttsx.init() saythread.pausequeue1 = False def onWord(name, location, length): saythread.pausequeue1 = queue.get(False) saythread.pause = location saythread.pauselocation.append(location) if saythread.pausequeue1 == True : saythread.engine.stop() def onFinishUtterance(name, completed): if completed == True: os._exit(0) def engineRun(): if len(saythread.pauselocation) == 1: rate = saythread.engine.getProperty('rate') print rate saythread.engine.setProperty('rate', rate-30) textMod = text[startingPoint:] saythread.engine.say(text[startingPoint:]) token = saythread.engine.connect("started-word" , onWord ) saythread.engine.connect("finished-utterance" , onFinishUtterance ) saythread.engine.startLoop(True) engineRun() if saythread.pausequeue1 == False: os._exit(1) def runNewThread(wordsToSay, startingPoint): global queue, pauselocation e1 = (queue, wordsToSay, pauselocation, startingPoint) t1 = threading.Thread(target=saythread,args=e1) t1.start() #wordsToSay = CLIPBOARD CONTENTS runNewThread(wordsToSay,0) ``` Thanks. Edit: I have checked that the Python version used is the same, 2.7. The command used to run the program in cmd: `python d:\python\play\speech\speechplay.py`
2011/08/10
[ "https://Stackoverflow.com/questions/7007400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161179/" ]
Check that the problem is not in the code that reads the text from the clipboard. You should check if your Eclipse setup specifies custom environment variables for the project which do not exist outside Eclipse. Especially: * PYTHONPATH (and also additional projects on which your program could depend in your setup) * PATH Use ``` import os print os.environ['PATH'] print os.environ['PYTHONPATH'] ``` at the beginning of your program to compare both settings. Misc stylistic advice: * don't use `os._exit`, prefer `sys.exit` (you should only use `os._exit` in a child process after a call to `os.fork`, which is not available on Windows) * I think a `threading.Event` would be more appropriate than a `queue.Queue` * I'd use a subclass approach for the thread, with methods rather than a function with inner functions For example: ``` import threading import sys import pyttsx class SayThread(threading.Thread): def __init__(self, queue, text, pauselocation, startingPoint, debug=False): threading.Thread.__init__(self) self.queue = queue self.text = text self.pauselocation = pauselocation self.startingPoint = startingPoint self.pause = 0 self.engine = pyttsx.init(debug=debug) self.pausequeue1 = False def run(self): if len(self.pauselocation) == 1: rate = self.engine.getProperty('rate') print rate self.engine.setProperty('rate', rate-30) textMod = self.text[self.startingPoint:] self.engine.say(self.text[self.startingPoint:]) self.engine.connect("started-word", self.onWord ) self.engine.connect("finished-utterance", self.onFinishUtterance ) self.engine.startLoop(True) if self.pausequeue1 == False: sys.exit(1) def onWord(self, name, location, length): self.pausequeue1 = self.queue.get(False) self.pause = location self.pauselocation.append(location) if self.pausequeue1 == True : self.engine.stop() def onFinishUtterance(self, name, completed): if completed == True: sys.exit(0) def runNewThread(wordsToSay, startingPoint): global queue, pauselocation t1 = SayThread(queue, wordsToSay, pauselocation, startingPoint) t1.start() #wordsToSay = CLIPBOARD CONTENTS runNewThread(wordsToSay,0) ```
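A minimal sketch of the `threading.Event` suggestion above (hypothetical names; `engine` stands in for the pyttsx engine from the surrounding code):

```py
import threading

pause_requested = threading.Event()  # starts in the cleared state

def on_word(name, location, length):
    # non-blocking check, replacing queue.get(False)
    if pause_requested.is_set():
        engine.stop()  # 'engine' is the pyttsx engine, assumed to be in scope

# the controlling thread requests a pause with:
# pause_requested.set()
```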
In fact, Eclipse itself uses a command-line command to start its apps. You should check what command Eclipse is giving to start the program. It might be a bit verbose, but you can start from there and test what is necessary and what isn't. You can find out the command line Eclipse uses by running the program and then selecting the output in the Debug window. Right-click it, select Properties, and you're done. If you don't have a Debug window, you can open it via Window / Show View / (possibly under Other) / Debug.
7,007,400
I have a small Python application which uses pyttsx for some text to speech. How it works: simply say whatever is on the clipboard. The program works as expected inside Eclipse, but when run from cmd.exe it only partly works if the text on the clipboard is too large (a few paragraphs). Why? When run from cmd, it prints the statements, but the actual 'talking' doesn't work (if the clipboard text is too large). Here is a snippet of the part of the program which actually does the talking; as can be seen, the 'talking' part is handled inside a thread. ``` def saythread(queue , text , pauselocation, startingPoint): saythread.pauselocation = pauselocation saythread.pause = 0 saythread.engine = pyttsx.init() saythread.pausequeue1 = False def onWord(name, location, length): saythread.pausequeue1 = queue.get(False) saythread.pause = location saythread.pauselocation.append(location) if saythread.pausequeue1 == True : saythread.engine.stop() def onFinishUtterance(name, completed): if completed == True: os._exit(0) def engineRun(): if len(saythread.pauselocation) == 1: rate = saythread.engine.getProperty('rate') print rate saythread.engine.setProperty('rate', rate-30) textMod = text[startingPoint:] saythread.engine.say(text[startingPoint:]) token = saythread.engine.connect("started-word" , onWord ) saythread.engine.connect("finished-utterance" , onFinishUtterance ) saythread.engine.startLoop(True) engineRun() if saythread.pausequeue1 == False: os._exit(1) def runNewThread(wordsToSay, startingPoint): global queue, pauselocation e1 = (queue, wordsToSay, pauselocation, startingPoint) t1 = threading.Thread(target=saythread,args=e1) t1.start() #wordsToSay = CLIPBOARD CONTENTS runNewThread(wordsToSay,0) ``` Thanks. Edit: I have checked that the Python version used is the same, 2.7. The command used to run the program in cmd: `python d:\python\play\speech\speechplay.py`
2011/08/10
[ "https://Stackoverflow.com/questions/7007400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161179/" ]
In fact, Eclipse itself uses a command-line command to start its apps. You should check what command Eclipse is giving to start the program. It might be a bit verbose, but you can start from there and test what is necessary and what isn't. You can find out the command line Eclipse uses by running the program and then selecting the output in the Debug window. Right-click it, select Properties, and you're done. If you don't have a Debug window, you can open it via Window / Show View / (possibly under Other) / Debug.
Turns out PYTHONPATH wasn't set properly on my system. Edit: turns out PYTHONPATH isn't the problem. I have no idea what the problem is. argh!
7,007,400
I have a small Python application which uses pyttsx for some text to speech. How it works: simply say whatever is on the clipboard. The program works as expected inside Eclipse, but when run from cmd.exe it only partly works if the text on the clipboard is too large (a few paragraphs). Why? When run from cmd, it prints the statements, but the actual 'talking' doesn't work (if the clipboard text is too large). Here is a snippet of the part of the program which actually does the talking; as can be seen, the 'talking' part is handled inside a thread. ``` def saythread(queue , text , pauselocation, startingPoint): saythread.pauselocation = pauselocation saythread.pause = 0 saythread.engine = pyttsx.init() saythread.pausequeue1 = False def onWord(name, location, length): saythread.pausequeue1 = queue.get(False) saythread.pause = location saythread.pauselocation.append(location) if saythread.pausequeue1 == True : saythread.engine.stop() def onFinishUtterance(name, completed): if completed == True: os._exit(0) def engineRun(): if len(saythread.pauselocation) == 1: rate = saythread.engine.getProperty('rate') print rate saythread.engine.setProperty('rate', rate-30) textMod = text[startingPoint:] saythread.engine.say(text[startingPoint:]) token = saythread.engine.connect("started-word" , onWord ) saythread.engine.connect("finished-utterance" , onFinishUtterance ) saythread.engine.startLoop(True) engineRun() if saythread.pausequeue1 == False: os._exit(1) def runNewThread(wordsToSay, startingPoint): global queue, pauselocation e1 = (queue, wordsToSay, pauselocation, startingPoint) t1 = threading.Thread(target=saythread,args=e1) t1.start() #wordsToSay = CLIPBOARD CONTENTS runNewThread(wordsToSay,0) ``` Thanks. Edit: I have checked that the Python version used is the same, 2.7. The command used to run the program in cmd: `python d:\python\play\speech\speechplay.py`
2011/08/10
[ "https://Stackoverflow.com/questions/7007400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161179/" ]
Check that the problem is not in the code that reads the text from the clipboard. You should check if your Eclipse setup specifies custom environment variables for the project which do not exist outside Eclipse. Especially: * PYTHONPATH (and also additional projects on which your program could depend in your setup) * PATH Use ``` import os print os.environ['PATH'] print os.environ['PYTHONPATH'] ``` at the beginning of your program to compare both settings. Misc stylistic advice: * don't use `os._exit`, prefer `sys.exit` (you should only use `os._exit` in a child process after a call to `os.fork`, which is not available on Windows) * I think a `threading.Event` would be more appropriate than a `queue.Queue` * I'd use a subclass approach for the thread, with methods rather than a function with inner functions For example: ``` import threading import sys import pyttsx class SayThread(threading.Thread): def __init__(self, queue, text, pauselocation, startingPoint, debug=False): threading.Thread.__init__(self) self.queue = queue self.text = text self.pauselocation = pauselocation self.startingPoint = startingPoint self.pause = 0 self.engine = pyttsx.init(debug=debug) self.pausequeue1 = False def run(self): if len(self.pauselocation) == 1: rate = self.engine.getProperty('rate') print rate self.engine.setProperty('rate', rate-30) textMod = self.text[self.startingPoint:] self.engine.say(self.text[self.startingPoint:]) self.engine.connect("started-word", self.onWord ) self.engine.connect("finished-utterance", self.onFinishUtterance ) self.engine.startLoop(True) if self.pausequeue1 == False: sys.exit(1) def onWord(self, name, location, length): self.pausequeue1 = self.queue.get(False) self.pause = location self.pauselocation.append(location) if self.pausequeue1 == True : self.engine.stop() def onFinishUtterance(self, name, completed): if completed == True: sys.exit(0) def runNewThread(wordsToSay, startingPoint): global queue, pauselocation t1 = SayThread(queue, wordsToSay, pauselocation, startingPoint) t1.start() #wordsToSay = CLIPBOARD CONTENTS runNewThread(wordsToSay,0) ```
Turns out PYTHONPATH wasn't set properly on my system. Edit: turns out PYTHONPATH isn't the problem. I have no idea what the problem is. argh!
32,678,690
How do I install pip for Python 3.4, when my Pi has both Python 3.2 and Python 3.4? When I used `sudo install python3-pip`, it was only for Python 3.2, but I want to install pip for Python 3.4.
2015/09/20
[ "https://Stackoverflow.com/questions/32678690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5089211/" ]
Python 3.4 has `pip` included, see [*What's New in Python 3.4*](https://docs.python.org/3/whatsnew/3.4.html#whatsnew-pep-453). Just execute: ``` python3.4 -m ensurepip ``` to install it if it is missing for you. See the [`ensurepip` module documentation](https://docs.python.org/3/library/ensurepip.html) for further details.
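To check which interpreter the resulting pip belongs to, and to upgrade it, you can invoke pip through the interpreter itself:

```
python3.4 -m ensurepip
python3.4 -m pip --version
python3.4 -m pip install --upgrade pip
```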
You can go to your Python 3.4 `scripts` directory and run its pip there: `../python3.4/scripts`
32,678,690
How do I install pip for Python 3.4, when my Pi has both Python 3.2 and Python 3.4? When I used `sudo install python3-pip`, it was only for Python 3.2, but I want to install pip for Python 3.4.
2015/09/20
[ "https://Stackoverflow.com/questions/32678690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5089211/" ]
Python 3.4 has `pip` included, see [*What's New in Python 3.4*](https://docs.python.org/3/whatsnew/3.4.html#whatsnew-pep-453). Just execute: ``` python3.4 -m ensurepip ``` to install it if it is missing for you. See the [`ensurepip` module documentation](https://docs.python.org/3/library/ensurepip.html) for further details.
You should compile Python 3.4 and use venv for a Python 3 environment: 1. Check that you have installed the required dependencies: ``` sudo apt-get install build-essential sudo apt-get install libc6-dev libreadline-dev libz-dev libncursesw5-dev libssl-dev libgdbm-dev libsqlite3-dev libbz2-dev liblzma-dev tk-dev ``` 2. Download and compile Python 3.4.3. You should not `sudo make install` it because we don't need it system-wide: ``` wget https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz tar -xvf Python-3.4.3.tgz cd Python-3.4.3 ./configure && make ``` This may take a while on a Pi. 3. While still in the current folder, create the Python environment: ``` mkdir -p ~/.virtualenvs ./python -m venv ~/.virtualenvs/py34 ``` 4. Launch your virtual environment: ``` source ~/.virtualenvs/py34/bin/activate ``` Now you have Python 3.4 and pip inside it. Try: ``` pip install bpython ``` To exit the virtual environment use: ``` deactivate ```
7,921,973
I'm writing an installer using py2exe which needs to run as admin to have permission to perform various file operations. I've modified some sample code from the `user_access_controls` directory that comes with py2exe to create the setup file. Creating/running the generated exe works fine when I run it on my own computer. However, when I try to run the exe on a computer that doesn't have Python installed, I get an error saying that the imported modules (shutil and os in this case) do not exist. It was my impression that py2exe automatically wraps all the file dependencies into the exe, but I guess that is not the case. py2exe does generate a zip file called library that contains all the Python modules, but apparently they are not used by the generated exe. Basically my question is: how do I get the imports to be included in the exe generated by py2exe? Perhaps modifications need to be made to my setup.py file - the code for this is as follows: ``` from distutils.core import setup import py2exe # The targets to build # create a target that says nothing about UAC - On Python 2.6+, this # should be identical to "asInvoker" below. However, for 2.5 and # earlier it will force the app into compatibility mode (as no # manifest will exist at all in the target.) t1 = dict(script="findpath.py", dest_base="findpath", uac_info="requireAdministrator") console = [t1] # hack to make windows copies of them all too, but # with '_w' on the tail of the executable. windows = [{'script': "findpath.py", 'uac_info': "requireAdministrator", },] setup( version = "0.5.0", description = "py2exe user-access-control", name = "py2exe samples", # targets to build windows = windows, console = console, ) ```
2011/10/27
[ "https://Stackoverflow.com/questions/7921973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/971550/" ]
Try setting `options={'py2exe': {'bundle_files': 1}}` and `zipfile = None` in the setup section. py2exe will then make a single .exe file without external dependencies. Example: ``` from distutils.core import setup import py2exe setup( console=['watt.py'], options={'py2exe': {'bundle_files': 1}}, zipfile = None ) ```
I rewrote your setup script for you. This should work: ``` from distutils.core import setup import py2exe # The targets to build # create a target that says nothing about UAC - On Python 2.6+, this # should be identical to "asInvoker" below. However, for 2.5 and # earlier it will force the app into compatibility mode (as no # manifest will exist at all in the target.) t1 = dict(script="findpath.py", dest_base="findpath", uac_info="requireAdministrator") console = [t1] # hack to make windows copies of them all too, but # with '_w' on the tail of the executable. windows = [{'script': "findpath.py", 'uac_info': "requireAdministrator", },] setup( version = "0.5.0", description = "py2exe user-access-control", name = "py2exe samples", # targets to build windows = windows, console = console, # the 'options' key is what you failed to include; it instructs py2exe to include these modules explicitly options={"py2exe": {"includes": ["sip","os","shutil"]} } ) ```
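For completeness, with this `setup.py` the build is then run with the standard py2exe command, which writes the executable into the `dist` directory:

```
python setup.py py2exe
```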
71,853,039
In short, how do I get this: [![enter image description here](https://i.stack.imgur.com/JBBws.jpg)](https://i.stack.imgur.com/JBBws.jpg) From this: ```py def fiblike(ls, n): store = [] for i in range(n): a = ls.pop(0) ls.append(sum(ls)+a) store.append(a) return store ``` With all the indentation guide and code highlighting. I have written hundreds of Python scripts and I need to convert all of them to images... I have seen this: ```py import Image import ImageDraw import ImageFont def getSize(txt, font): testImg = Image.new('RGB', (1, 1)) testDraw = ImageDraw.Draw(testImg) return testDraw.textsize(txt, font) if __name__ == '__main__': fontname = "Arial.ttf" fontsize = 11 text = "example@gmail.com" colorText = "black" colorOutline = "red" colorBackground = "white" font = ImageFont.truetype(fontname, fontsize) width, height = getSize(text, font) img = Image.new('RGB', (width+4, height+4), colorBackground) d = ImageDraw.Draw(img) d.text((2, height/2), text, fill=colorText, font=font) d.rectangle((0, 0, width+3, height+3), outline=colorOutline) img.save("D:/image.png") ``` from [here](https://www.codegrepper.com/code-examples/python/how+to+convert+text+file+to+image+in+python) But it does not do code highlighting and I want either a `numpy` or `cv2` based solution. How can I do it?
2022/04/13
[ "https://Stackoverflow.com/questions/71853039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
* Get the pair token balance of the contracts: ``` web3.eth.contract(address=token_address, abi=abi).functions.balanceOf(contract_address).call() ``` * Then get the current price of each token / USDT by calling the function `slot0` in the pools tokenA/USDT & tokenB/USDT: ``` slot0 = contract.functions.slot0().call() sqrtPriceCurrent = slot0[0] / (1 << 96) priceCurrent = sqrtPriceCurrent ** 2 decimal_diff = USDT_decimal - TOKEN_A_decimal token_price = 10**(-decimal_diff)/(priceCurrent) if token0_address == USDT_address else priceCurrent/(10**decimal_diff) ``` * Finally, TVL = sum(token_balance * token_price). Remember: check the price from a big pool.
No offense, but you are taking the hard way, which needs `TickBitmap` to get the next initialized tick (remember, not all ticks are initialized unless necessary). Alternatively, the easy way to get a pool's TVL is to query Uniswap V3's [subgraph](https://thegraph.com/hosted-service/subgraph/ianlapham/uniswap-v3-subgraph?selected=playground), like: ``` { pool(id: "0x4e68ccd3e89f51c3074ca5072bbac773960dfa36") { id token0 {symbol} totalValueLockedToken0 token1 {symbol} totalValueLockedToken1 } } ``` (for some reason it doesn't show a result if you use a checksum address) or ``` { pools(first: 5) { id token0 {symbol} totalValueLockedToken0 token1 {symbol} totalValueLockedToken1 } } ```
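A sketch of running that query from Python with `requests` - the hosted-service API endpoint below is an assumption derived from the subgraph name in the link above:

```py
import requests

# assumed hosted-service endpoint for the subgraph linked above
URL = "https://api.thegraph.com/subgraphs/name/ianlapham/uniswap-v3-subgraph"

query = """
{
  pool(id: "0x4e68ccd3e89f51c3074ca5072bbac773960dfa36") {
    id
    token0 { symbol }
    totalValueLockedToken0
    token1 { symbol }
    totalValueLockedToken1
  }
}
"""

resp = requests.post(URL, json={"query": query})
resp.raise_for_status()
print(resp.json()["data"]["pool"])
```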
60,325,327
I wrote an app in python3.7.5 that connects to RabbitMQ: ### Using Ubuntu as the docker-machine I am running rabbitmq with docker: `docker run --name rabbitmq -p 5671:5671 -p 5672:5672 -p 15672:15672 --hostname rabbitmq rabbitmq:3.6.6-management` TEST: * My Python app connects to it via 127.0.0.1:5672 * Expected: connects and works * Actual: connects and works ### I put the app inside docker and build and run ``` --build-arg ENVIRONMENT_NAME=develop -t pdf-svc-image:latest . && docker run -P --env ENVIRONMENT_NAME=local --name html-to-pdf -v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf --privileged --network host pdf-svc-image:latest bash ``` (This command line was created by PyCharm) ### When running this code (inside the docker), I get an exception ``` return await aio_pika.connect_robust( "amqp://guest:guest@{host}".format(host=consts.MESSAGE_QUEUE_HOST) ) ``` * [Errno 111] Connect call failed ('127.0.0.1', 5672) * [Errno 99] Cannot assign requested address Help?
2020/02/20
[ "https://Stackoverflow.com/questions/60325327", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1125913/" ]
According to <https://docs.docker.com/network/host/>, > Note: Given that the container does not have its own IP-address when using host mode networking, port-mapping does not take effect, and the -p, --publish, -P, and --publish-all options are ignored, producing a warning instead: I am not sure this is your case. You could log in to the container and run `ping` or `nslookup` to check the network connection.
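For example, a quick connectivity check from inside the app container (the container name is taken from the question):

```
docker exec -it html-to-pdf bash
# then, inside the container:
ping -c 3 127.0.0.1
python -c "import socket; socket.create_connection(('127.0.0.1', 5672), timeout=5); print('ok')"
```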
RabbitMQ container ``` docker run --name rabbitmq \ -p 5671:5671 -p 5672:5672 -p 15672:15672 \ --hostname rabbitmq \ --network host \ # <-- Add this line; now both containers see each other rabbitmq:3.6.6-management ``` App container ``` docker run \ -P \ --env ENVIRONMENT_NAME=local \ --name html-to-pdf \ -v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf \ --privileged \ --network host \ pdf-svc-image:latest bash ``` Then in your code you need to set the variable `host = rabbitmq`, not 127.0.0.1.
66,169,625
I have two CSV files: **File 1** ``` Id, 1st, 2nd 1, first, row 2, second, row ``` **File 2** ``` Id, 1st, 2nd 1, first, row 2, second, line 3, third, row ``` I am just starting with Python and need to write some code which can diff these files based on primary columns - in this case, the first column, "Id". The output should be a delta file which identifies the rows that have changed in the second file: **Output delta file** ``` 2, second, line 3, third, row ```
2021/02/12
[ "https://Stackoverflow.com/questions/66169625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15196604/" ]
I suggest you load both CSV files as Pandas DataFrames, and then use an outer `merge` with an indicator to know which rows changed in the second file. Then, you use `query` to get only the rows that changed in the second file, and you drop the indicator column (`_merge`). ```py import pandas as pd df1 = pd.read_csv("FILENAME_1.csv") df2 = pd.read_csv("FILENAME_2.csv") merged = pd.merge(df1, df2, how="outer", indicator=True) diff = merged.query("_merge == 'right_only'").drop("_merge", axis="columns") ``` For further details on finding differences in Pandas DataFrames, read [this](https://stackoverflow.com/questions/48647534/python-pandas-find-difference-between-two-data-frames) other question.
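To write the delta out as a file (a one-line sketch; the output filename is an assumption):

```py
diff.to_csv("delta.csv", index=False)
```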
I'd also use pandas, as Enrico suggested, for anything more complex than your example. But if you want to do it in pure Python, you can convert your rows into sets and compute a set difference: ```py import csv from io import StringIO data1 = """Id, 1st, 2nd 1, first, row 2, second, row""" data2 = """Id, 1st, 2nd 1, first, row 2, second, line 3, third, row""" s1 = {tuple(row) for row in csv.reader(StringIO(data1))} s2 = {tuple(row) for row in csv.reader(StringIO(data2))} print(s2 - s1) ``` Output: ``` {('2', ' second', ' line'), ('3', ' third', ' row')} ``` Note that in your example you are not actually diffing based on your primary column only, but on the entire row. If you really want to only consider the `Id` column, you can do: ```py d1 = {row[0]: row[1:] for row in csv.reader(StringIO(data1))} d2 = {row[0]: row[1:] for row in csv.reader(StringIO(data2))} diff = {k: d2[k] for k in set(d2) - set(d1)} print(diff) ``` Output: ``` {'3': [' third', ' row']} ```
60,532,107
Trying to find out the correct number of parallel processes to run with [python multiprocessing](https://docs.python.org/3.6/library/multiprocessing.html). Scripts below are run on an 8-core, 32 GB (Ubuntu 18.04) machine. (There were only system processes and basic user processes running while the below was tested.) Tested `multiprocessing.Pool` and `apply_async` with the following: ``` from multiprocessing import current_process, Pool, cpu_count from datetime import datetime import time num_processes = 1 # vary this print(f"Starting at {datetime.now()}") start = time.perf_counter() print(f"# CPUs = {cpu_count()}") # 8 num_procs = 5 * cpu_count() # 40 def cpu_heavy_fn(): s = time.perf_counter() print(f"{datetime.now()}: {current_process().name}") x = 1 for i in range(1, int(1e7)): x = x * i x = x / i t_taken = round(time.perf_counter() - s, 2) return t_taken, current_process().name pool = Pool(processes=num_processes) multiple_results = [pool.apply_async(cpu_heavy_fn, ()) for i in range(num_procs)] results = [res.get() for res in multiple_results] for r in results: print(r[0], r[1]) print(f"Done at {datetime.now()}") print(f"Time taken = {time.perf_counter() - start}s") ``` Here are the results: ``` num_processes total_time_taken 1 28.25 2 14.28 3 10.2 4 7.35 5 7.89 6 8.03 7 8.41 8 8.72 9 8.75 16 8.7 40 9.53 ``` The following make sense to me: * Running one process at a time takes about 0.7 seconds for each process, so running 40 should take about 28s, which agrees with what we observe above. * Running 2 processes at a time should halve the time, and this is observed above (~14s). * Running 4 processes at a time should further halve the time, and this is observed above (~7s). * Increasing parallelism to more than the number of cores (8) should degrade performance (due to CPU contention), and this is observed (sort of). What doesn't make sense is: * Why is running 8 in parallel not twice as fast as running 4 in parallel, i.e. why is it not ~3.5s? * Why is running 5 to 8 in parallel at a time worse than running 4 at a time? There are 8 cores, but still, why is the overall run time worse? (When running 8 in parallel, `htop` showed all CPUs at near 100% utilization. When running 4 in parallel, only 4 of them were at 100%, which makes sense.)
2020/03/04
[ "https://Stackoverflow.com/questions/60532107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1333610/" ]
> > **Q** : *"**Why** is running 5 to 8 in parallel at a time **worse than running 4** at a time?"* > > > Well, there are several reasons and we will start from a static, easiest observable one : Since the **silicon design** ( for which they used a few hardware tricks ) **does not scale** beyond the 4. So **the last** [Amdahl's Law](https://stackoverflow.com/revisions/18374629/3) explained & promoted speedup from just `+1` upscaled count of *processors* is 4 and any next +1 will not upscale the performance in that same way observed in the { 2, 3, 4 }-case : This `lstopo` CPU-topology map helps to start to decode **WHY** ( here for 4-cores, but the logic is the same as for your 8-core silicon - run `lstopo` on your device to see more details in vivo ) : ``` ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ Machine (31876MB) │ │ │ │ ┌────────────────────────────────────────────────────────────┐ ┌───────────────────────────┐ │ │ │ Package P#0 │ ├┤╶─┬─────┼┤╶───────┤ PCI 10ae:1F44 │ │ │ │ │ │ │ │ │ │ │ ┌────────────────────────────────────────────────────────┐ │ │ │ ┌────────────┐ ┌───────┐ │ │ │ │ │ L3 (8192KB) │ │ │ │ │ renderD128 │ │ card0 │ │ │ │ │ └────────────────────────────────────────────────────────┘ │ │ │ └────────────┘ └───────┘ │ │ │ │ │ │ │ │ │ │ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │ ┌────────────┐ │ │ │ │ │ L2 (2048KB) │ │ L2 (2048KB) │ │ │ │ │ controlD64 │ │ │ │ │ └──────────────────────────┘ └──────────────────────────┘ │ │ │ └────────────┘ │ │ │ │ │ │ └───────────────────────────┘ │ │ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │ │ │ │ L1i (64KB) │ │ L1i (64KB) │ │ │ ┌───────────────┐ │ │ │ └──────────────────────────┘ └──────────────────────────┘ │ ├─────┼┤╶───────┤ PCI 10bc:8268 │ │ │ │ │ │ │ │ │ │ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │ ┌────────┐ │ │ │ │ │ L1d (16KB) ││ L1d (16KB) │ │ L1d (16KB) ││ L1d (16KB) │ │ │ │ │ enp2s0 │ │ │ │ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └────────┘ │ │ │ │ │ │ └───────────────┘ │ │ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │ │ │ │ Core P#0 ││ Core P#1 │ │ Core P#2 ││ Core P#3 │ │ │ ┌──────────────────┐ │ │ │ │ ││ │ │ ││ │ │ ├─────┤ PCI 1002:4790 │ │ │ │ │ ┌────────┐ ││ ┌────────┐ │ │ ┌────────┐ ││ ┌────────┐ │ │ │ │ │ │ │ │ │ │ PU P#0 │ ││ │ PU P#1 │ │ │ │ PU P#2 │ ││ │ PU P#3 │ │ │ │ │ ┌─────┐ ┌─────┐ │ │ │ │ │ └────────┘ ││ └────────┘ │ │ └────────┘ ││ └────────┘ │ │ │ │ │ sr0 │ │ sda │ │ │ │ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └─────┘ └─────┘ │ │ │ └────────────────────────────────────────────────────────────┘ │ └──────────────────┘ │ │ │ │ │ │ ┌───────────────┐ │ │ └─────┤ PCI 1002:479c │ │ │ └───────────────┘ │ └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ ``` A closer look, like the one from a call to `hwloc`-tool: **`lstopo-no-graphics -.ascii`**, shows **where mutual processing independence ends** - here at a level of ***shared `L1`-instruction-cache** ( the `L3` one is shared either, yet at the top of the hierarchy and at such a size that bothers for large problems solvers only, not our case )* --- Next comes a worse observable reason *WHY even worse* on 8-processes : ---------------------------------------------------------------------- > > **Q** : *"Why does running 8 in parallel not twice as fast as running 4 in parallel i.e. 
why is it not **`~3.5s`**?"* > > > Because of **thermal management**. [![enter image description here](https://i.stack.imgur.com/xCqqv.jpg)](https://i.stack.imgur.com/xCqqv.jpg) The more work is loaded onto CPU-cores, the more heat is produced from driving electrons on **`~3.5+ GHz`** through the silicon maze. Thermal constraints are those, that prevent any further performance boost in CPU computing powers, simply because of the Laws of physics, as we know them, do not permit to grow beyond some material-defined limits. **So what comes next?** The CPU-design has circumvented not the physics ( that is impossible ), but us, the users - by promising us a CPU chip having **`~3.5+ GHz`** ( but in fact, the CPU can use this clock-rate only for small amounts of time - until the dissipated heat does not get the silicon close to the thermal-limits - and then, the CPU will decide to either **reduce its own clock-rate** as an overheating defensive step ( this reduces the performance, doesn't it? ) or **some CPU-micro-architectures may hop** ( move a flow of processing ) onto another, free, thus cooler, CPU-core ( which keeps a promise of higher clock-rate ***there** ( at least for some small amount of time )* yet also reduces the performance, as the hop does not occur in zero-time and does not happen at zero-costs ( cache-losses, re-fetches etc ) This picture shows a snapshot of the case of core-hopping - cores `0-19` got too hot and are under the Thermal Throttling cap, while cores **`20-39`** can ( at least for now ) run at full speed: [![enter image description here](https://i.stack.imgur.com/nqJt4.png)](https://i.stack.imgur.com/nqJt4.png) --- The Result? ----------- Both the thermal-constraints ( diving CPU into a pool of liquid nitrogen was demonstrated for a "popular" magazine show, yet is not a reasonable option for any sustainable computing, as the mechanical stress from going from deep frozen state into a **`6+ GHz`** clock-rate steam-forming super-heater cracks the body of the CPU and will result in CPU-death from cracks and mechanical fatigue in but a few workload episodes - so a no-go zone, due to **negative ROI** for any serious project ). Good cooling and right-sizing of the pool-of-workers, based on in-vivo pre-testing is the only sure bet here. Other architecture : [![enter image description here](https://i.stack.imgur.com/7DtTQ.png)](https://i.stack.imgur.com/7DtTQ.png)
The most likely cause is that you are running the program on a CPU that uses [simultaneous multithreading (SMT)](https://en.wikipedia.org/wiki/Simultaneous_multithreading), better known as [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) on Intel processors. To quote the wiki, *for each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible.* That's what's happening here. Your OS says 8 cores, but in truth it's 4 cores with SMT. The task is clearly CPU-bound, so any increase beyond the **physical** number of cores does not bring any benefit, only the overhead cost of multiprocessing. That's why you see an almost linear increase in performance until you reach the (physical!) max number of cores (4), and then a decrease when the cores need to be shared for this very CPU-intensive task.
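You can verify this from Python itself - a sketch assuming the third-party `psutil` package is installed:

```py
import multiprocessing
import psutil

print(multiprocessing.cpu_count())      # logical cores, e.g. 8
print(psutil.cpu_count(logical=False))  # physical cores, e.g. 4 with SMT
```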
73,171,968
I'm trying to make a form where JavaScript handles the validation. After JavaScript confirms that the user followed the rules correctly, the JavaScript file collects the data typed by the user and sends it to Python (with the help of ajax). From the Python file, I want it to recognize the data and finally redirect to a new page. Now, I am having an issue: after Python recognizes the previously shared data, `return redirect()` is not working. Strangely, even though I tried `return redirect(url_for())`, I got the same result. **The browser does not update to the new page.** ***Inside templates folder >>*** **educateForm.html** ``` <!DOCTYPE html> <html lang="en"> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.5.0/css/all.css" integrity="sha384-B4dIYHKNBt8Bc12p+WXckhzcICo0wtJAoU8YZTY5qE0Id1GSseTk6S+L3BlXeVIU" crossorigin="anonymous"> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous"> <!--Icon link--> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.5.0/font/bootstrap-icons.css"> <title>Educate Form</title> </head> <body> <div class="container-fluid padding" id="fullForm1"> <div class="row padding"> <div class="col-md-12 col-lg-12"> <div class="centerForm" style="text-align:left;"> <h2>Contact information: </h2> <!--Form--> <form name='registration' action="/educateForm" method="post"><!--method="post" onsubmit="return formValidation()"--> <!--onsubmit="return false"--> <div class="row"> <div class="col-md-5"> <label for="FName" class="form-label">First name</label> <input type="text" name="FName" placeholder="Mark" id="FName" class="form-control" required="required" autocomplete="off"/> </div> <div class="col-md-5"> <label for="LName" class="form-label">Last name</label> <input type="text" name="LName" placeholder="Smith" id="LName" class="form-control" required="required" autocomplete="off"/> </div> </div> <div class="row"> <div class="col-md-5"> <label for="email" class="form-label">Email address</label> <input type="email" name="email" id="email" placeholder="mark@smith.com" class="form-control" aria-describedby="emailHelp" required="required" autocomplete="off"> </div> </div> <!--DISPLAY NONE - STYLE--> <div class="row" style="display: none;"> <div class="col-md-10"> <label for="skillsLabel" class="form-label">What are your skills?</label> <textarea class="form-control" name="skillsText" id="skillsText" style="height: 100px" required="required" autocomplete="off"></textarea> </div> </div> <div id="buttonContainer"> <input id="orderButton" type="submit" name="submit" class="btn btn-primary" value="NEXT" onClick="return formValidation();" /> <input id="resetButton" type="reset" name="reset" class="btn btn-outline-secondary" value="Clear Form" onClick="return confirmreset()" /> </div> </form> </div> </div> </div> </div> <!--Pagination--> <ul class="pagination justify-content-center" id="pagination"> <li class="page-item disabled"> <a class="page-link">Previous</a> </li> <li class="page-item active"><a class="page-link" href="/educateForm">1</a></li> <li class="page-item"><a class="page-link" href="/educateForm2">2</a></li> <li class="page-item"> <a class="page-link" href="/educateForm2">Next</a> </li> </ul> <!--<script type="text/javascript" 
src="http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js"></script>--> <!--<script src="/educate"></script>--> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script> <!--Doesn't recognizes AJAX ⏬--> <!--<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>--> <script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.12.9/dist/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script> <script src="/educate"></script> </body> </html> ``` **educateForm2.html** ``` <!DOCTYPE html> <html lang="en"> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.5.0/css/all.css" integrity="sha384-B4dIYHKNBt8Bc12p+WXckhzcICo0wtJAoU8YZTY5qE0Id1GSseTk6S+L3BlXeVIU" crossorigin="anonymous"> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous"> <!--Icon link--> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.5.0/font/bootstrap-icons.css"> <title>Educate 2</title> </head> <body> <div class="container-fluid padding"> <div class="row welcome text-center"> <div id="properties" class="col-12"> <h1 class="display-4">Page 2</h1> </div> </div> </div> <!--Pagination--> <ul class="pagination justify-content-center" id="pagination"> <li class="page-item"> <a class="page-link" href="educateForm">Previous</a> </li> <li class="page-item"><a class="page-link" href="educateForm">1</a></li> <li class="page-item active"><a class="page-link" href="educateForm2">2</a></li> <li class="page-item disabled"> <a class="page-link">Next</a> </li> </ul> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script> <!--Doesn't recognizes AJAX ⏬--> <!--<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>--> <script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.12.9/dist/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script> </body> </html> ``` **educate.js** ``` "use strict"; function formValidation() { var emailRegex = /^[A-Za-z0-9._]*\@[A-Za-z]*\.[A-Za-z]{2,5}$/; // Expression for validating email var fname = document.registration.FName.value; var lname = document.registration.LName.value; var email = document.registration.email.value; if (fname == "") { alert('Enter the first name!'); document.registration.FName.focus(); return false; } if (lname == "") { document.registration.LName.focus(); alert('Enter the last name!'); return false; } if (email == 
"") { document.registration.email.focus(); alert('Enter the email!'); return false; } if (!emailRegex.test(email)) { alert('Re-enter the valid email in this format: [abc@abc.com]'); document.registration.email.focus(); return false; } if (fname != '' && lname != '' && email != '') // condition for check mandatory all fields { let confirmation = "Once you submit this form, you can't go back \nAre you sure you want to leave this page?"; if (confirm(confirmation) == true) { const dict_values = {fname, lname, email} //Pass the javascript variables to a dictionary. const s = JSON.stringify(dict_values); // Stringify converts a JavaScript object or value to a JSON string console.log(s); // Prints the variables to console window, which are in the JSON format window.alert(s); //Passing the data to Python (into "/educateForm" page) ⏬ $.ajax({ url:"/educateForm", type:"POST", contentType: "application/json", data: JSON.stringify(s)}); //Display 2nd page without sharing data with Python⏬ //var display = window.open("/educateForm2", "_self", "pagewin"); //window.location.href = "/educateForm2"; } } } function setUpPage(){ formValidation(); } window.addEventListener("load", setUpPage, false); ``` ***Outside templates folder >>*** **app.py** ``` import json import os from flask import Flask, flash, redirect, render_template, request, session from flask_session import Session from tempfile import mkdtemp from werkzeug.security import check_password_hash, generate_password_hash from flask import jsonify # NEW from flask import url_for # Configure application app = Flask(__name__) # Ensure templates are auto-reloaded app.config["TEMPLATES_AUTO_RELOAD"] = True # Configure session to use filesystem (instead of signed cookies) app.config["SESSION_PERMANENT"] = False app.config["SESSION_TYPE"] = "filesystem" Session(app) @app.after_request def after_request(response): """Ensure responses aren't cached""" response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate" response.headers["Expires"] = 0 response.headers["Pragma"] = "no-cache" return response @app.route("/educateForm", methods=["GET", "POST"]) def educateForm(): """Show Educate Form(part 1)""" if request.method == "POST": output = request.get_json() print(output) # This is the output that was stored in the JSON within the browser print(type(output)) result = json.loads(output) #this converts the json output to a python dictionary print(result) # Printing the new dictionary print(type(result))#this shows the json converted as a python dictionary #PROBLEM: Neither of both options worked ⏬ #return redirect(url_for('educateForm2')) return redirect("/educateForm2") else: # GET # Redirect user to educateForm.html return render_template("educateForm.html") @app.route("/educateForm2", methods=["GET"]) def educateForm2(): """Show Educate Form(part 2)""" if request.method == "GET": # Redirect user to educateForm2.html return render_template("educateForm2.html") @app.route("/educate") def educate(): """Show educate.js""" if request.method == "GET": # Redirect user to educate.js return render_template("educate.js") if __name__ == "__main__": app.run() ```
2022/07/29
[ "https://Stackoverflow.com/questions/73171968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19575161/" ]
A very simple and performant way of checking whether all pixels are the same is to use PIL's `getextrema()`, which tells you the brightest and darkest pixels in an image. So you would just test whether they are the same; that works whether both are zero or any other number. It will be performant because it is implemented in C. ``` min, max = msk.getextrema() if min == max: ... ``` --- If you wanted to use Numpy, in a very similar vein, you could use its `np.ptp()`, which tells you the *"peak-to-peak"* difference between the brightest and darkest pixels: ``` import numpy as np # Make Numpy array from image "msk" na = np.array(msk) diff = np.ptp(na) if diff == 0: ... ``` Or, you could test if true that all elements equal the first: ``` result = np.all(na == na[0]) ```
1. [Convert an image to a 3D numpy array](https://ru.stackoverflow.com/questions/1145128/%D0%9A%D0%B0%D0%BA-%D0%BF%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D1%82%D1%8C-jpg-%D0%B2-%D0%BC%D0%B0%D1%81%D1%81%D0%B8%D0%B2-numpy) 2. [Check if all elements of a numpy array are the same](https://ru.stackoverflow.com/questions/1096559/%D0%9A%D0%B0%D0%BA-%D0%BF%D1%80%D0%BE%D0%B2%D0%B5%D1%80%D0%B8%D1%82%D1%8C-%D0%B2%D1%81%D0%B5-%D0%BB%D0%B8-%D1%8D%D0%BB%D0%B5%D0%BC%D0%B5%D0%BD%D1%82%D1%8B-%D0%B2-%D0%BC%D0%B0%D1%81%D1%81%D0%B8%D0%B2%D0%B5-numpy-%D0%BE%D0%B4%D0%B8%D0%BD%D0%B0%D0%BA%D0%BE%D0%B2%D1%8B%D0%B5-python)
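Combining the two linked steps into one snippet - a sketch assuming the mask is available as an image file `mask.png`:

```py
import numpy as np
from PIL import Image

a = np.asarray(Image.open("mask.png"))  # step 1: image -> numpy array
print((a == a.flat[0]).all())           # step 2: True if every element equals the first
```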
69,607,510
```
import csv
import mysql.connector as mysql

marathons = []

with open ("marathon_results.csv") as file:
    data = csv.reader(file)
    next(data)
    for rij in data:
        year = rij[0],
        winner = rij[1],
        gender = rij[2],
        country = rij[3],
        time = rij[4],
        marathon = rij[5],
        marathons.append((year, winner, gender, country, time, marathon))

conn = mysql.connect(
    host="localhost",
    user="root",
    password=""
)

c = conn.cursor()

create_database_query = 'CREATE DATABASE IF NOT EXISTS marathon_file'
c.execute(create_database_query)

c.execute('USE marathon_file')

c.execute("""CREATE TABLE IF NOT EXISTS winners(
    year INT(100),
    winner VARCHAR(255),
    gender VARCHAR(255),
    country VARCHAR(255),
    time TIME,
    marathon VARCHAR(255)
    )
    """)

print('CSV-bestand in de MySQL-database aan het laden...')

insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, &s);"

c.executemany(insert_query, marathons)
c.commit()

print('Bestand succesvol geladen!')
```

So I have this code above, trying to get a certain .csv file from my venv into MySQL. I made a list from the data and skipped the first line (since those were headers) and tried to import it into MySQL. But I keep getting the following error:

```
CSV-bestand in de MySQL-database aan het laden...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/conversion.py in to_mysql(self, value)
    179         try:
--> 180             return getattr(self, "_{0}_to_mysql".format(type_name))(value)
    181         except AttributeError:

AttributeError: 'MySQLConverter' object has no attribute '_tuple_to_mysql'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _process_params(self, params)
    430
--> 431             res = [to_mysql(i) for i in res]
    432             res = [escape(i) for i in res]

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in <listcomp>(.0)
    430
--> 431             res = [to_mysql(i) for i in res]
    432             res = [escape(i) for i in res]

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/conversion.py in to_mysql(self, value)
    181         except AttributeError:
--> 182             raise TypeError("Python '{0}' cannot be converted to a "
    183                             "MySQL type".format(type_name))

TypeError: Python 'tuple' cannot be converted to a MySQL type

During handling of the above exception, another exception occurred:

ProgrammingError                          Traceback (most recent call last)
/var/folders/yc/mz4bq04s7wngrglphldwpwfc0000gn/T/ipykernel_17482/929148642.py in <module>
     38 insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, &s);"
     39
---> 40 c.executemany(insert_query, marathons)
     41 c.commit()
     42

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in executemany(self, operation, seq_params)
    665             self._rowcount = 0
    666             return None
--> 667         stmt = self._batch_insert(operation, seq_params)
    668         if stmt is not None:
    669             self._executed = stmt

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _batch_insert(self, operation, seq_params)
    607                         tmp, self._process_params_dict(params))
    608                 else:
--> 609                     psub = _ParamSubstitutor(self._process_params(params))
    610                     tmp = RE_PY_PARAM.sub(psub, 
tmp)
    611                 if psub.remaining != 0:

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _process_params(self, params)
    433                 res = [quote(i) for i in res]
    434             except Exception as err:
--> 435                 raise errors.ProgrammingError(
    436                     "Failed processing format-parameters; %s" % err)
    437             else:

ProgrammingError: Failed processing format-parameters; Python 'tuple' cannot be converted to a MySQL type
```

I probably missed some () or brackets, or am I missing something else? Thanks
2021/10/17
[ "https://Stackoverflow.com/questions/69607510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17025019/" ]
The problem is in these lines: ```py year = rij[0], winner = rij[1], gender = rij[2], country = rij[3], time = rij[4], marathon = rij[5], ``` The trailing commas cause `year`, `winner`, `gender` and so on to be created as 1-tuples. It's the same as writing ```py year = (rij[0],) winner = (rij[1],) # and so on... ``` Delete the trailing commas and try again.
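With the commas removed, the loop would read:

```
for rij in data:
    year = rij[0]
    winner = rij[1]
    gender = rij[2]
    country = rij[3]
    time = rij[4]
    marathon = rij[5]
    marathons.append((year, winner, gender, country, time, marathon))
```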
Your SQL command had a `&` instead of a `%`. I additionally simplified the data loop:

```
import csv
import mysql.connector as mysql

marathons = []

with open ("test2.csv") as file:
    data = csv.reader(file)
    next(data)
    marathons = [tuple(row) for row in data]

conn = mysql.connect(
    host="localhost",
    user="root",
    password=""
)

c = conn.cursor()

create_database_query = 'CREATE DATABASE IF NOT EXISTS marathon_file'
c.execute(create_database_query)

c.execute('USE marathon_file')

c.execute("""CREATE TABLE IF NOT EXISTS winners(
    year INT(100),
    winner VARCHAR(255),
    gender VARCHAR(255),
    country VARCHAR(255),
    time TIME,
    marathon VARCHAR(255)
    )
    """)

print('CSV-bestand in de MySQL-database aan het laden...')

insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, %s);"

c.executemany(insert_query, marathons)
conn.commit()

print('Bestand succesvol geladen!')
```
46,053,097
I have created an API using Python+Flask. When I try to hit the API using Postman or Chrome it works fine and I am able to reach the API. On the other hand, when I try to use Python:

```
import requests
requests.get("http://localhost:5050/")
```

I get 407. I guess the proxy in our environment is not allowing me to hit localhost, while the LAN settings in IE/Chrome let the browser requests go through. I did try to set proxies and auth in requests, and now I start getting 502 (Bad Gateway). On the API side I can't see any request come through. What can I do to troubleshoot this?
2017/09/05
[ "https://Stackoverflow.com/questions/46053097", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6128923/" ]
According to [requests module documentation](http://docs.python-requests.org/en/master/user/advanced/#proxies) you can either provide proxy details through the environment variable **HTTP\_PROXY** (in case you use a Linux distribution):

```
$ export HTTP_PROXY="http://corporate-proxy:port"
$ python
>>> import requests
>>> requests.get('http://localhost:5050/')
```

Or provide the **proxies** keyword argument to the get method directly:

```
import requests

proxies = {
  'http': 'http://corporate-proxy:port',
}
requests.get('http://localhost:5050/', proxies=proxies)
```
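If the corporate proxy is what intercepts calls to localhost, another option is to exclude local addresses from proxying via the standard `NO_PROXY` environment variable, which requests honors. A sketch (assuming your API really listens on port 5050):

```
import os
import requests

# Tell requests never to send local traffic through the configured proxy
os.environ['NO_PROXY'] = 'localhost,127.0.0.1'

response = requests.get('http://localhost:5050/')
print(response.status_code)
```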
Try

```
import requests
from flask import Flask
from flask_cors import CORS, cross_origin

app = Flask(__name__)
cors = CORS(app, resources={r"/*": {"origins": "*"}})

requests.get("http://localhost:5050/")
```
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
You can compress the data with [bzip2](http://docs.python.org/library/bz2.html): ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib hugeData = {'key': {'x': 1, 'y':2}} with contextlib.closing(bz2.BZ2File('data.json.bz2', 'wb')) as f: json.dump(hugeData, f) ``` Load it like this: ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib with contextlib.closing(bz2.BZ2File('data.json.bz2', 'rb')) as f: hugeData = json.load(f) ``` You can also compress the data using [zlib](http://docs.python.org/library/zlib.html) or [gzip](http://docs.python.org/library/gzip.html) with pretty much the same interface. However, both zlib and gzip's compression rates will be lower than the one achieved with bzip2 (or lzma).
> > faster, or even possible, to zip this pickle file prior to [writing] > > > Of course it's possible, but there's no reason to try to make an explicit zipped copy in memory (it might not fit!) before writing it, when you can *automatically cause it to be zipped as it is written, with built-in standard library functionality* ;) See <http://docs.python.org/library/gzip.html> . Basically, you create a special kind of stream with ``` gzip.GzipFile("output file name", "wb") ``` and then use it exactly like an ordinary `file` created with `open(...)` (or `file(...)` for that matter).
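For example, a minimal sketch of pickling straight into a gzip stream (`hugeData` stands in for the real dictionary):

```
import gzip
import pickle

hugeData = {'key': {'x': 1, 'y': 2}}  # stand-in for the real >5 GB dict

# The pickle bytes are compressed transparently as they are written
f = gzip.GzipFile('hugeData.pkl.gz', 'wb')
pickle.dump(hugeData, f, pickle.HIGHEST_PROTOCOL)
f.close()

# Reading decompresses transparently as well
f = gzip.GzipFile('hugeData.pkl.gz', 'rb')
hugeData = pickle.load(f)
f.close()
```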
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
You can compress the data with [bzip2](http://docs.python.org/library/bz2.html): ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib hugeData = {'key': {'x': 1, 'y':2}} with contextlib.closing(bz2.BZ2File('data.json.bz2', 'wb')) as f: json.dump(hugeData, f) ``` Load it like this: ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib with contextlib.closing(bz2.BZ2File('data.json.bz2', 'rb')) as f: hugeData = json.load(f) ``` You can also compress the data using [zlib](http://docs.python.org/library/zlib.html) or [gzip](http://docs.python.org/library/gzip.html) with pretty much the same interface. However, both zlib and gzip's compression rates will be lower than the one achieved with bzip2 (or lzma).
Look at Google's [ProtoBuffers](http://code.google.com/apis/protocolbuffers/docs/techniques.html#large-data). Although they are not designed for large files out of the box, like audio-video files, they do well with object serialization as in your case, because they were designed for it. Practice shows that some day you may need to update the structure of your files, and ProtoBuffers will handle it. Also, they are highly optimized for compression and speed. And you're not tied to Python; Java and C++ are well supported.
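For a rough idea of what that looks like in Python, here is a sketch; the message definition and the generated `snapshot_pb2` module are hypothetical (you would produce the module by running `protoc --python_out=. snapshot.proto`):

```
# snapshot.proto (hypothetical):
#   message Snapshot {
#     required string name   = 1;
#     repeated double values = 2;
#   }
import snapshot_pb2  # generated by protoc from the .proto above

msg = snapshot_pb2.Snapshot()
msg.name = 'run-42'
msg.values.extend([1.0, 2.5, 3.7])

# Serialize to a compact binary blob...
with open('snapshot.bin', 'wb') as f:
    f.write(msg.SerializeToString())

# ...and parse it back
restored = snapshot_pb2.Snapshot()
with open('snapshot.bin', 'rb') as f:
    restored.ParseFromString(f.read())
```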
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
Python code would be extremely slow when it comes to implementing data serialization. If you try to create an equivalent to Pickle in pure Python, you'll see that it will be super slow. Fortunately the built-in modules which perform that are quite good.

Apart from `cPickle`, you will find the `marshal` module, which is a lot faster. But it needs a real file handle (not a file-like object). You can `import marshal as Pickle` and see the difference. I don't think you can make a custom serializer which is a lot faster than this...

Here's an actual (not so old) [serious benchmark of Python serializers](http://kbyanc.blogspot.com/2007/07/python-serializer-benchmarks.html)
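A minimal sketch of what that swap looks like (with the caveats that `marshal` only handles core built-in types and that its format is tied to the Python version that wrote it):

```
import marshal

hugeData = {'key': {'x': 1, 'y': 2}}  # stand-in for the real dict

# marshal.dump() wants a real file object, not just anything file-like
f = open('hugeData.marshal', 'wb')
marshal.dump(hugeData, f)
f.close()

f = open('hugeData.marshal', 'rb')
hugeData = marshal.load(f)
f.close()
```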
You can compress the data with [bzip2](http://docs.python.org/library/bz2.html): ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib hugeData = {'key': {'x': 1, 'y':2}} with contextlib.closing(bz2.BZ2File('data.json.bz2', 'wb')) as f: json.dump(hugeData, f) ``` Load it like this: ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib with contextlib.closing(bz2.BZ2File('data.json.bz2', 'rb')) as f: hugeData = json.load(f) ``` You can also compress the data using [zlib](http://docs.python.org/library/zlib.html) or [gzip](http://docs.python.org/library/gzip.html) with pretty much the same interface. However, both zlib and gzip's compression rates will be lower than the one achieved with bzip2 (or lzma).
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
You can compress the data with [bzip2](http://docs.python.org/library/bz2.html): ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib hugeData = {'key': {'x': 1, 'y':2}} with contextlib.closing(bz2.BZ2File('data.json.bz2', 'wb')) as f: json.dump(hugeData, f) ``` Load it like this: ``` from __future__ import with_statement # Only for Python 2.5 import bz2,json,contextlib with contextlib.closing(bz2.BZ2File('data.json.bz2', 'rb')) as f: hugeData = json.load(f) ``` You can also compress the data using [zlib](http://docs.python.org/library/zlib.html) or [gzip](http://docs.python.org/library/gzip.html) with pretty much the same interface. However, both zlib and gzip's compression rates will be lower than the one achieved with bzip2 (or lzma).
I'd just expand on phihag's answer.

When trying to serialize an object approaching the size of RAM, **pickle/cPickle should be avoided**, since it [requires additional memory of 1-2 times the size of the object](http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5adsf/) in order to serialize. That's true even when streaming it to BZ2File. In my case I was even running out of swap space.

But the problem with JSON (and similarly with HDF files as mentioned in the linked article) is that it cannot serialize tuples, which in my data are used as keys to dicts. There is no great solution for this; the best I could find was to convert tuples to strings, which requires some memory of its own, but much less than pickle. Nowadays, you can also use [the ujson library](https://pypi.python.org/pypi/ujson), which is much faster than the json library.

For tuples composed of strings (requires strings to contain no commas):

```
import ujson as json
from bz2 import BZ2File

bigdata = { ('a','b','c') : 25, ('d','e') : 13 }
bigdata = dict([(','.join(k), v) for k, v in bigdata.viewitems()])

f = BZ2File('filename.json.bz2',mode='wb')
json.dump(bigdata,f)
f.close()
```

To re-compose the tuples:

```
bigdata = dict([(tuple(k.split(',')),v) for k,v in bigdata.viewitems()])
```

Alternatively if e.g. your keys are 2-tuples of integers:

```
bigdata2 = { (1,2): 1.2, (2,3): 3.4}
bigdata2 = dict([('%d,%d' % k, v) for k, v in bigdata2.viewitems()])
# ... save, load ...
bigdata2 = dict([(tuple(map(int,k.split(','))),v) for k,v in bigdata2.viewitems()])
```

Another advantage of this approach over pickle is that json appears to compress significantly better than pickle when using bzip2 compression.
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
> > faster, or even possible, to zip this pickle file prior to [writing] > > > Of course it's possible, but there's no reason to try to make an explicit zipped copy in memory (it might not fit!) before writing it, when you can *automatically cause it to be zipped as it is written, with built-in standard library functionality* ;) See <http://docs.python.org/library/gzip.html> . Basically, you create a special kind of stream with ``` gzip.GzipFile("output file name", "wb") ``` and then use it exactly like an ordinary `file` created with `open(...)` (or `file(...)` for that matter).
Look at Google's [ProtoBuffers](http://code.google.com/apis/protocolbuffers/docs/techniques.html#large-data). Although they are not designed for large files out of the box, like audio-video files, they do well with object serialization as in your case, because they were designed for it. Practice shows that some day you may need to update the structure of your files, and ProtoBuffers will handle it. Also, they are highly optimized for compression and speed. And you're not tied to Python; Java and C++ are well supported.
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
Python code would be extremely slow when it comes to implementing data serialization. If you try to create an equivalent to Pickle in pure Python, you'll see that it will be super slow. Fortunately the built-in modules which perform that are quite good.

Apart from `cPickle`, you will find the `marshal` module, which is a lot faster. But it needs a real file handle (not a file-like object). You can `import marshal as Pickle` and see the difference. I don't think you can make a custom serializer which is a lot faster than this...

Here's an actual (not so old) [serious benchmark of Python serializers](http://kbyanc.blogspot.com/2007/07/python-serializer-benchmarks.html)
> > faster, or even possible, to zip this pickle file prior to [writing] > > > Of course it's possible, but there's no reason to try to make an explicit zipped copy in memory (it might not fit!) before writing it, when you can *automatically cause it to be zipped as it is written, with built-in standard library functionality* ;) See <http://docs.python.org/library/gzip.html> . Basically, you create a special kind of stream with ``` gzip.GzipFile("output file name", "wb") ``` and then use it exactly like an ordinary `file` created with `open(...)` (or `file(...)` for that matter).
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
Python code would be extremely slow when it comes to implementing data serialization. If you try to create an equivalent to Pickle in pure Python, you'll see that it will be super slow. Fortunately the built-in modules which perform that are quite good.

Apart from `cPickle`, you will find the `marshal` module, which is a lot faster. But it needs a real file handle (not a file-like object). You can `import marshal as Pickle` and see the difference. I don't think you can make a custom serializer which is a lot faster than this...

Here's an actual (not so old) [serious benchmark of Python serializers](http://kbyanc.blogspot.com/2007/07/python-serializer-benchmarks.html)
Look at Google's [ProtoBuffers](http://code.google.com/apis/protocolbuffers/docs/techniques.html#large-data). Although they are not designed for large files out of the box, like audio-video files, they do well with object serialization as in your case, because they were designed for it. Practice shows that some day you may need to update the structure of your files, and ProtoBuffers will handle it. Also, they are highly optimized for compression and speed. And you're not tied to Python; Java and C++ are well supported.
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
I'd just expand on phihag's answer.

When trying to serialize an object approaching the size of RAM, **pickle/cPickle should be avoided**, since it [requires additional memory of 1-2 times the size of the object](http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5adsf/) in order to serialize. That's true even when streaming it to BZ2File. In my case I was even running out of swap space.

But the problem with JSON (and similarly with HDF files as mentioned in the linked article) is that it cannot serialize tuples, which in my data are used as keys to dicts. There is no great solution for this; the best I could find was to convert tuples to strings, which requires some memory of its own, but much less than pickle. Nowadays, you can also use [the ujson library](https://pypi.python.org/pypi/ujson), which is much faster than the json library.

For tuples composed of strings (requires strings to contain no commas):

```
import ujson as json
from bz2 import BZ2File

bigdata = { ('a','b','c') : 25, ('d','e') : 13 }
bigdata = dict([(','.join(k), v) for k, v in bigdata.viewitems()])

f = BZ2File('filename.json.bz2',mode='wb')
json.dump(bigdata,f)
f.close()
```

To re-compose the tuples:

```
bigdata = dict([(tuple(k.split(',')),v) for k,v in bigdata.viewitems()])
```

Alternatively if e.g. your keys are 2-tuples of integers:

```
bigdata2 = { (1,2): 1.2, (2,3): 3.4}
bigdata2 = dict([('%d,%d' % k, v) for k, v in bigdata2.viewitems()])
# ... save, load ...
bigdata2 = dict([(tuple(map(int,k.split(','))),v) for k,v in bigdata2.viewitems()])
```

Another advantage of this approach over pickle is that json appears to compress significantly better than pickle when using bzip2 compression.
Look at Google's [ProtoBuffers](http://code.google.com/apis/protocolbuffers/docs/techniques.html#large-data). Although they are not designed for large files out of the box, like audio-video files, they do well with object serialization as in your case, because they were designed for it. Practice shows that some day you may need to update the structure of your files, and ProtoBuffers will handle it. Also, they are highly optimized for compression and speed. And you're not tied to Python; Java and C++ are well supported.
7,641,592
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large Python objects to file. I had previously run into problems converting massive Python dictionaries into strings and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
2011/10/03
[ "https://Stackoverflow.com/questions/7641592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/654789/" ]
Python code would be extremely slow when it comes to implementing data serialization. If you try to create an equivalent to Pickle in pure Python, you'll see that it will be super slow. Fortunately the built-in modules which perform that are quite good.

Apart from `cPickle`, you will find the `marshal` module, which is a lot faster. But it needs a real file handle (not a file-like object). You can `import marshal as Pickle` and see the difference. I don't think you can make a custom serializer which is a lot faster than this...

Here's an actual (not so old) [serious benchmark of Python serializers](http://kbyanc.blogspot.com/2007/07/python-serializer-benchmarks.html)
I'd just expand on phihag's answer.

When trying to serialize an object approaching the size of RAM, **pickle/cPickle should be avoided**, since it [requires additional memory of 1-2 times the size of the object](http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5adsf/) in order to serialize. That's true even when streaming it to BZ2File. In my case I was even running out of swap space.

But the problem with JSON (and similarly with HDF files as mentioned in the linked article) is that it cannot serialize tuples, which in my data are used as keys to dicts. There is no great solution for this; the best I could find was to convert tuples to strings, which requires some memory of its own, but much less than pickle. Nowadays, you can also use [the ujson library](https://pypi.python.org/pypi/ujson), which is much faster than the json library.

For tuples composed of strings (requires strings to contain no commas):

```
import ujson as json
from bz2 import BZ2File

bigdata = { ('a','b','c') : 25, ('d','e') : 13 }
bigdata = dict([(','.join(k), v) for k, v in bigdata.viewitems()])

f = BZ2File('filename.json.bz2',mode='wb')
json.dump(bigdata,f)
f.close()
```

To re-compose the tuples:

```
bigdata = dict([(tuple(k.split(',')),v) for k,v in bigdata.viewitems()])
```

Alternatively if e.g. your keys are 2-tuples of integers:

```
bigdata2 = { (1,2): 1.2, (2,3): 3.4}
bigdata2 = dict([('%d,%d' % k, v) for k, v in bigdata2.viewitems()])
# ... save, load ...
bigdata2 = dict([(tuple(map(int,k.split(','))),v) for k,v in bigdata2.viewitems()])
```

Another advantage of this approach over pickle is that json appears to compress significantly better than pickle when using bzip2 compression.
21,068,471
Running the following Python script through the web site works fine and (as expected) stops the playback of MPD:

```
#!/usr/bin/env python
import subprocess
subprocess.call(["mpc", "stop"])
print ("Content-type: text/plain;charset=utf-8\n\n")
print("Hello")
```

This script, however, causes an error (playback starts as expected):

```
#!/usr/bin/env python
print("Content-type: text/plain;charset=utf-8\n\n")
print ("Hello")
import subprocess
subprocess.call(["mpc", "play"])
```

The error is:

```
malformed header from script. Bad header=Scorpions - Eye of the tiger -: play.py, referer: http://...
```

Apparently whatever is returned by the playback command is taken as the header. When run in a terminal, the output looks fine. Why could that be?
2014/01/11
[ "https://Stackoverflow.com/questions/21068471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143211/" ]
1. You're running your script in some sort of CGI-like environment. I would strongly suggest using a light web framework like Flask or Bottle.
2. `mpc play` is writing to stdout. You need to silence it:

```
import os
import subprocess

with open(os.devnull, 'w') as dev_null:
    subprocess.call(["mpc", "play"], stdout=dev_null)
```
3. For your HTTP headers to be valid, you need to separate them with `\r\n`, not `\n\n`.
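Putting the three fixes together, a minimal corrected sketch of the failing script might look like this:

```
#!/usr/bin/env python
import os
import subprocess
import sys

# Emit the header first, terminated with CRLF line endings and a blank line
sys.stdout.write("Content-type: text/plain;charset=utf-8\r\n\r\n")

# Silence mpc so its output cannot leak into the HTTP response
with open(os.devnull, 'w') as dev_null:
    subprocess.call(["mpc", "play"], stdout=dev_null)

print("Hello")
```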
You need to use `\r\n` line endings.
2,700,195
I have some data that I would like to save to a MAT file (version 4 or 5, or any version, for that matter). The catch: I wanted to do this without using MATLAB libraries, since this code will not necessarily run on a machine with MATLAB. My program uses Java and C++, so any existing library in those languages that achieves this could help me out...

I did some research but did not find anything in Java/C++. However, I found that scipy in Python achieves this with `mio4.py` or `mio5.py`. I thought about implementing this in Java or C++, but it doesn't quite fit my time schedule.

So the question is: are there any libraries in Java or C/C++ that permit saving MAT files without using MATLAB libraries?

Thanks a lot
2010/04/23
[ "https://Stackoverflow.com/questions/2700195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227103/" ]
C: [matio](http://sourceforge.net/projects/matio/) Java: [jmatio](http://sourceforge.net/projects/jmatio/) (I'm really tempted to, so I will, tell you to learn to google) But really, it's not that hard to write matfiles using `fwrite` if you don't need to handle some of the more complex stuff (nested structs, classes, functions, sparse matrix, etc). See: <http://www.mathworks.com/access/helpdesk/help/pdf_doc/matlab/matfile_format.pdf>
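To give a feel for how little is involved in the simple case, here is a rough Python sketch of a Level 4 (v4) MAT-file writer for a single real double matrix, following the header layout in that PDF (the function name and field choices are my own):

```
import struct

def save_mat4(filename, name, matrix):
    """Write one real double matrix (a list of rows) to a v4 MAT-file."""
    mrows = len(matrix)
    ncols = len(matrix[0]) if mrows else 0
    f = open(filename, 'wb')
    # Header: five int32s -- type, mrows, ncols, imagf, namlen.
    # type == 0 means little-endian IEEE doubles, full numeric matrix;
    # imagf == 0 means no imaginary part; namlen counts the trailing NUL.
    f.write(struct.pack('<5i', 0, mrows, ncols, 0, len(name) + 1))
    f.write(name.encode('ascii') + b'\x00')
    # Data follows in column-major (Fortran) order
    for c in range(ncols):
        for r in range(mrows):
            f.write(struct.pack('<d', matrix[r][c]))
    f.close()

save_mat4('demo.mat', 'A', [[1.0, 2.0], [3.0, 4.0]])  # load('demo.mat') in MATLAB
```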
MAT files since version 7.3 are HDF5 based. I recall that they use some rather funny conventions, but you may be able to reverse engineer what you need. There are certainly HDF5 writing libraries for both Java and C++. Along these lines, MATLAB can read/write several standard formats, including HDF5. It may be easiest to write your data in "standard" HDF5 and read it into the desired data structure within MATLAB.
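Along those lines, a minimal sketch with `h5py` (the dataset name is arbitrary); the resulting file is plain HDF5, not a v7.3 MAT-file, so on the MATLAB side you would read it with `h5read` (or `hdf5read` on older releases) rather than `load`:

```
import h5py
import numpy as np

data = np.arange(12.0).reshape(3, 4)

# Write a plain HDF5 file with one dataset
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('mydata', data=data)

# In MATLAB:  x = h5read('data.h5', '/mydata');
```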
31,073,212
When running `mkvirtualenv test` I get the following error:

```
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2378, in <module>
    main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 830, in main
    symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 999, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1198, in install_python
    mkdir(lib_dir)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 451, in mkdir
    os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: 'test'
```

Why is the 'test' virtual environment not created? I did try to `chmod -R 777` the virtualenv folder, but that did not solve it. I do have Python 2.7 and 3.4 installed on Ubuntu 15.04.
2015/06/26
[ "https://Stackoverflow.com/questions/31073212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3294412/" ]
You are likely getting the error because you cannot create the virtualenv folder in the current working directory. If you do an `ls -ld .` you'll see the output for the current directory you're running the command from, e.g.:

```
➜  ~ ls -ld .
drwxr-xr-x+ 114 tfisher  staff  3876 Jun 26 08:46 .
```

and if you do a `whoami`, you'll get the name of your current user.

The interesting bit in the output is typically the first portion of that `ls -ld .` command: `drwxr-xr-x+`. This means "this is a directory, with Read, Write, eXecute for the user, then Read and eXecute for the group, and finally Read and eXecute for everyone else." If you do not have `w`rite permission, you will not be able to create the files and folders that virtualenv needs.

If the current directory is one that you feel you should personally own, e.g. `/home/musicformellons`, and you have sudo permission, you can rectify this by running:

```
sudo chown `whoami` .
```

The reason why this didn't just simply work is likely because you followed a guide that had you install "virtualenvwrapper" using sudo permissions.
I did the same, and the issue I found is:

> `echo $WORKON_HOME`

you will find: ***/home/user/.virtualenvs/extra\_path***

You just need to remove this extra\_path added after the ***.virtualenvs*** path from your ***.bashrc***, then *source* it again and try creating the virtualenv with *mkvirtualenv* again.
31,073,212
When running `mkvirtualenv test` I get the following error:

```
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2378, in <module>
    main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 830, in main
    symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 999, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1198, in install_python
    mkdir(lib_dir)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 451, in mkdir
    os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: 'test'
```

Why is the 'test' virtual environment not created? I did try to `chmod -R 777` the virtualenv folder, but that did not solve it. I do have Python 2.7 and 3.4 installed on Ubuntu 15.04.
2015/06/26
[ "https://Stackoverflow.com/questions/31073212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3294412/" ]
You are likely getting the error because you cannot create the virtualenv folder in the current working directory. If you do an `ls -ld .` you'll see the output for the current directory you're running the command from, e.g.:

```
➜  ~ ls -ld .
drwxr-xr-x+ 114 tfisher  staff  3876 Jun 26 08:46 .
```

and if you do a `whoami`, you'll get the name of your current user.

The interesting bit in the output is typically the first portion of that `ls -ld .` command: `drwxr-xr-x+`. This means "this is a directory, with Read, Write, eXecute for the user, then Read and eXecute for the group, and finally Read and eXecute for everyone else." If you do not have `w`rite permission, you will not be able to create the files and folders that virtualenv needs.

If the current directory is one that you feel you should personally own, e.g. `/home/musicformellons`, and you have sudo permission, you can rectify this by running:

```
sudo chown `whoami` .
```

The reason why this didn't just simply work is likely because you followed a guide that had you install "virtualenvwrapper" using sudo permissions.
I don't think you can create a `test` virtualenv.
31,073,212
When running `mkvirtualenv test` I get the following error:

```
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2378, in <module>
    main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 830, in main
    symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 999, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1198, in install_python
    mkdir(lib_dir)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 451, in mkdir
    os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: 'test'
```

Why is the 'test' virtual environment not created? I did try to `chmod -R 777` the virtualenv folder, but that did not solve it. I do have Python 2.7 and 3.4 installed on Ubuntu 15.04.
2015/06/26
[ "https://Stackoverflow.com/questions/31073212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3294412/" ]
I did the same, and the issue I found is:

> `echo $WORKON_HOME`

you will find: ***/home/user/.virtualenvs/extra\_path***

You just need to remove this extra\_path added after the ***.virtualenvs*** path from your ***.bashrc***, then *source* it again and try creating the virtualenv with *mkvirtualenv* again.
I don't think you can create a `test` virtualenv.
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
I don't think there is a language-native implementation of memoization.

But you can implement it easily, as a decorator of your method. You have to maintain a Map: the key of your Map is the parameter, the value the result.

Here is a simple implementation, for a one-arg method:

```
Map<Integer, Integer> memoizator = new HashMap<Integer, Integer>();

public Integer memoizedMethod(Integer param) {
    if (!memoizator.containsKey(param)) {
        memoizator.put(param, method(param));
    }
    return memoizator.get(param);
}
```
You could use the [Function](http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/base/Function.html) interface in Google's [guava](http://code.google.com/p/guava-libraries/) library to easily achieve what you're after:

```
import java.util.HashMap;
import java.util.Map;

import com.google.common.base.Function;

public class MemoizerTest {

    /**
     * Memoizer takes a function as input, and returns a memoized version of the same function.
     *
     * @param <F>
     *            the input type of the function
     * @param <T>
     *            the output type of the function
     * @param inputFunction
     *            the input function to be memoized
     * @return the new memoized function
     */
    public static <F, T> Function<F, T> memoize(final Function<F, T> inputFunction) {
        return new Function<F, T>() {
            // Holds previous results
            Map<F, T> memoization = new HashMap<F, T>();

            @Override
            public T apply(final F input) {
                // Check for previous results
                if (!memoization.containsKey(input)) {
                    // None exists, so compute and store a new one
                    memoization.put(input, inputFunction.apply(input));
                }

                // At this point a result is guaranteed in the memoization
                return memoization.get(input);
            }
        };
    }

    public static void main(final String[] args) {
        // Define a function (i.e. implement apply)
        final Function<Integer, Integer> add2 = new Function<Integer, Integer>() {
            @Override
            public Integer apply(final Integer input) {
                System.out.println("Adding 2 to: " + input);
                return input + 2;
            }
        };

        // Memoize the function
        final Function<Integer, Integer> memoizedAdd2 = MemoizerTest.memoize(add2);

        // Exercise the memoized function
        System.out.println(memoizedAdd2.apply(1));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(3));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(4));
        System.out.println(memoizedAdd2.apply(1));
    }
}
```

Should print:

```
Adding 2 to: 1
3
Adding 2 to: 2
4
Adding 2 to: 3
5
4
Adding 2 to: 4
6
3
```

You can see that the 2nd time memoizedAdd2 is called (applied) to the arguments 2 and 1, the computation in apply is not actually run; it just fetches the stored results.
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
Spring 3.1 now provides a [`@Cacheable` annotation](http://static.springsource.org/spring/docs/3.1.0.M1/spring-framework-reference/html/cache.html), which does exactly this. > > As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. > > >
I don't think there is a language-native implementation of memoization.

But you can implement it easily, as a decorator of your method. You have to maintain a Map: the key of your Map is the parameter, the value the result.

Here is a simple implementation, for a one-arg method:

```
Map<Integer, Integer> memoizator = new HashMap<Integer, Integer>();

public Integer memoizedMethod(Integer param) {
    if (!memoizator.containsKey(param)) {
        memoizator.put(param, method(param));
    }
    return memoizator.get(param);
}
```
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
I don't think there is a language-native implementation of memoization.

But you can implement it easily, as a decorator of your method. You have to maintain a Map: the key of your Map is the parameter, the value the result.

Here is a simple implementation, for a one-arg method:

```
Map<Integer, Integer> memoizator = new HashMap<Integer, Integer>();

public Integer memoizedMethod(Integer param) {
    if (!memoizator.containsKey(param)) {
        memoizator.put(param, method(param));
    }
    return memoizator.get(param);
}
```
[Cyclops](https://github.com/aol/cyclops) offers Memoisation for Functions, Suppliers, Callables, Predicates and, by extension, methods (via Method References) ([see javadoc](http://static.javadoc.io/com.aol.cyclops/cyclops-functions/4.0.2/com/aol/cyclops/functions/Memoise.html)).

E.g. given a variable `called` that counts the number of times our method is actually called, we can see that the memoised function actually executes the method just once.

```
int called = 0;

cached = Memoise.memoiseQuadFunction(this::addAll);

assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(called,equalTo(1));

private int addAll(int a,int b,int c, int d){
    called++;
    return a+b+c+d;
}
```
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
I came across a memoization library called [Tek271](http://www.tek271.com/software/java/memoizer) which appears to use annotations to memoize functions as you describe.
You could use the [Function](http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/base/Function.html) interface in Google's [guava](http://code.google.com/p/guava-libraries/) library to easily achieve what you're after:

```
import java.util.HashMap;
import java.util.Map;

import com.google.common.base.Function;

public class MemoizerTest {

    /**
     * Memoizer takes a function as input, and returns a memoized version of the same function.
     *
     * @param <F>
     *            the input type of the function
     * @param <T>
     *            the output type of the function
     * @param inputFunction
     *            the input function to be memoized
     * @return the new memoized function
     */
    public static <F, T> Function<F, T> memoize(final Function<F, T> inputFunction) {
        return new Function<F, T>() {
            // Holds previous results
            Map<F, T> memoization = new HashMap<F, T>();

            @Override
            public T apply(final F input) {
                // Check for previous results
                if (!memoization.containsKey(input)) {
                    // None exists, so compute and store a new one
                    memoization.put(input, inputFunction.apply(input));
                }

                // At this point a result is guaranteed in the memoization
                return memoization.get(input);
            }
        };
    }

    public static void main(final String[] args) {
        // Define a function (i.e. implement apply)
        final Function<Integer, Integer> add2 = new Function<Integer, Integer>() {
            @Override
            public Integer apply(final Integer input) {
                System.out.println("Adding 2 to: " + input);
                return input + 2;
            }
        };

        // Memoize the function
        final Function<Integer, Integer> memoizedAdd2 = MemoizerTest.memoize(add2);

        // Exercise the memoized function
        System.out.println(memoizedAdd2.apply(1));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(3));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(4));
        System.out.println(memoizedAdd2.apply(1));
    }
}
```

Should print:

```
Adding 2 to: 1
3
Adding 2 to: 2
4
Adding 2 to: 3
5
4
Adding 2 to: 4
6
3
```

You can see that the 2nd time memoizedAdd2 is called (applied) to the arguments 2 and 1, the computation in apply is not actually run; it just fetches the stored results.
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
Spring 3.1 now provides a [`@Cacheable` annotation](http://static.springsource.org/spring/docs/3.1.0.M1/spring-framework-reference/html/cache.html), which does exactly this. > > As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. > > >
I came across a memoization library called [Tek271](http://www.tek271.com/software/java/memoizer) which appears to use annotations to memoize functions as you describe.
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
I came across a memoization library called [Tek271](http://www.tek271.com/software/java/memoizer) which appears to use annotations to memoize functions as you describe.
[Cyclops](https://github.com/aol/cyclops) offers Memoisation for Functions, Suppliers, Callables, Predicates and, by extension, methods (via Method References) ([see javadoc](http://static.javadoc.io/com.aol.cyclops/cyclops-functions/4.0.2/com/aol/cyclops/functions/Memoise.html)).

E.g. given a variable `called` that counts the number of times our method is actually called, we can see that the memoised function actually executes the method just once.

```
int called = 0;

cached = Memoise.memoiseQuadFunction(this::addAll);

assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(called,equalTo(1));

private int addAll(int a,int b,int c, int d){
    called++;
    return a+b+c+d;
}
```
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
Spring 3.1 now provides a [`@Cacheable` annotation](http://static.springsource.org/spring/docs/3.1.0.M1/spring-framework-reference/html/cache.html), which does exactly this. > > As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. > > >
You could use the [Function](http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/base/Function.html) interface in Google's [guava](http://code.google.com/p/guava-libraries/) library to easily achieve what you're after:

```
import java.util.HashMap;
import java.util.Map;

import com.google.common.base.Function;

public class MemoizerTest {

    /**
     * Memoizer takes a function as input, and returns a memoized version of the same function.
     *
     * @param <F>
     *            the input type of the function
     * @param <T>
     *            the output type of the function
     * @param inputFunction
     *            the input function to be memoized
     * @return the new memoized function
     */
    public static <F, T> Function<F, T> memoize(final Function<F, T> inputFunction) {
        return new Function<F, T>() {
            // Holds previous results
            Map<F, T> memoization = new HashMap<F, T>();

            @Override
            public T apply(final F input) {
                // Check for previous results
                if (!memoization.containsKey(input)) {
                    // None exists, so compute and store a new one
                    memoization.put(input, inputFunction.apply(input));
                }

                // At this point a result is guaranteed in the memoization
                return memoization.get(input);
            }
        };
    }

    public static void main(final String[] args) {
        // Define a function (i.e. implement apply)
        final Function<Integer, Integer> add2 = new Function<Integer, Integer>() {
            @Override
            public Integer apply(final Integer input) {
                System.out.println("Adding 2 to: " + input);
                return input + 2;
            }
        };

        // Memoize the function
        final Function<Integer, Integer> memoizedAdd2 = MemoizerTest.memoize(add2);

        // Exercise the memoized function
        System.out.println(memoizedAdd2.apply(1));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(3));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(4));
        System.out.println(memoizedAdd2.apply(1));
    }
}
```

Should print:

```
Adding 2 to: 1
3
Adding 2 to: 2
4
Adding 2 to: 3
5
4
Adding 2 to: 4
6
3
```

You can see that the 2nd time memoizedAdd2 is called (applied) to the arguments 2 and 1, the computation in apply is not actually run; it just fetches the stored results.
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
You could use the [Function](http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/base/Function.html) interface in Google's [guava](http://code.google.com/p/guava-libraries/) library to easily achieve what you're after:

```
import java.util.HashMap;
import java.util.Map;

import com.google.common.base.Function;

public class MemoizerTest {

    /**
     * Memoizer takes a function as input, and returns a memoized version of the same function.
     *
     * @param <F>
     *            the input type of the function
     * @param <T>
     *            the output type of the function
     * @param inputFunction
     *            the input function to be memoized
     * @return the new memoized function
     */
    public static <F, T> Function<F, T> memoize(final Function<F, T> inputFunction) {
        return new Function<F, T>() {
            // Holds previous results
            Map<F, T> memoization = new HashMap<F, T>();

            @Override
            public T apply(final F input) {
                // Check for previous results
                if (!memoization.containsKey(input)) {
                    // None exists, so compute and store a new one
                    memoization.put(input, inputFunction.apply(input));
                }

                // At this point a result is guaranteed in the memoization
                return memoization.get(input);
            }
        };
    }

    public static void main(final String[] args) {
        // Define a function (i.e. implement apply)
        final Function<Integer, Integer> add2 = new Function<Integer, Integer>() {
            @Override
            public Integer apply(final Integer input) {
                System.out.println("Adding 2 to: " + input);
                return input + 2;
            }
        };

        // Memoize the function
        final Function<Integer, Integer> memoizedAdd2 = MemoizerTest.memoize(add2);

        // Exercise the memoized function
        System.out.println(memoizedAdd2.apply(1));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(3));
        System.out.println(memoizedAdd2.apply(2));
        System.out.println(memoizedAdd2.apply(4));
        System.out.println(memoizedAdd2.apply(1));
    }
}
```

Should print:

```
Adding 2 to: 1
3
Adding 2 to: 2
4
Adding 2 to: 3
5
4
Adding 2 to: 4
6
3
```

You can see that the 2nd time memoizedAdd2 is called (applied) to the arguments 2 and 1, the computation in apply is not actually run; it just fetches the stored results.
[Cyclops](https://github.com/aol/cyclops) offers Memoisation for Functions, Suppliers, Callables, Predicates and, by extension, methods (via Method References) ([see javadoc](http://static.javadoc.io/com.aol.cyclops/cyclops-functions/4.0.2/com/aol/cyclops/functions/Memoise.html)).

E.g. given a variable `called` that counts the number of times our method is actually called, we can see that the memoised function actually executes the method just once.

```
int called = 0;

cached = Memoise.memoiseQuadFunction(this::addAll);

assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(called,equalTo(1));

private int addAll(int a,int b,int c, int d){
    called++;
    return a+b+c+d;
}
```
3,934,777
I have a few functions in my code where it makes much sense (seems even mandatory) to use memoization. I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
2010/10/14
[ "https://Stackoverflow.com/questions/3934777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133374/" ]
Spring 3.1 now provides a [`@Cacheable` annotation](http://static.springsource.org/spring/docs/3.1.0.M1/spring-framework-reference/html/cache.html), which does exactly this. > > As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for whom the result is stored into the cache so on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method. > > >
[Cyclops](https://github.com/aol/cyclops) offers Memoisation for Functions, Suppliers, Callables, Predicates and by extension methods (via Method References) ([see javadoc](http://static.javadoc.io/com.aol.cyclops/cyclops-functions/4.0.2/com/aol/cyclops/functions/Memoise.html)). E.g. given a variable `called` that counts the number of times our method is actually run, we can see that the memoised function executes the method just once.

```
int called = 0;

cached = Memoise.memoiseQuadFunction(this::addAll);

assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(cached.apply(1,2,3,4),equalTo(10));
assertThat(called,equalTo(1));

private int addAll(int a, int b, int c, int d) {
    called++;
    return a + b + c + d;
}
```
71,972,703
I am trying to to a very simple python request using `requests.get` but am getting the following error using this code: ``` url = 'https://www.tesco.com/' status = requests.get(url) ``` The error: ``` requests.exceptions.SSLError: HTTPSConnectionPool(host='www.tesco.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)'))) ``` Can anyone explain to me how to fix this and more importantly what the error means? Many Thanks
2022/04/22
[ "https://Stackoverflow.com/questions/71972703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10574250/" ]
Explanation
===========

The error is caused by an invalid or expired [SSL Certificate](https://www.gogetssl.com/wiki/ssl-basics/what-is-ssl-tls/). When making a GET request to a server such as `www.tesco.com` you have 2 options, [http](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) and [https](https://en.wikipedia.org/wiki/HTTPS). In the case of https, the server provides your requestor (your script) with an SSL certificate, which allows you to verify that you are connecting to a legitimate website; it also helps secure and encrypt the data being transferred between your script and the server.

Solution
========

Just disable the SSL check

```py
url = 'https://www.tesco.com/'
requests.get(url, verify=False)
```

OR -- Use Session and Disable the SSL Cert Check

```py
import requests, os

url = 'https://www.tesco.com/'

# Use Session and Disable the SSL Cert Check
session = requests.Session()
session.verify = False
session.trust_env = False
session.get(url=url)
```

[Similar post](https://stackoverflow.com/questions/15445981/how-do-i-disable-the-security-certificate-check-in-python-requests)

Extra Info 1
============

Ensure the date and time are set correctly: the requests library checks the certificate's validity period against your local date and time, and a wrong clock is a common cause of this error.

Extra Info 2
============

You may need to install the latest Root CA Certificates on your machine [Download Here](https://www.entrust.com/resources/certificate-solutions/tools/root-certificate-downloads)
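Worth noting: disabling verification gives up the protection SSL provides. If the root cause is a missing or stale local CA bundle, a safer sketch is to point requests at the bundle shipped with the `certifi` package (installed alongside requests):

```python
import certifi
import requests

url = 'https://www.tesco.com/'
# verify accepts a path to a CA bundle; certifi.where() returns certifi's bundle
response = requests.get(url, verify=certifi.where())
print(response.status_code)
```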
Paraphrasing [similar post](https://stackoverflow.com/questions/41287979/cant-access-certain-sites-requests-get-in-python-3) to your specific question. Response 403 means forbidden, in other words, the website understands the request but doesn't allow access. It could be a security measure to prevent scraping. As a workaround, you can add a header in your request so that the code acts as if you're accessing it using a web browser. ``` url = "https://www.tesco.com" headers = {'user-agent': 'Safari/537.36'} response = requests.get(url, headers=headers) print(response) ``` You should get response 200. 'user-agent' in the headers makes it seem that you're accessing through a Safari browser.
69,751,866
I am getting this error while executing a simple **Recursion Program** in **Python**.

```
RecursionError                            Traceback (most recent call last)
<ipython-input-19-e831d27779c8> in <module>
      4 num = 7
      5
----> 6 factorial(num)

<ipython-input-19-e831d27779c8> in factorial(n)
      1 def factorial(n):
----> 2     return (n * factorial(n-1))
      3
      4 num = 7
      5

... last 1 frames repeated, from the frame below ...

<ipython-input-19-e831d27779c8> in factorial(n)
      1 def factorial(n):
----> 2     return (n * factorial(n-1))
      3
      4 num = 7
      5

RecursionError: maximum recursion depth exceeded
```

**My program is:**

```
def factorial(n):
    return (n * factorial(n-1))

num = 7
factorial(num)
```

Please help. Thanks in advance!
2021/10/28
[ "https://Stackoverflow.com/questions/69751866", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15926850/" ]
A recursive function has a simple rule to follow.

1. Create an exit condition
2. Call yourself (the function) somewhere.

Your factorial function only calls itself; it never stops under any condition (`n` just keeps going negative), so you eventually hit the maximum recursion depth. You should stop when you hit a certain point; in your example that's when `n == 1`, because `1! = 1`.

```
def factorial(n):
    if n == 1:
        return 1
    return (n * factorial(n-1))
```
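A slightly more defensive variant, as a sketch, also covers `0! = 1` and rejects inputs the recursion could never terminate on:

```python
def factorial(n):
    if n < 0 or n != int(n):
        raise ValueError("factorial is only defined for non-negative integers")
    if n <= 1:  # exit condition: 0! = 1! = 1
        return 1
    return n * factorial(n - 1)

print(factorial(7))  # 5040
```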
You have to return another value at some point. Example below:

```
def factorial(n):
    if n == 1:
        return 1
    return (n * factorial(n-1))
```

Otherwise, your recursion will never stop and `n` will run off toward negative infinity.
24,023,512
I know this is probably not a good style, but I was wondering if it is possible to construct a class when a static method is called

```
class myClass():
    def __init__(self):
        self.variable = "this worked"

    @staticmethod
    def test_class(var=myClass().variable):
        print self.variable

if "__name__" == "__main__":
    myClass.test_class()
```

Right now it returns

```
NameError: name 'myClass' is not defined
```

Here is what I suspect: by default, the Python interpreter scans the class and registers each function, and when it registers a function it checks the function's default arguments. Do those default values have to be defined at that point?
2014/06/03
[ "https://Stackoverflow.com/questions/24023512", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3692553/" ]
Perhaps the easiest is to turn it into a `classmethod` instead: ``` class myClass(object): def __init__(self): self.variable = "this worked" @classmethod def test_class(cls): var = cls().variable print var if __name__ == "__main__": myClass.test_class() ``` See [What is the difference between @staticmethod and @classmethod in Python?](https://stackoverflow.com/questions/136097/what-is-the-difference-between-staticmethod-and-classmethod-in-python) It is not entirely clear from your question what the use case is; it could well be that there's a better way to do what you're actually trying to do.
Yes, the default value for a function argument has to be definable at the point that the function appears, and a class isn't actually finished defining until the end of the "class block." The easiest way to do what you're trying to do is: ``` @staticmethod def test_class(var=None): if var is None: var = myClass().variable print var # I assume this is what you meant to write for this line; otherwise, the function makes no sense. ``` This has the one downside that you can't pass `None` to `myClass.test_class` and get it to print out. If you want to do this, try: ``` @staticmethod def test_class(*vars): if len(vars) == 0: var = myClass().variable #elif len(vars) > 1: # Throw an error yelling at the user for calling the function wrong (optional) else: var = vars[0] print var ```
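The root of the question is that default arguments are evaluated once, at function definition time, so anything that must be computed per call goes behind a `None` sentinel. A standalone sketch of that idiom (names are illustrative):

```python
def append_bad(item, bucket=[]):      # one shared list, created at definition time
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):   # sentinel: a fresh list per call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2] -- the same list twice!
print(append_good(1), append_good(2))  # [1] [2]
```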
59,867,504
I am very new to the Python language and have a small program. It had been working but something changed and now I can't get it to run. It's having a problem finding 'pyodbc'. I installed the 'pyodbc' package, so I don't understand why there is an error. I am using Python 3.7.6. Thank you for your help!

**pip install pyodbc**

```
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Requirement already satisfied: pyodbc in c:\users\c113850\appdata\roaming\python\python37\site-packages (4.0.28)
```

**Code:**

```
import requests
import pyodbc
from bs4 import BeautifulSoup
from datetime import datetime
import pytz
import time
import azure.functions

page = requests.get("https://samplepage.html")
if page.status_code == 200:
    print(page.status_code)
    #print(page.content)
    soup = BeautifulSoup(page.content, 'html.parser')
    print(soup.title)
    rows = soup.find_all('tr')
    # for row in rows:   # Print all occurrences
    #     print(row.get_text())
    print(rows[0])
    print(rows[7])
    pjmtime = rows[0].td.get_text()
    print("PJM = ", pjmtime)
    #dt_string = "Tue Jan 21 18:00:00 EST 2020"
    dt_object = datetime.strptime(pjmtime, "%a %b %d %H:%M:%S EST %Y")
    print("Timestamp =", dt_object)
    eastern = pytz.timezone('US/Eastern')
    date_eastern = eastern.localize(dt_object, is_dst=None)
    date_utc = date_eastern.astimezone(pytz.utc)
    print("UTC =", date_utc)
    row = soup.find(text='PRICE').parent.parent
    name = row.select('td')[0].get_text()
    typed = row.select('td')[1].get_text()
    weighted = row.select('td')[2].get_text()
    hourly = row.select('td')[3].get_text()
    server = 'db.database.windows.net'
    database = '...'
    username = '...'
    password = '...'
    driver = '{ODBC Driver 17 for SQL Server}'
    cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
    cursor = cnxn.cursor()
    print("insert into [PJMLocationalMarginalPrice] ([Name],[Type],[WeightedAverage],[HourlyIntegrated],[TimeStamp]) values(?,?,?,?,?)", (name,typed,weighted,hourly,date_utc))
    cursor.execute("insert into [PJMLocationalMarginalPrice] ([Name],[Type],[WeightedAverage],[HourlyIntegrated],[TimeStamp]) values (?,?,?,?,?)", (name,typed,weighted,hourly,date_utc))
    cnxn.commit()
else:
    print("Error: page not open")
```

**Error:**

```
Traceback (most recent call last):
  File "c:/Users/C113850/PycharmProjects/Scraping101/Scraping.py", line 2, in <module>
    import pyodbc
ImportError: DLL load failed: The specified module could not be found.
```

Update: I was looking at the folders under site-packages and noticed that the 'pyodbc' folder is not there, but the 'pyodbc-4.0.28.dist-info' folder is there.

[![enter image description here](https://i.stack.imgur.com/wXnrH.png)](https://i.stack.imgur.com/wXnrH.png)
2020/01/22
[ "https://Stackoverflow.com/questions/59867504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3216326/" ]
If I'm understanding your question correctly and you're looking for how often two of the same categories are both 1 in the same row (i.e. pairwise, as @M-- asked), here's how I've done it in the past. I'm sure there's a more graceful way of going about it though :D

```
library(dplyr)
library(tidyr)

test.df <- structure(list(Type_SunflowerSeeds = c(1L, 1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L),
    Type_SafflowerSeeds = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Nyjer = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_EconMix = c(1L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L),
    Type_PremMix = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L),
    Type_Grains = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Nuts = c(0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Suet = c(1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L),
    Type_SugarWater = c(1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L),
    Type_FruitOrJams = c(0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Mealworms = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Corn = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L),
    Type_BarkOrPeanutButter = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Scraps = c(1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Bread = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    Type_Other = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
    total = c(5, 3, 3, 4, 2, 3, 3, 1, 1, 2)),
    row.names = c(NA, 10L), class = "data.frame")

test.df %>%
  mutate(food.id = 1:n()) %>%  # create an ID column for each row
  gather(key = "type1", value = "val", -food.id, -total) %>%
  filter(val == 1) %>%
  select(food.id, type1) %>%
  # now we have a data.frame with one column for food.id and
  # one column for every food.type it is associated with
  left_join(
    # this left join is essentially doing the same thing we did before
    test.df %>%
      mutate(food.id = 1:n()) %>%
      gather(key = "type2", value = "val", -food.id, -total) %>%
      filter(val == 1) %>%
      select(food.id, type2),
    by = c("food.id")
    # now we're matching each food with all of its associated types
  ) %>%
  mutate(type1.n = as.numeric(factor(type1)),
         # quick way of making sure we're not counting duplicates
         # (e.g. if type1 = Type_SunflowerSeeds and type2 = Type_SafflowerSeeds,
         #  that's the same as if they were switched)
         type2.n = as.numeric(factor(type2))) %>%
  filter(type1 > type2) %>%  # this filter step takes care of the flip-flopping issue
  group_by(type1, type2) %>%
  summarise(
    # finally, count the combinations/pairwise values
    n.times = n()
  ) %>%
  ungroup() %>%
  arrange(desc(n.times), type1, type2)
```

With the output of:

```
   type1               type2               n.times
   <chr>               <chr>                 <int>
 1 Type_Scraps         Type_EconMix              3
 2 Type_SugarWater     Type_EconMix              3
 3 Type_SunflowerSeeds Type_EconMix              3
 4 Type_SunflowerSeeds Type_Scraps               3
 5 Type_SunflowerSeeds Type_SugarWater           3
 6 Type_Scraps         Type_Nuts                 2
 7 Type_SugarWater     Type_Suet                 2
 8 Type_SunflowerSeeds Type_Suet                 2
 9 Type_FruitOrJams    Type_EconMix              1
10 Type_Nuts           Type_EconMix              1
11 Type_Nuts           Type_FruitOrJams          1
12 Type_Scraps         Type_FruitOrJams          1
13 Type_Suet           Type_EconMix              1
14 Type_Suet           Type_Scraps               1
15 Type_SugarWater     Type_PremMix              1
16 Type_SugarWater     Type_Scraps               1
17 Type_SunflowerSeeds Type_Nuts                 1
```

To extend this and do a three-way combination count, you can follow this code.
I've also added some additional comments to walk through what's going on:

```
# create a baseline data.frame with food.id and every food type that it matches
food.type.long.df <- test.df %>%
  mutate(food.id = 1:n()) %>%
  gather(key = "type1", value = "val", -food.id, -total) %>%
  filter(val == 1) %>%
  select(food.id, type1) %>%
  arrange(food.id)

# join the baseline data.frame to itself to see all possible combinations of food types
# note: this includes repeated types like type1=Type_Corn and type2=Type_Corn
# this also includes rows where the types are simply flip-flopped
# ex. Row 2 is type1=Type_SunflowerSeeds and type2=Type_EconMix
#     but Row 6 is type1=Type_EconMix and type2=Type_SunflowerSeeds - we don't want to count these combinations twice
food.2types.df <- food.type.long.df %>%
  left_join(
    select(food.type.long.df, food.id, type2 = type1),
    by = "food.id"
  ) %>%
  arrange(food.id)

# let's add the third type as well; as before, the same issues are in this df,
# but we'll fix the duplicates and flip-flops later
food.3types.df <- food.2types.df %>%
  left_join(
    select(food.type.long.df, food.id, type3 = type1),
    by = "food.id"
  ) %>%
  arrange(food.id)

food.3types.df.fixed <- food.3types.df %>%
  distinct() %>%
  mutate(type1.n = as.numeric(factor(type1)),      # assign each type1 a number (in alphabetical order)
         type2.n = as.numeric(factor(type2)),      # assign each type2 a number (in alphabetical order)
         type3.n = as.numeric(factor(type3))) %>%  # assign each type3 a number (in alphabetical order)
  filter(type1 > type2) %>%  # to remove duplicates and flip-flopped rows for types 1 and 2, use a strict inequality
  filter(type2 > type3)      # to remove duplicates and flip-flopped rows for types 2 and 3, use a strict inequality

food.3type.combination.count <- food.3types.df.fixed %>%
  group_by(type1, type2, type3) %>%  # group by all three types you want to count
  summarise(
    n.times = n()
  ) %>%
  ungroup() %>%
  arrange(desc(n.times), type1, type2, type3)
```

With the output:

```
   type1               type2            type3            n.times
   <chr>               <chr>            <chr>              <int>
 1 Type_SunflowerSeeds Type_Scraps      Type_EconMix           2
 2 Type_SunflowerSeeds Type_SugarWater  Type_EconMix           2
 3 Type_SunflowerSeeds Type_SugarWater  Type_Suet              2
 4 Type_Nuts           Type_FruitOrJams Type_EconMix           1
 5 Type_Scraps         Type_FruitOrJams Type_EconMix           1
 6 Type_Scraps         Type_Nuts        Type_EconMix           1
 7 Type_Scraps         Type_Nuts        Type_FruitOrJams       1
 8 Type_Suet           Type_Scraps      Type_EconMix           1
 9 Type_SugarWater     Type_Scraps      Type_EconMix           1
10 Type_SugarWater     Type_Suet        Type_EconMix           1
11 Type_SugarWater     Type_Suet        Type_Scraps            1
12 Type_SunflowerSeeds Type_Scraps      Type_Nuts              1
13 Type_SunflowerSeeds Type_Suet        Type_EconMix           1
14 Type_SunflowerSeeds Type_Suet        Type_Scraps            1
15 Type_SunflowerSeeds Type_SugarWater  Type_Scraps            1
```
You can use arules, which is geared toward this kind of analysis. You can read more about some of its uses [here](https://cran.r-project.org/web/packages/arules/vignettes/arules.pdf)

So this is your data:

```
df = structure(list(Type_SunflowerSeeds = c(1L, 1L, 1L, 0L, 0L),
    Type_SafflowerSeeds = c(0L, 0L, 0L, 0L, 0L),
    Type_Nyjer = c(0L, 0L, 0L, 0L, 0L),
    Type_EconMix = c(1L, 1L, 0L, 1L, 1L),
    Type_PremMix = c(0L, 0L, 0L, 0L, 0L),
    Type_Grains = c(0L, 0L, 0L, 0L, 0L),
    Type_Nuts = c(0L, 0L, 1L, 1L, 0L),
    Type_Suet = c(1L, 0L, 0L, 0L, 0L),
    Type_SugarWater = c(1L, 0L, 0L, 0L, 1L),
    Type_FruitOrJams = c(0L, 0L, 0L, 1L, 0L),
    Type_Mealworms = c(0L, 0L, 0L, 0L, 0L),
    Type_Corn = c(0L, 0L, 0L, 0L, 0L),
    Type_BarkOrPeanutButter = c(0L, 0L, 0L, 0L, 0L),
    Type_Scraps = c(1L, 1L, 1L, 1L, 0L),
    Type_Bread = c(0L, 0L, 0L, 0L, 0L),
    Type_Other = c(0L, 0L, 0L, 0L, 0L),
    total = c(5, 3, 3, 4, 2)),
    row.names = c(NA, 5L), class = "data.frame")
```

We make it a matrix and convert it to a `transactions` object; I omit the last column because you don't need total:

```
library(arules)
m = as(as.matrix(df[,-ncol(df)]),"transactions")
summary(m) # gives you a lot of information about this data
# now we get a co-occurrence matrix
counts = crossTable(m)
```

To get the data frame you described, you need `dplyr` and `tidyr`:

```
# convert to data.frame
counts[upper.tri(counts)] = NA
diag(counts) = NA

data.frame(counts) %>%
  # add rownames as item1
  tibble::rownames_to_column("item1") %>%
  # make it long format, like you wanted
  pivot_longer(-item1, names_to = "item2") %>%
  # remove rows where item1 == item2
  filter(!is.na(value)) %>%
  # sort
  arrange(desc(value))

# A tibble: 120 x 3
   item1           item2               value
   <chr>           <chr>               <int>
 1 Type_Scraps     Type_SunflowerSeeds     3
 2 Type_Scraps     Type_EconMix            3
 3 Type_EconMix    Type_SunflowerSeeds     2
 4 Type_SugarWater Type_EconMix            2
```

The above can be simplified by using `apriori` in arules:

```
# number of combinations
N = 2
# create apriori object
rules = apriori(m, parameter = list(maxlen = N, minlen = N, conf = 0.01, support = 0.01))
gi <- generatingItemsets(rules)
d <- which(duplicated(gi))
rules = sort(rules[-d])

# output results
data.frame(
  lhs = labels(lhs(rules)),
  rhs = labels(rhs(rules)),
  count = quality(rules)$count)

                    lhs            rhs count
1 {Type_SunflowerSeeds}  {Type_Scraps}     3
2        {Type_EconMix}  {Type_Scraps}     3
3     {Type_SugarWater} {Type_EconMix}     2
4           {Type_Nuts}  {Type_Scraps}     2
5 {Type_SunflowerSeeds} {Type_EconMix}     2
6    {Type_FruitOrJams}    {Type_Nuts}     1
```

For occurrence of 3, simply change N above to 3.
55,483,057
I have the following task in one of my ansible playbooks:

```
- name: Generate vault token
  uri:
    url: "{{vault_address}}/v1/auth/github/login"
    method: POST
    body: "{ \"token\": \"{{ token }}\" }"
    validate_certs: no
    body_format: json
  register: vault_token

- name: debug vault
  debug:
    msg: "{{vault_address}}/v1/{{vault_path}}/db2w-flex/{{region_name}}"

- name: Create secret in vault
  uri:
    url: "{{vault_address}}/v1/{{vault_path}}/db2w-flex/{{region_name}}"
    method: POST
    body: '{ "secret_access_key": "{{ (new_access_key.stdout |from_json).AccessKey.SecretAccessKey }}", "access_key_id": "{{ (new_access_key.stdout |from_json).AccessKey.AccessKeyId }}" }'
    validate_certs: no
    status_code: 204
    headers:
      X-VAULT-TOKEN: '{{ vault_token.json.auth.client_token }}'
    body_format: json
```

This task keeps failing with:

```
fatal: [localhost]: FAILED! => {"cache_control": "no-store", "changed": false, "content": "{\"errors\":[\"missing client token\"]}\n", "content_length": "36", "content_type": "application/json", "date": "Tue, 02 Apr 2019 20:16:03 GMT", "failed": true, "json": {"errors": ["missing client token"]}, "msg": "Status code was not [204]", "redirected": false, "status": 400}
```

The same task works fine on another host (a kubernetes POD). I am running this on a new system (a docker container). The client token is generated properly. I am able to do a POST via curl, so there are no network issues on the container. Is there any way to debug what headers are being passed, what URL is being hit, etc.? I don't know what is missing. I am not using the vault CLI; it's all HTTP-based. After adding -vvvv here is the log:

```
TASK [Create secret in vault] **************************************************
task path: /deployment/updateVault.yml:16
ESTABLISH LOCAL CONNECTION FOR USER: root
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210 )" )
127.0.0.1 PUT /tmp/tmpDgx1F_ TO /root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/uri
127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/uri; rm -rf "/root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/" > /dev/null 2>&1
fatal: [localhost]: FAILED!
=> {"cache_control": "no-store", "changed": false, "content": "{\"errors\":[\"missing client token\"]}\n", "content_length": "36", "content_type": "application/json", "date": "Tue, 02 Apr 2019 21:50:34 GMT", "failed": true, "invocation": {"module_args": {"backup": null, "body": {"access_key_id": "xxxxxxxxxxxx", "secret_access_key": "5ovB684peTr8YpnNMQBn+xxxxxxxxxxx+"}, "body_format": "json", "content": null, "creates": null, "delimiter": null, "dest": null, "directory_mode": null, "follow": false, "follow_redirects": "safe", "force": null, "force_basic_auth": false, "group": null, "headers": {"X-VAULT-TOKEN": "eee90f1c-3c9e-edaf-32e6-b6cexxxxx"}, "method": "POST", "mode": null, "owner": null, "password": null, "regexp": null, "remote_src": null, "removes": null, "return_content": false, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "status_code": ["204"], "timeout": 30, "url": "https://vserv-us.sos.ibm.com:8200/v1/generic/user/akanksha-jain/db2w-flex/us", "user": null, "validate_certs": false}, "module_name": "uri"}, "json": {"errors": ["missing client token"]}, "msg": "Status code was not [204]", "redirected": false, "status": 400} ``` I tried running a curl with all the above details from the same pod and it went fine.
2019/04/02
[ "https://Stackoverflow.com/questions/55483057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5996587/" ]
Add `-vvvv` to your command line to debug. As you specified `body_format: json`, you can simplify your `body` part: ``` - name: Generate vault token uri: url: "{{vault_address}}/v1/auth/github/login" method: POST body: token: mytoken validate_certs: no body_format: json ```
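To rule out the uri module itself, the same two calls can be reproduced directly with Python requests; the address, path, and token values below are placeholders mirroring the playbook's variables:

```python
import requests

vault_address = "https://vault.example.com:8200"  # placeholder for {{vault_address}}

# 1. exchange the GitHub token for a Vault client token
login = requests.post(
    vault_address + "/v1/auth/github/login",
    json={"token": "mytoken"},  # placeholder GitHub token
    verify=False,
)
client_token = login.json()["auth"]["client_token"]

# 2. write the secret, authenticating with the client token
write = requests.post(
    vault_address + "/v1/generic/user/example/db2w-flex/us",  # placeholder path
    json={"secret_access_key": "...", "access_key_id": "..."},
    headers={"X-Vault-Token": client_token},
    verify=False,
)
print(write.status_code)  # a successful write returns 204
```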
I was able to get past this issue with Ansible version `2.7.9`; I was on `2.0.0.2` before.
11,639,577
I installed oauth2 by just downloading the tar.gz package and doing `python setup.py install`. However I'm getting this error

```
bash-3.2$ python
Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import oauth2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named oauth2
>>>
```

The path to oauth2 is in PYTHONPATH (so that shouldn't be the issue) as I added this line to ~/.bashrc:

```
PYTHOHPATH=$PYTHONPATH:/Users/me/Downloads/oauth2-1.5.211/
```

However, when I do this:

```
bash-3.2$ cd /System/Library/Frameworks/Python.framework/Versions/2.7/
bash-3.2$ ls
Extras  Headers  Mac  Python  Resources  _CodeSignature  bin  include  lib
bash-3.2$ Python
Python 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13)
[GCC 4.0.1 (Apple Inc. build 5494)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import oauth2
>>>
```

it works just fine. Any idea how I should install oauth2 to avoid the ImportError from `python`?

P.S.: this is the symlink for the `python` command

```
python -> ../../System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7
```
2012/07/24
[ "https://Stackoverflow.com/questions/11639577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/730403/" ]
I don't have an answer, but I have some general suggestions: Run `python setup.py install` with the same python that you intend to use it from (in your case one is capitalised, the other is not). I always `export` my bashrc variables to ensure they are global, but I am not sure that is your issue here. When running scripts in the pwd, always run them with `./`. In your case run python as `./Python` to have confidence that you are running the executable you think you are running. Check your spelling of PYTHONPATH. If you think you have it right, do `import sys; print('\n'.join(sys.path))` from within your python session and ensure that the appropriate directory is there
It looks like you have two different versions of python installed, and one of them you launched using `Python` as opposed to `python`. Since your second example worked, it looks like you've installed oauth2 using the `Python` one.
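A quick check worth running from both prompts to confirm which interpreter and search path each command actually uses (standard library only):

```python
import sys

print(sys.executable)   # the path of the interpreter you are running
print(sys.version)      # its version string
for p in sys.path:      # the directories it searches for modules like oauth2
    print(p)
```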
34,579,327
I am receiving this error in Python 3.5.1. > > json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) > > > Here is my code: ``` import json import urllib.request connection = urllib.request.urlopen('http://python-data.dr-chuck.net/comments_220996.json') js = connection.read() print(js) info = json.loads(str(js)) ``` [![image](https://i.stack.imgur.com/sfOl3.png)](https://i.stack.imgur.com/sfOl3.png)
2016/01/03
[ "https://Stackoverflow.com/questions/34579327", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4679487/" ]
If you look at the output you receive from `print()` and also in your Traceback, you'll see the value you get back is not a string, it's a bytes object (prefixed by `b`): ```none b'{\n "note":"This file ..... ``` If you fetch the URL using a tool such as `curl -v`, you will see that the content type is ```none Content-Type: application/json; charset=utf-8 ``` So it's JSON, encoded as UTF-8, and Python is considering it a byte stream, not a simple string. In order to parse this, you need to convert it into a string first. Change the last line of code to this: ```py info = json.loads(js.decode("utf-8")) ```
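As an aside, and only on Python 3.6 or later (not the 3.5.1 used in the question): `json.loads` accepts `bytes` directly and detects the encoding itself, and `json.load` can consume the file-like response, so either of these sketches would also work there:

```python
import json
import urllib.request

connection = urllib.request.urlopen('http://python-data.dr-chuck.net/comments_220996.json')

# Python 3.6+: json.loads accepts bytes and sniffs the UTF-8/16/32 encoding
info = json.loads(connection.read())

# or, equivalently, let json.load read from the response object:
# info = json.load(connection)
```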
In my case, some characters like `, : " ' {} []` may corrupt the JSON format, so wrap the parse in a `try: json.loads(...) ... except` block to check your input.
25,937,443
In Python, I have three lists containing x and y coordinates. Each list contains 128 points. How can I find the closest three points in an efficient way? This is my working Python code, but it isn't efficient enough:

```
def findclosest(c1, c2, c3):
    mina = 999999999
    for i in c1:
        for j in c2:
            for k in c3:
                # calculate sum of distances between points
                d = xy3dist(i,j,k)
                if d < mina:
                    mina = d

def xy3dist(a, b, c):
    l1 = math.sqrt((a[0]-b[0]) ** 2 + (a[1]-b[1]) ** 2 )
    l2 = math.sqrt((b[0]-c[0]) ** 2 + (b[1]-c[1]) ** 2 )
    l3 = math.sqrt((a[0]-c[0]) ** 2 + (a[1]-c[1]) ** 2 )
    return l1+l2+l3
```

Any idea how this can be done using numpy?
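Since the question asks for a numpy version, here is a sketch that vectorizes the triple loop with broadcasting, assuming each list can be viewed as an (N, 2) array of points; the intermediate sum tensor holds N³ floats (about 16 MB for N = 128), which is fine at this size:

```python
import numpy as np

def findclosest(c1, c2, c3):
    a, b, c = (np.asarray(x, dtype=float) for x in (c1, c2, c3))
    # pairwise distance matrices via broadcasting
    d_ab = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # d_ab[i, j] = |a_i - b_j|
    d_bc = np.linalg.norm(b[:, None, :] - c[None, :, :], axis=-1)  # d_bc[j, k] = |b_j - c_k|
    d_ac = np.linalg.norm(a[:, None, :] - c[None, :, :], axis=-1)  # d_ac[i, k] = |a_i - c_k|
    # total[i, j, k] = d_ab[i, j] + d_bc[j, k] + d_ac[i, k]
    total = d_ab[:, :, None] + d_bc[None, :, :] + d_ac[:, None, :]
    i, j, k = np.unravel_index(np.argmin(total), total.shape)
    return total[i, j, k], (i, j, k)
```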
2014/09/19
[ "https://Stackoverflow.com/questions/25937443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4058928/" ]
As written, this is problematic: you are trying to write to a vector for which you have not yet allocated memory.

Option 1 - Resize your vectors ahead of time

```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
matrix.resize(4); // resize top level vector
for (int i = 0; i < 4; i++)
{
    matrix[i].resize(4); // resize each of the contained vectors
    for (int j = 0; j < 4; j++)
    {
        matrix[i][j] = 5;
    }
}
```

Option 2 - Size your vector when you declare it

```
vector<vector<int>> matrix(4, vector<int>(4));
```

Option 3 - Use `push_back` to resize the vector as needed.

```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
for (int i = 0; i < 4; i++)
{
    vector<int> temp;
    for (int j = 0; j < 4; j++)
    {
        temp.push_back(5);
    }
    matrix.push_back(temp);
}
```
You have not allocated any space for your 2d vector, so in your current code you are trying to access memory that does not belong to your program's memory space. This results in a segmentation fault. Try:

```
vector<vector<int> > matrix(4, vector<int>(4));
```

If you want to give all elements the same value, you can try:

```
vector<vector<int> > matrix(4, vector<int>(4,5)); // all values are now 5
```
25,937,443
In Python, I have three lists containing x and y coordinates. Each list contains 128 points. How can I find the closest three points in an efficient way? This is my working Python code, but it isn't efficient enough:

```
def findclosest(c1, c2, c3):
    mina = 999999999
    for i in c1:
        for j in c2:
            for k in c3:
                # calculate sum of distances between points
                d = xy3dist(i,j,k)
                if d < mina:
                    mina = d

def xy3dist(a, b, c):
    l1 = math.sqrt((a[0]-b[0]) ** 2 + (a[1]-b[1]) ** 2 )
    l2 = math.sqrt((b[0]-c[0]) ** 2 + (b[1]-c[1]) ** 2 )
    l3 = math.sqrt((a[0]-c[0]) ** 2 + (a[1]-c[1]) ** 2 )
    return l1+l2+l3
```

Any idea how this can be done using numpy?
2014/09/19
[ "https://Stackoverflow.com/questions/25937443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4058928/" ]
As written, this is problematic: you are trying to write to a vector for which you have not yet allocated memory.

Option 1 - Resize your vectors ahead of time

```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
matrix.resize(4); // resize top level vector
for (int i = 0; i < 4; i++)
{
    matrix[i].resize(4); // resize each of the contained vectors
    for (int j = 0; j < 4; j++)
    {
        matrix[i][j] = 5;
    }
}
```

Option 2 - Size your vector when you declare it

```
vector<vector<int>> matrix(4, vector<int>(4));
```

Option 3 - Use `push_back` to resize the vector as needed.

```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
for (int i = 0; i < 4; i++)
{
    vector<int> temp;
    for (int j = 0; j < 4; j++)
    {
        temp.push_back(5);
    }
    matrix.push_back(temp);
}
```
``` vector<int> v2d1(3, 7); vector<vector<int> > v2d2(4, v2d1); for (int i = 0; i < v2d2.size(); i++) { for(int j=0; j <v2d2[i].size(); j++) { cout<<v2d2[i][j]<<" "; } cout << endl; } ```
25,937,443
In Python, I have three lists containing x and y coordinates. Each list contains 128 points. How can I find the closest three points in an efficient way? This is my working Python code, but it isn't efficient enough:

```
def findclosest(c1, c2, c3):
    mina = 999999999
    for i in c1:
        for j in c2:
            for k in c3:
                # calculate sum of distances between points
                d = xy3dist(i,j,k)
                if d < mina:
                    mina = d

def xy3dist(a, b, c):
    l1 = math.sqrt((a[0]-b[0]) ** 2 + (a[1]-b[1]) ** 2 )
    l2 = math.sqrt((b[0]-c[0]) ** 2 + (b[1]-c[1]) ** 2 )
    l3 = math.sqrt((a[0]-c[0]) ** 2 + (a[1]-c[1]) ** 2 )
    return l1+l2+l3
```

Any idea how this can be done using numpy?
2014/09/19
[ "https://Stackoverflow.com/questions/25937443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4058928/" ]
You have not allocated any space for your 2d vector, so in your current code you are trying to access memory that does not belong to your program's memory space. This results in a segmentation fault. Try:

```
vector<vector<int> > matrix(4, vector<int>(4));
```

If you want to give all elements the same value, you can try:

```
vector<vector<int> > matrix(4, vector<int>(4,5)); // all values are now 5
```
``` vector<int> v2d1(3, 7); vector<vector<int> > v2d2(4, v2d1); for (int i = 0; i < v2d2.size(); i++) { for(int j=0; j <v2d2[i].size(); j++) { cout<<v2d2[i][j]<<" "; } cout << endl; } ```
14,510,286
I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins is unknown before the application is started, but are only loaded once at startup. During my research into the best way to tackle this I've come up with two common solutions. **Option 1 - Roll your own using imp, pkgutil, etc.** See for instance, [this answer](https://stackoverflow.com/questions/2267984/dynamic-class-loading-in-python-2-6-runtimewarning-parent-module-plugins-not) or [this one](https://stackoverflow.com/questions/301134/dynamic-module-import-in-python). **Option 2 - Use a plugin manager library** Randomly picking a couple * [straight.plugin](https://github.com/ironfroggy/straight.plugin) * [yapsy](http://yapsy.sourceforge.net/) * [this approach](http://martyalchin.com/2008/jan/10/simple-plugin-framework/) My question is - on the proviso that the application must be restarted in order to load new plugins - is there any benefit of the above methods over something inspired from [this SO answer](https://stackoverflow.com/questions/1796180/python-get-list-of-all-classes-within-current-module) and [this one](https://stackoverflow.com/a/8093671/233608) such as: ``` import inspect import sys import my_plugins def predicate(c): # filter to classes return inspect.isclass(c) def load_plugins(): for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate): obj.register_signals() ``` Are there any disadvantages to this approach compared to the ones above? (other than all the plugins must be in the same file) Thanks! **EDIT** Comments request further information... the only additional thing I can think to add is that the plugins use the [blinker](http://discorporate.us/projects/Blinker/) library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
2013/01/24
[ "https://Stackoverflow.com/questions/14510286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/233608/" ]
The [metaclass approach](http://martyalchin.com/2008/jan/10/simple-plugin-framework/) is useful for this issue in Python < 3.6 (see @quasoft's answer for Python 3.6+). It is very simple and acts automatically on any imported module. In addition, complex logic can be applied to plugin registration with very little effort. The [metaclass](https://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python) approach works like the following:

1) A custom `PluginMount` metaclass is defined which maintains a list of all plugins

2) A `Plugin` class is defined which sets `PluginMount` as its metaclass

3) When an object deriving from `Plugin` - for instance `MyPlugin` - is imported, it triggers the `__init__` method on the metaclass. This registers the plugin and performs any application specific logic and event subscription.

Alternatively, if you put the `PluginMount.__init__` logic in `PluginMount.__new__`, it is called whenever a new instance of a `Plugin` derived class is created.

```
class PluginMount(type):
    """
    A plugin mount point derived from:
        http://martyalchin.com/2008/jan/10/simple-plugin-framework/
    Acts as a metaclass which creates anything inheriting from Plugin
    """

    def __init__(cls, name, bases, attrs):
        """Called when a Plugin derived class is imported"""

        if not hasattr(cls, 'plugins'):
            # Called when the metaclass is first instantiated
            cls.plugins = []
        else:
            # Called when a plugin class is imported
            cls.register_plugin(cls)

    def register_plugin(cls, plugin):
        """Add the plugin to the plugin list and perform any registration logic"""

        # create a plugin instance and store it
        # optionally you could just store the plugin class and lazily instantiate
        instance = plugin()

        # save the plugin reference
        cls.plugins.append(instance)

        # apply plugin logic - in this case connect the plugin to blinker signals
        # this must be defined in the derived class
        instance.register_signals()
```

Then a base plugin class which looks like:

```
class Plugin(object):
    """A plugin which must provide a register_signals() method"""
    __metaclass__ = PluginMount
```

Finally, an actual plugin class would look like the following:

```
class MyPlugin(Plugin):
    def register_signals(self):
        print "Class created and registering signals"

    def other_plugin_stuff(self):
        print "I can do other plugin stuff"
```

Plugins can be accessed from any python module that has imported `Plugin`:

```
for plugin in Plugin.plugins:
    plugin.other_plugin_stuff()
```

See [the full working example](https://gist.github.com/will-hart/5899567)
The approach from will-hart was the most useful one to me! Since I needed more control, I wrapped the Plugin base class in a function like:

```
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = not isinstance(cls, tuple) and (cls,) or cls

    class_dict = dict(
        _models=None,
        session=None
    )

    class_dict['__iter__'] = iter_func

    return metaclass(name, bases, class_dict)
```

and then:

```
from plugin import get_plugin_base

Plugin = get_plugin_base()
```

This allows adding additional base classes or switching to another metaclass.
14,510,286
I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins is unknown before the application is started, but are only loaded once at startup. During my research into the best way to tackle this I've come up with two common solutions. **Option 1 - Roll your own using imp, pkgutil, etc.** See for instance, [this answer](https://stackoverflow.com/questions/2267984/dynamic-class-loading-in-python-2-6-runtimewarning-parent-module-plugins-not) or [this one](https://stackoverflow.com/questions/301134/dynamic-module-import-in-python). **Option 2 - Use a plugin manager library** Randomly picking a couple * [straight.plugin](https://github.com/ironfroggy/straight.plugin) * [yapsy](http://yapsy.sourceforge.net/) * [this approach](http://martyalchin.com/2008/jan/10/simple-plugin-framework/) My question is - on the proviso that the application must be restarted in order to load new plugins - is there any benefit of the above methods over something inspired from [this SO answer](https://stackoverflow.com/questions/1796180/python-get-list-of-all-classes-within-current-module) and [this one](https://stackoverflow.com/a/8093671/233608) such as: ``` import inspect import sys import my_plugins def predicate(c): # filter to classes return inspect.isclass(c) def load_plugins(): for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate): obj.register_signals() ``` Are there any disadvantages to this approach compared to the ones above? (other than all the plugins must be in the same file) Thanks! **EDIT** Comments request further information... the only additional thing I can think to add is that the plugins use the [blinker](http://discorporate.us/projects/Blinker/) library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
2013/01/24
[ "https://Stackoverflow.com/questions/14510286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/233608/" ]
Since [Python 3.6](https://docs.python.org/3/whatsnew/3.6.html) a new class method [`__init_subclass__`](https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__) is added, that is called on a base class, whenever a new subclass is created. This method can further simplify the solution offered by will-hart above, by removing the metaclass. The [`__init_subclass__`](https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__) method was introduced with [PEP 487: Simpler customization of class creation](https://www.python.org/dev/peps/pep-0487/). The PEP comes with a minimal example for a plugin architecture: > > It is now possible to customize subclass creation without using a > metaclass. The new [`__init_subclass__`](https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__) classmethod will be called on > the base class whenever a new subclass is created: > > > > ``` > class PluginBase: > subclasses = [] > > def __init_subclass__(cls, **kwargs): > super().__init_subclass__(**kwargs) > cls.subclasses.append(cls) > > class Plugin1(PluginBase): > pass > > class Plugin2(PluginBase): > pass > > ``` > > The PEP example above stores references to the classes in the `Plugin.plugins` field. If you want to store instances of the plugin classes, you can use a structure like this: ``` class Plugin: """Base class for all plugins. Singleton instances of subclasses are created automatically and stored in Plugin.plugins class field.""" plugins = [] def __init_subclass__(cls, **kwargs): super().__init_subclass__(**kwargs) cls.plugins.append(cls()) class MyPlugin1(Plugin): def __init__(self): print("MyPlugin1 instance created") def do_work(self): print("Do something") class MyPlugin2(Plugin): def __init__(self): print("MyPlugin2 instance created") def do_work(self): print("Do something else") for plugin in Plugin.plugins: plugin.do_work() ``` which outputs: ``` MyPlugin1 instance created MyPlugin2 instance created Do something Do something else ```
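The registration side is covered above, but the question's Option 1 (discovery) still applies: something has to import the plugin modules so the subclasses get created. A minimal sketch using only the standard library, assuming the plugins live as modules inside a `my_plugins` package (the package name is illustrative):

```python
import importlib
import pkgutil

import my_plugins  # hypothetical package directory containing plugin modules

def discover_plugins():
    """Import every module in my_plugins so their Plugin subclasses register."""
    for _finder, name, _ispkg in pkgutil.iter_modules(my_plugins.__path__):
        importlib.import_module(my_plugins.__name__ + "." + name)
```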
The approach from will-hart was the most useful one to me! Since I needed more control, I wrapped the Plugin base class in a function like:

```
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = not isinstance(cls, tuple) and (cls,) or cls

    class_dict = dict(
        _models=None,
        session=None
    )

    class_dict['__iter__'] = iter_func

    return metaclass(name, bases, class_dict)
```

and then:

```
from plugin import get_plugin_base

Plugin = get_plugin_base()
```

This allows adding additional base classes or switching to another metaclass.
1,933,217
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings. I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request and response and log, modify, or filter at will ... however these either fail to proxy everything needed or crash after 20 minutes of use. On the other hand I suspect that Squid and Apache are quite robust and stable, however for the life of me I can't determine how I can develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl?? could allow interesting dynamic proxies, but it's hard to know where to begin and whether it's even possible. Please advise.
2009/12/19
[ "https://Stackoverflow.com/questions/1933217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143725/" ]
Squid and Apache both have mechanisms to call external scripts for allow/deny decisions per-request. This allows you to use either for their proxy engines, but call your external script per request for processing of arbitrary complexity. Your code only has to manage the business logic, not the heavy lifting.

In Apache, I've never used `mod_proxy` in this way, but I have used `mod_rewrite`. mod\_rewrite also allows you to proxy requests. The [`RewriteMap`](http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html#rewritemap) directive allows you to pass the decision to an external script:

> MapType: prg, MapSource: Unix filesystem path to valid regular file
>
> Here the source is a program, not a map file. To create it you can use a language of your choice, but the result has to be an executable program (either object-code or a script with the magic cookie trick '#!/path/to/interpreter' as the first line).
>
> This program is started once, when the Apache server is started, and then communicates with the rewriting engine via its stdin and stdout file-handles. For each map-function lookup it will receive the key to lookup as a newline-terminated string on stdin. It then has to give back the looked-up value as a newline-terminated string on stdout or the four-character string ``NULL'' if it fails (i.e., there is no corresponding value for the given key).

With Squid, you can get similar functionality via the [`external_acl_type`](http://www.visolve.com/squid/squid30/externalsupport.php#external_acl_type) directive:

> This tag defines how the external acl classes using a helper program should look up the status.

g'luck!
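To make the helper protocol concrete, here is a sketch of what a squid `external_acl_type` helper could look like in Python: one lookup per line on stdin, one `OK`/`ERR` verdict per line on stdout. The quota check itself is a placeholder; only the line protocol is the point:

```python
#!/usr/bin/env python
import sys

def over_quota(key):
    """Placeholder: look up the user/IP in your quota store."""
    return False

for line in sys.stdin:
    key = line.strip()
    sys.stdout.write("ERR\n" if over_quota(key) else "OK\n")
    sys.stdout.flush()  # squid blocks waiting for each answer, so don't buffer
```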
If you looking for a Perl solution then take a look at [`HTTP::Proxy`](http://search.cpan.org/dist/HTTP-Proxy/) Not sure of any mod\_perl solutions though. [CPAN](http://search.cpan.org) does bring up [`Apache::Proxy`](http://search.cpan.org/dist/Apache-Proxy/) and Googling brings up [MyProxy](http://sourceforge.net/projects/myproxy/). However note, both of these are a bit old so YMMV but you may find them a useful leg up.
1,933,217
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings. I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request and response and log, modify, or filter at will ... however these either fail to proxy everything needed or crash after 20 minutes of use. On the other hand I suspect that Squid and Apache are quite robust and stable, however for the life of me I can't determine how I can develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl?? could allow interesting dynamic proxies, but it's hard to know where to begin and whether it's even possible. Please advise.
2009/12/19
[ "https://Stackoverflow.com/questions/1933217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143725/" ]
If you looking for a Perl solution then take a look at [`HTTP::Proxy`](http://search.cpan.org/dist/HTTP-Proxy/) Not sure of any mod\_perl solutions though. [CPAN](http://search.cpan.org) does bring up [`Apache::Proxy`](http://search.cpan.org/dist/Apache-Proxy/) and Googling brings up [MyProxy](http://sourceforge.net/projects/myproxy/). However note, both of these are a bit old so YMMV but you may find them a useful leg up.
I'd use [squid](http://www.squid-cache.org/), which can execute other programs to change the requests on the fly.
1,933,217
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings. I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request and response and log, modify, or filter at will ... however these either fail to proxy everything needed or crash after 20 minutes of use. On the other hand I suspect that Squid and Apache are quite robust and stable, however for the life of me I can't determine how I can develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl?? could allow interesting dynamic proxies, but it's hard to know where to begin and whether it's even possible. Please advise.
2009/12/19
[ "https://Stackoverflow.com/questions/1933217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143725/" ]
Squid and Apache both have mechanisms to call external scripts for allow/deny decisions per-request. This allows you to use either for their proxy engines, but call your external script per request for processing of arbitrary complexity. Your code only has to manage the business logic, not the heavy lifting.

In Apache, I've never used `mod_proxy` in this way, but I have used `mod_rewrite`. mod\_rewrite also allows you to proxy requests. The [`RewriteMap`](http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html#rewritemap) directive allows you to pass the decision to an external script:

> MapType: prg, MapSource: Unix filesystem path to valid regular file
>
> Here the source is a program, not a map file. To create it you can use a language of your choice, but the result has to be an executable program (either object-code or a script with the magic cookie trick '#!/path/to/interpreter' as the first line).
>
> This program is started once, when the Apache server is started, and then communicates with the rewriting engine via its stdin and stdout file-handles. For each map-function lookup it will receive the key to lookup as a newline-terminated string on stdin. It then has to give back the looked-up value as a newline-terminated string on stdout or the four-character string ``NULL'' if it fails (i.e., there is no corresponding value for the given key).

With Squid, you can get similar functionality via the [`external_acl_type`](http://www.visolve.com/squid/squid30/externalsupport.php#external_acl_type) directive:

> This tag defines how the external acl classes using a helper program should look up the status.

g'luck!
I've been working on a HTTP library in python, written with proxy servers specifically in mind as a use case. It isn't very mature at this point (certainly needs more testing, and unit tests), but it's complete enough that I find it useful. I don't know if it would meet any of your needs or not. The library is called httpmessage, the google-code site is found [here](http://code.google.com/p/httpmessage/). There is an example of writing a proxy server on the [examples page](http://docs.httpmessage.googlecode.com/hg/examples.html). I'm happy to receive feedback and/or bug fixes.
1,933,217
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings. I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request and response and log, modify, or filter at will ... however these either fail to proxy everything needed or crash after 20 minutes of use. On the other hand I suspect that Squid and Apache are quite robust and stable, however for the life of me I can't determine how I can develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl?? could allow interesting dynamic proxies, but it's hard to know where to begin and whether it's even possible. Please advise.
2009/12/19
[ "https://Stackoverflow.com/questions/1933217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143725/" ]
I've been working on a HTTP library in python, written with proxy servers specifically in mind as a use case. It isn't very mature at this point (certainly needs more testing, and unit tests), but it's complete enough that I find it useful. I don't know if it would meet any of your needs or not. The library is called httpmessage, the google-code site is found [here](http://code.google.com/p/httpmessage/). There is an example of writing a proxy server on the [examples page](http://docs.httpmessage.googlecode.com/hg/examples.html). I'm happy to receive feedback and/or bug fixes.
I'd use [squid](http://www.squid-cache.org/), which can execute other programs to change the requests on the fly.
1,933,217
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings. I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request and response and log, modify, or filter at will ... however these either fail to proxy everything needed or crash after 20 minutes of use. On the other hand I suspect that Squid and Apache are quite robust and stable, however for the life of me I can't determine how I can develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl?? could allow interesting dynamic proxies, but it's hard to know where to begin and whether it's even possible. Please advise.
2009/12/19
[ "https://Stackoverflow.com/questions/1933217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143725/" ]
Squid and Apache both have mechanisms to call external scripts for allow/deny decisions per-request. This allows you to use either for their proxy engines, but call your external script per request for processing of arbitrary complexity. Your code only has to manage the business logic, not the heavy lifting.

In Apache, I've never used `mod_proxy` in this way, but I have used `mod_rewrite`. mod\_rewrite also allows you to proxy requests. The [`RewriteMap`](http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html#rewritemap) directive allows you to pass the decision to an external script:

> MapType: prg, MapSource: Unix filesystem path to valid regular file
>
> Here the source is a program, not a map file. To create it you can use a language of your choice, but the result has to be an executable program (either object-code or a script with the magic cookie trick '#!/path/to/interpreter' as the first line).
>
> This program is started once, when the Apache server is started, and then communicates with the rewriting engine via its stdin and stdout file-handles. For each map-function lookup it will receive the key to lookup as a newline-terminated string on stdin. It then has to give back the looked-up value as a newline-terminated string on stdout or the four-character string ``NULL'' if it fails (i.e., there is no corresponding value for the given key).

With Squid, you can get similar functionality via the [`external_acl_type`](http://www.visolve.com/squid/squid30/externalsupport.php#external_acl_type) directive:

> This tag defines how the external acl classes using a helper program should look up the status.

g'luck!
I'd use [squid](http://www.squid-cache.org/), which can execute other programs to change the requests on the fly.
29,397,839
I am SSHed into a remote machine and I do not have rights to download python packages, but I want to use 3rd party applications for my project. I found `cx_freeze` but I'm not sure if that is what I need. What I want to achieve is to be able to run different parts of my project (with mains everywhere) with command line arguments on the remote machine. My project will be filled with a few 3rd party python packages. Not sure how to get around this, as I cannot `pip install` and am not a sudoer. I can SCP files to the remote machine.
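One workaround worth noting for this situation: pure-Python packages that have been copied over with scp can be put on the import path by the scripts themselves. A sketch, with the vendor directory and module name purely illustrative:

```python
import os
import sys

# packages copied to the remote machine, e.g. scp -r mypkg remote:~/vendor/
sys.path.insert(0, os.path.expanduser("~/vendor"))

import mypkg  # hypothetical third-party package, now importable
```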
2015/04/01
[ "https://Stackoverflow.com/questions/29397839", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When you pass a primitive array such as `char[]` to `Arrays.asList`, that method can't return a `List<char>`, because primitive types aren't allowed as type arguments. But it can and does produce a `List<char[]>`. Your random `char` is never equal to the single `char[]` inside the `List`, so any duplicate `char` is allowed.

If you use a `Character[]` instead of the `char[]` for `cipherAlpha`, and change the return type of the method to `Character[]`, then `Arrays.asList` will infer the type argument `Character` correctly, allowing your duplicate check to work.

Second, `nextInt(25)` will generate a random index between `0` and `24`, not `25`. You can use `ALPHABET.length`, which is 26 here. With the first change but without this one, you will only ever get 25 distinct characters, and the code will never find a 26th distinct character, looping forever.
Add your alphabet to an `ArrayList` and remove the selected element on each turn of your while loop. Then update your rand.nextInt like:

```
rand.nextInt(AlphabetList.size());
```

And build your alphabet like:

```
List<Character> AlphabetList = new ArrayList<>(Arrays.asList(
    'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
    'n', 'o', 'p', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', ' '));
```

(`List<char>` won't compile, since primitives can't be type arguments, and the list returned by `Arrays.asList` alone is fixed-size, so it needs to be wrapped in a real `ArrayList` before elements can be removed.)
29,397,839
I am SSHed into a remote machine and I do not have rights to download python packages, but I want to use 3rd party applications for my project. I found `cx_freeze` but I'm not sure if that is what I need. What I want to achieve is to be able to run different parts of my project (with mains everywhere) with command line arguments on the remote machine. My project will be filled with a few 3rd party python packages. Not sure how to get around this as I cannot `pip install` and am not a sudoer. I can SCP files to the remote machine.
2015/04/01
[ "https://Stackoverflow.com/questions/29397839", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1815710/" ]
When you pass a primitive array such as `char[]` to `Arrays.asList`, that method can't return a `List<char>`, because primitive types aren't allowed as type arguments. But it can and does produce a `List<char[]>`. Your random `char` is never equal to the single `char[]` inside the `List`, so any duplicate `char` is allowed. If you use a `Character[]` instead of the `char[]` for `cipherAlpha`, and change the return type of the method to `Character[]`, then `Arrays.asList` will infer the type argument `Character` correctly, allowing your duplicate check to work. Second, `nextInt(25)` will generate a random index between `0` and `24`, not `25`. You can use `ALPHABET.length`, which is 26 here. With the first change but without this second one, you will only ever have 25 distinct characters, and you will never find a 26th distinct character, looping forever.
By default, the random numbers generated can contain duplicates. To overcome this you can keep adding numbers to a `Set` until it holds all 26 unique values: ``` while (set.size() < 26) { set.add(random.nextInt(26)); /* add() returns false for a duplicate, so this loops until all 26 values are in */ } ``` The other way is to add all the integers to a list and then shuffle it: ``` for (int i = 0; i < 26; i++) { numberList.add(i); /* autoboxing; new Integer(i) is deprecated */ } Collections.shuffle(numberList); ``` You can shuffle the characters directly if you wish: ``` List<Character> characters = Arrays.asList('a', 'b', 'c', 'd'); Collections.shuffle(characters); /* works on the fixed-size list, since shuffle only swaps elements in place */ ```
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Looks like your problem is that you are trying to run `python main.py` from within the Python interpreter, which is why you're seeing that traceback. Make sure you're out of the interpreter: ``` exit() ``` Then run the **python main.py** command from bash, Command Prompt, or whichever shell you use.
Invoke python scripts like this: ``` PS C:\Users\sween\Desktop> python ./a.py ``` Not like this: ``` PS C:\Users\sween\Desktop> python Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> ./a.py File "<stdin>", line 1 ./a.py ^ SyntaxError: invalid syntax ``` The three arrows `>>>` indicate a place to write Python code, not filenames or paths.
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Looks like your problem is that you are trying to run `python main.py` from within the Python interpreter, which is why you're seeing that traceback. Make sure you're out of the interpreter: ``` exit() ``` Then run the **python main.py** command from bash, Command Prompt, or whichever shell you use.
For better assistance, you may need to provide your Python version. For **Python 3.8**, use `from tkinter import Tk` or `from tkinter import *`. If this does not solve your problem, you may have a problem with your **tkinter** installation.
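As a quick sanity check for both the import spelling and the installation, here is a minimal sketch that should open an empty window on Python 3 (the title string is arbitrary):

```python
from tkinter import Tk  # Python 3 spelling; on Python 2 the module was "Tkinter"

root = Tk()
root.title("tkinter sanity check")  # arbitrary title, just to show the window works
root.mainloop()  # if an empty window appears, tkinter is installed correctly
```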
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Looks like your problem is that you are trying to run `python main.py` from within the Python interpreter, which is why you're seeing that traceback. Make sure you're out of the interpreter: ``` exit() ``` Then run the **python main.py** command from bash, Command Prompt, or whichever shell you use.
The first thing I noticed was that you need to switch out `root.Canvas` for `tk.Canvas`: ``` import tkinter as tk from tkinter import filedialog, Text import os root = tk.Tk() canvas = tk.Canvas(root, height=700, width=700, bg="#263d42") canvas.pack() root.mainloop() ``` Although even your original, unedited script didn't result in a SyntaxError for me, but rather an AttributeError for the canvas. I'm working from PyCharm; my assumption is that you're working from the command line? It looks like you're running main.py from within the interpreter, which you should be able to resolve with `exit()`. [The post linked here goes into more detail on that.](https://stackoverflow.com/questions/13961140/syntax-error-when-using-command-line-in-python)
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Looks like your problem is that you are trying to run `python main.py` from within the Python interpreter, which is why you're seeing that traceback. Make sure you're out of the interpreter: ``` exit() ``` Then run the **python main.py** command from bash, Command Prompt, or whichever shell you use.
Your syntax error is on line 6. Instead of: ``` canvas = root.Canvas(root, height=700, width=700, bg="#263d42") ``` try: ``` canvas = Canvas(root, height=700, width=700, bg="#263d42") ```
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Invoke python scripts like this: ``` PS C:\Users\sween\Desktop> python ./a.py ``` Not like this: ``` PS C:\Users\sween\Desktop> python Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> ./a.py File "<stdin>", line 1 ./a.py ^ SyntaxError: invalid syntax ``` The three arrows `>>>` indicate a place to write Python code, not filenames or paths.
For better assistance, you may need to provide your Python version. For **Python 3.8**, use `from tkinter import Tk` or `from tkinter import *`. If this does not solve your problem, you may have a problem with your **tkinter** installation.
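As a quick sanity check for both the import spelling and the installation, here is a minimal sketch that should open an empty window on Python 3 (the title string is arbitrary):

```python
from tkinter import Tk  # Python 3 spelling; on Python 2 the module was "Tkinter"

root = Tk()
root.title("tkinter sanity check")  # arbitrary title, just to show the window works
root.mainloop()  # if an empty window appears, tkinter is installed correctly
```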
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Invoke python scripts like this: ``` PS C:\Users\sween\Desktop> python ./a.py ``` Not like this: ``` PS C:\Users\sween\Desktop> python Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> ./a.py File "<stdin>", line 1 ./a.py ^ SyntaxError: invalid syntax ``` The three arrows `>>>` indicate a place to write Python code, not filenames or paths.
The first thing I noticed was that you need to switch out `root.Canvas` for `tk.Canvas`: ``` import tkinter as tk from tkinter import filedialog, Text import os root = tk.Tk() canvas = tk.Canvas(root, height=700, width=700, bg="#263d42") canvas.pack() root.mainloop() ``` Although even your original, unedited script didn't result in a SyntaxError for me, but rather an AttributeError for the canvas. I'm working from PyCharm; my assumption is that you're working from the command line? It looks like you're running main.py from within the interpreter, which you should be able to resolve with `exit()`. [The post linked here goes into more detail on that.](https://stackoverflow.com/questions/13961140/syntax-error-when-using-command-line-in-python)
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
Invoke Python scripts like this:

```
PS C:\Users\sween\Desktop> python ./a.py
```

Not like this:

```
PS C:\Users\sween\Desktop> python
Python 3.10.0 (tags/v3.10.0:b494f59, Oct  4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ./a.py
  File "<stdin>", line 1
    ./a.py
    ^
SyntaxError: invalid syntax
```

The three arrows `>>>` indicate a place to write Python code, not filenames or paths.
Your syntax error is on line 6. Instead of:

```
canvas = root.Canvas(root, height=700, width=700, bg="#263d42")
```

Try:

```
canvas = Canvas(root, height=700, width=700, bg="#263d42")
```
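For reference, a minimal runnable sketch of the corrected script (a sketch that assumes the wildcard import, since the bare `Canvas` name only resolves with `from tkinter import *`):

```
from tkinter import *  # wildcard import makes Canvas available directly

root = Tk()
# Canvas is a class from the tkinter module, not a method of the root window
canvas = Canvas(root, height=700, width=700, bg="#263d42")
canvas.pack()

root.mainloop()
```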
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
First thing I noticed was you need to replace `root.Canvas` with `tk.Canvas`.

```
import tkinter as tk
from tkinter import filedialog, Text
import os

root = tk.Tk()
canvas = tk.Canvas(root, height=700, width=700, bg="#263d42")
canvas.pack()

root.mainloop()
```

Although even your original, unedited script didn't produce a SyntaxError for me, but rather an AttributeError for the canvas. I'm working from PyCharm; my assumption is you're working from the command line? It looks like you're running main.py from within the interpreter, which you should be able to resolve with `exit()`. [The post linked here goes into more detail on that.](https://stackoverflow.com/questions/13961140/syntax-error-when-using-command-line-in-python)
For better assistance, you may need to provide your Python version. For **Python 3.8**, use `from tkinter import Tk` or `from tkinter import *`. If this did not solve your problem, you may have a problem with your **tkinter** installation.
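A quick sketch to check both points (illustrative only):

```
import sys
print(sys.version)       # confirm which Python you are actually running

from tkinter import Tk   # Python 3 spelling; the module is named Tkinter on Python 2
root = Tk()
```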
70,702,139
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap. What is the best way to build a snap with multiple python modules? I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder. I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together. tree output of structure ``` tree . ├── basictest │   ├── basictest.py │   ├── __init__.py │   └── setup.py ├── iotconnect-sdk-3.0.1 │   ├── iotconnect │   │   ├── assets │   │   │   ├── config.json │   │   │   └── crt.txt │   │   ├── client │   │   │   ├── httpclient.py │   │   │   ├── __init__.py │   │   │   ├── mqttclient.py │   │   │   └── offlineclient.py │   │   ├── common │   │   │   ├── data_evaluation.py │   │   │   ├── infinite_timer.py │   │   │   ├── __init__.py │   │   │   └── rule_evaluation.py │   │   ├── __init__.py │   │   ├── IoTConnectSDKException.py │   │   ├── IoTConnectSDK.py │   │   └── __pycache__ │   │   ├── __init__.cpython-38.pyc │   │   └── IoTConnectSDK.cpython-38.pyc │   ├── iotconnect_sdk.egg-info │   │   ├── dependency_links.txt │   │   ├── not-zip-safe │   │   ├── PKG-INFO │   │   ├── requires.txt │   │   ├── SOURCES.txt │   │   └── top_level.txt │   ├── PKG-INFO │   ├── README.md │   ├── setup.cfg │   └── setup.py └── snap └── snapcraft.yaml 9 directories, 30 files ``` snapcraft.yaml ``` name: basictest base: core20 version: '0.1' summary: Test snap to verifiy integration with python SDK description: | Test snap to verifiy integration with python SDK grade: devel confinement: devmode apps: basictest: command: bin/basictest parts: lib-basictest: plugin: python source: ./basictest/ after: [lib-pythonsdk] disable-parallel: true lib-pythonsdk: plugin: python source: ./iotconnect-sdk-3.0.1/ ``` Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python. snapcraft output ``` snapcraft Launching a VM. 
Skipping pull lib-pythonsdk (already ran) Skipping pull lib-basictest (already ran) Skipping build lib-pythonsdk (already ran) Skipping build lib-basictest (already ran) Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents: bin/activate bin/activate.csh bin/activate.fish lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc 
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc ... Tons more removed Snapcraft offers some capabilities to solve this by use of the following keywords: - `filesets` - `stage` - `snap` - `organize` To learn more about these part keywords, run `snapcraft help plugins`. Run the same command again with --debug to shell into the environment if you wish to introspect this failure. ``` **Main question** What is the best way to build a snap with multiple python modules?
2022/01/13
[ "https://Stackoverflow.com/questions/70702139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17927115/" ]
First thing I noticed was you need to replace `root.Canvas` with `tk.Canvas`.

```
import tkinter as tk
from tkinter import filedialog, Text
import os

root = tk.Tk()
canvas = tk.Canvas(root, height=700, width=700, bg="#263d42")
canvas.pack()

root.mainloop()
```

Although even your original, unedited script didn't produce a SyntaxError for me, but rather an AttributeError for the canvas. I'm working from PyCharm; my assumption is you're working from the command line? It looks like you're running main.py from within the interpreter, which you should be able to resolve with `exit()`. [The post linked here goes into more detail on that.](https://stackoverflow.com/questions/13961140/syntax-error-when-using-command-line-in-python)
Your syntax error is on line 6. Instead of:

```
canvas = root.Canvas(root, height=700, width=700, bg="#263d42")
```

Try:

```
canvas = Canvas(root, height=700, width=700, bg="#263d42")
```
28,780,489
When I am trying to run the cron job in Django using the below command

```
python manage.py runcrons
```

it shows an error like below

```
$ python manage.py runcrons
No handlers could be found for logger "django_cron"
```

Does anyone have any idea about this error? Any help is appreciated.
2015/02/28
[ "https://Stackoverflow.com/questions/28780489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4582293/" ]
It is kind of given in the error you get. You are missing a handler for the "django\_cron" logger. See for example <https://stackoverflow.com/a/7048543/1197616>. Also have a look at the docs for Django, <https://docs.djangoproject.com/en/dev/topics/logging/>.
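A minimal sketch of such a handler in `settings.py` (the handler and level choices here are illustrative assumptions, not something django-cron mandates):

```
# settings.py -- minimal sketch; adjust handlers/levels to your project
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # attach a handler so records from the "django_cron" logger are not dropped
        'django_cron': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}
```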
Actually the *django-cron* library does not require a 'django\_cron' logger. I resolved the same problem by running the migrations of django\_cron:

```
python manage.py migrate  # migrate database
```
62,978,500
I have made a Python program that uses Pygame. For some reason, I can't close the window when pressing the red cross. I tried using Command+Q, but it doesn't work either. I have to quit IDLE (my Python interpreter) to close the window. Is there any other way to make the window close by pressing the red 'x' at the top right-hand corner? My code:

```
import pygame
import sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((800,800))

while 1:
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
```
2020/07/19
[ "https://Stackoverflow.com/questions/62978500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12987382/" ]
A pygame window can be closed properly if you use a different Python interpreter. Try using PyCharm; pygame windows close normally when run from PyCharm.
Try this:

```
import pygame, sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((800,800))

while True:
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
```
62,978,500
I have made a Python program that uses Pygame. For some reason, I can't close the window when pressing the red cross. I tried using Command+Q, but it doesn't work either. I have to quit IDLE (my Python interpreter) to close the window. Is there any other way to make the window close by pressing the red 'x' at the top right-hand corner? My code:

```
import pygame
import sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((800,800))

while 1:
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
```
2020/07/19
[ "https://Stackoverflow.com/questions/62978500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12987382/" ]
You could just force-quit the window, or run another program to close it. When you run a different program, the window should close.
Try this:

```
import pygame, sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((800,800))

while True:
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
```
62,978,500
I have made a Python program that uses Pygame. For some reason, I can't close the window when pressing the red cross. I tried using Command+Q, but it doesn't work either. I have to quit IDLE (my Python interpreter) to close the window. Is there any other way to make the window close by pressing the red 'x' at the top right-hand corner? My code:

```
import pygame
import sys
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((800,800))

while 1:
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
```
2020/07/19
[ "https://Stackoverflow.com/questions/62978500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12987382/" ]
A pygame window can be closed properly if you use a different Python interpreter. Try using PyCharm; pygame windows close normally when run from PyCharm.
You could just force-quit the window, or run another program to close it. When you run a different program, the window should close.
20,905,702
I'm currently working with Freeswitch and its [event socket library](http://wiki.freeswitch.org/wiki/Event_Socket_Library) (through the [mod event socket](http://wiki.freeswitch.org/wiki/Mod_event_socket)). For instance:

```
from ESL import ESLconnection

cmd = 'uuid_kill %s' % active_call  # active_call comes from a Django db and is unicode

con = ESLconnection(config.HOST, config.PORT, config.PWD)
if con.connected():
    e = con.api(str(cmd))
else:
    logging.error('Couldn\'t connect to Freeswitch Mod Event Socket')
```

As you can see, I had to explicitly cast `con.api()`'s argument with `str()`. Without that, the call ends up in the following stack trace:

```
Traceback (most recent call last):
[...]
    e = con.api(cmd)
  File "/usr/lib64/python2.7/site-packages/ESL.py", line 87, in api
    def api(*args): return apply(_ESL.ESLconnection_api, args)
TypeError: in method 'ESLconnection_api', argument 2 of type 'char const *'
```

I don't understand this TypeError: what does it mean? `cmd` contains a string, so why does casting it with `str(cmd)` fix it? Could it be related to Freeswitch's Python API, generated through [SWIG](http://www.swig.org/)?
2014/01/03
[ "https://Stackoverflow.com/questions/20905702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1030960/" ]
Short answer: `cmd` likely contains a Unicode string, which cannot be trivially converted to a `const char *`. The error message likely comes from a wrapper framework that automates writing Python bindings for C libraries, such as SWIG or ctypes. The framework knows what to do with a byte string, but punts on Unicode strings. Passing `str(cmd)` helps because it converts the Unicode string to a byte string, from which a `const char *` value expected by C code can be trivially extracted.

Long answer: The C type `char const *`, more customarily spelled `const char *`, can be read as "read-only array of `char`", `char` being C's way to spell "byte". When a C function accepts a `const char *`, it expects a "C string", i.e. an array of `char` values terminated with a null character. Conveniently, Python strings are internally represented as C strings with some additional information such as type, reference count, and the length of the string (so the string length can be retrieved with O(1) complexity, and also so that the string itself may contain null characters).

Unicode strings in Python 2 are represented as arrays of `Py_UNICODE`, which are either 16 or 32 bits wide, depending on the operating system and build-time flags. Such an array cannot be passed to code that expects an array of 8-bit chars — it needs to be *converted*, typically to a temporary buffer, and this buffer must be freed when no longer needed.

For example, a simple-minded (and quite unnecessary) wrapper for the C function `strlen` could look like this:

```
PyObject *strlen(PyObject *ignore, PyObject *obj)
{
  const char *c_string;
  size_t len;

  if (!PyString_Check(obj)) {
    PyErr_Format(PyExc_TypeError, "string expected, got %s",
                 Py_TYPE(obj)->tp_name);
    return NULL;
  }
  c_string = PyString_AsString(obj);
  len = strlen(c_string);
  return PyInt_FromLong((long) len);
}
```

The code simply calls `PyString_AsString` to retrieve the internal C string stored by every Python string and expected by `strlen`. For this code to also support Unicode objects (provided it even makes sense to call `strlen` on Unicode objects), it must handle them explicitly:

```
PyObject *strlen(PyObject *ignore, PyObject *obj)
{
  const char *c_string;
  size_t len;
  PyObject *tmp = NULL;

  if (PyString_Check(obj))
    c_string = PyString_AsString(obj);
  else if (PyUnicode_Check(obj)) {
    if (!(tmp = PyUnicode_AsUTF8String(obj)))
      return NULL;
    c_string = PyString_AsString(tmp);
  }
  else {
    PyErr_Format(PyExc_TypeError, "string or unicode expected, got %s",
                 Py_TYPE(obj)->tp_name);
    return NULL;
  }
  len = strlen(c_string);
  Py_XDECREF(tmp);
  return PyInt_FromLong((long) len);
}
```

Note the additional complexity, not only in lines of boilerplate code, but in the different code paths that require different management of a temporary object that holds the byte representation of the Unicode string. Also note that the code needed to decide on an *encoding* when converting a Unicode string to a byte string. UTF-8 is guaranteed to be able to encode any Unicode string, but passing a UTF-8-encoded sequence to a function expecting a C string might not make sense for some uses. The `str` function uses the ASCII codec to encode the Unicode string, so if the Unicode string actually contained any non-ASCII characters, you would get an exception.

There have been [requests to include this functionality in SWIG](http://sourceforge.net/p/swig/feature-requests/75/), but it is unclear from the linked report if they made it in.
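To see that `str`-versus-`encode` difference from the Python side, here is a small Python 2 sketch (the command strings are illustrative values only):

```
# -*- coding: utf-8 -*-
# Python 2 sketch: str() uses the ASCII codec under the hood
ascii_cmd = u'uuid_kill abc123'
fancy_cmd = u'uuid_kill caf\xe9'

str(ascii_cmd)             # fine: pure ASCII converts cleanly to a byte string
fancy_cmd.encode('utf-8')  # fine: UTF-8 can encode any Unicode string

str(fancy_cmd)             # raises UnicodeEncodeError: 'ascii' codec can't encode...
```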
I had a similar problem, and I solved it by doing this: `cmd = 'uuid_kill %s'.encode('utf-8')`
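One caveat with that line as written: `encode` runs on the format string before any `%` substitution, so whatever you interpolate afterwards is not part of what gets encoded. If the goal is to encode the final command, encoding after formatting (a sketch reusing the question's names) may be closer to what you want:

```
# encode after formatting, so the substituted value is part of the encoded bytes
cmd = ('uuid_kill %s' % active_call).encode('utf-8')
e = con.api(cmd)
```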
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
* OS: Win 10
* Python 3.8.1 + selenium==3.141.0

```
from selenium import webdriver
import time

driver = webdriver.Firefox(executable_path=r'TO\Your\Path\geckodriver.exe')
driver.get('https://www.google.com/')

# Open a new window
driver.execute_script("window.open('');")
# Switch to the new window
driver.switch_to.window(driver.window_handles[1])
driver.get("http://stackoverflow.com")
time.sleep(3)

# Open a new window
driver.execute_script("window.open('');")
# Switch to the new window
driver.switch_to.window(driver.window_handles[2])
driver.get("https://www.reddit.com/")
time.sleep(3)

# close the active tab
driver.close()
time.sleep(3)

# Switch back to the first tab
driver.switch_to.window(driver.window_handles[0])
driver.get("https://bing.com")
time.sleep(3)

# Close the only tab, will also close the browser.
driver.close()
```

Reference: [Need Help Opening A New Tab in Selenium](https://python-forum.io/Thread-Need-Help-Opening-A-New-Tab-in-Selenium)
I tried for a very long time to duplicate tabs in Chrome using action_keys and send_keys on the body. The only thing that worked for me was an answer [here](https://stackoverflow.com/a/41633373/10488716). This is what my duplicate_tabs def ended up looking like — probably not the best, but it works fine for me.

```
def duplicate_tabs(number, chromewebdriver):
    # Once on the page we want, open a bunch of tabs
    url = chromewebdriver.current_url
    for i in range(number):
        print('opened tab: '+str(i))
        chromewebdriver.execute_script("window.open('"+url+"', 'new_window"+str(i)+"')")
```

It basically runs some JavaScript from inside of Python, which is incredibly useful. Hope this helps somebody.

Note: I am using Ubuntu; it shouldn't make a difference, but if it doesn't work for you this could be the reason.
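A possible usage sketch (the driver setup is a hypothetical example, and the handle index assumes the new tabs land at the end of `window_handles`):

```
from selenium import webdriver

chromewebdriver = webdriver.Chrome()
chromewebdriver.get('https://example.com')

duplicate_tabs(3, chromewebdriver)  # opens three more tabs on the current URL

# the driver still targets the original tab; switch to one of the new ones
chromewebdriver.switch_to.window(chromewebdriver.window_handles[-1])
```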
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
I'd stick to [ActionChains](https://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains) for this.

**Here's a function which opens a new tab and switches to that tab:**

```py
import time

from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys


def open_in_new_tab(driver, element, switch_to_new_tab=True):
    base_handle = driver.current_window_handle

    # Do some actions
    ActionChains(driver) \
        .move_to_element(element) \
        .key_down(Keys.COMMAND) \
        .click() \
        .key_up(Keys.COMMAND) \
        .perform()

    # Should you switch to the new tab?
    if switch_to_new_tab:
        new_handle = [x for x in driver.window_handles if x != base_handle]
        assert len(new_handle) == 1  # assume you are only opening one tab at a time

        # Switch to the new window
        driver.switch_to.window(new_handle[0])

        # I like to wait after switching to a new tab for the content to load
        # Do that either with time.sleep() or with WebDriverWait until a basic
        # element of the page appears (such as "body") -- reference for this is
        # provided below
        time.sleep(0.5)

        # NOTE: if you choose to switch to the window/tab, be sure to close
        # the newly opened window/tab after using it and that you switch back
        # to the original "base_handle" --> otherwise, you'll experience many
        # errors and a painful debugging experience...
```

**Here's how you would apply that function:**

```py
# Remember your starting handle
base_handle = driver.current_window_handle

# Say we have a list of elements and each is a link:
links = driver.find_elements_by_css_selector('a[href]')

# Loop through the links and open each one in a new tab
for link in links:
    open_in_new_tab(driver, link, True)

    # Do something on this new page
    print(driver.current_url)

    # Once you're finished, close this tab and switch back to the original one
    driver.close()
    driver.switch_to.window(base_handle)

    # You're ready to continue to the next item in your loop
```

*Here's how you could [wait until the page is loaded](https://stackoverflow.com/questions/26566799/wait-until-page-is-loaded-with-selenium-webdriver-for-python).*
```
tabs = {}

def new_tab():
    global browser
    # Position of the current tab among the open window handles
    hpos = browser.window_handles.index(browser.current_window_handle)
    # Open an empty tab and switch to it (it appears right after the current one)
    browser.execute_script("window.open('');")
    browser.switch_to.window(browser.window_handles[hpos + 1])
    return browser.current_window_handle

def switch_tab(name):
    global tabs
    global browser
    if not name in tabs.keys():
        # First time this name is seen: open a new tab and load url + name
        tabs[name] = {'window_handle': new_tab(), 'url': url+name}
        browser.get(tabs[name]['url'])
    else:
        # Tab already exists: just switch to its window handle
        browser.switch_to.window(tabs[name]['window_handle'])
```
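A sketch of how this could be driven (the `url` prefix and the page names are assumptions; `switch_tab` opens a tab on first use and merely refocuses it afterwards):

```
from selenium import webdriver

browser = webdriver.Firefox()   # global used by the functions above
url = 'https://example.com/'    # assumed base URL prefix

switch_tab('page1')   # first call: opens a tab and loads url + 'page1'
switch_tab('page2')   # opens a second tab
switch_tab('page1')   # switches back to the existing tab, no reload
```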
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
You can use this to open a new tab:

```
driver.execute_script("window.open('http://google.com', 'new_window')")
```
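Note that `window.open` does not move the WebDriver's focus to the new tab; switching is still a separate step (a sketch):

```
driver.execute_script("window.open('http://google.com', 'new_window')")
# the driver still targets the original tab; switch explicitly
driver.switch_to.window(driver.window_handles[-1])
```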
```
# Change the method of finding the element if needed
self.find_element_by_xpath(element).send_keys(Keys.CONTROL + Keys.ENTER)
```

This will find the element and open it in a new tab. `self` is just the name used for the webdriver object.
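A more self-contained sketch of the same idea (the locator here is a placeholder assumption):

```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get('https://example.com')

# Ctrl+Enter on a link element opens its target in a background tab
link = driver.find_element_by_xpath('//a[1]')  # placeholder locator
link.send_keys(Keys.CONTROL + Keys.ENTER)

# focus the newly opened tab before interacting with it
driver.switch_to.window(driver.window_handles[-1])
```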
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
**This worked for me:**

```
link = "https://www.google.com/"

driver.execute_script('''window.open("about:blank");''')  # Opening a blank new tab
driver.switch_to.window(driver.window_handles[1])         # Switching to newly opened tab
driver.get(link)
```
Opening a **new empty tab** within the same window is, to my knowledge, not possible in the Chrome browser, but you can open a new tab with a web link. I searched the net and found a good working approach for this question. Please try to follow the steps without missing any.

```
import selenium.webdriver as webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.google.com?q=python#q=python')
first_link = driver.find_element_by_class_name('l')

# Use: Keys.CONTROL + Keys.SHIFT + Keys.RETURN to open tab on top of the stack
first_link.send_keys(Keys.CONTROL + Keys.RETURN)

# Switch tab to the new tab, which we will assume is the next one on the right
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + Keys.TAB)

driver.quit()
```

I think this is the best solution so far.

Credits: <https://gist.github.com/lrhache/7686903>
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
You can use this to open a new tab:

```
driver.execute_script("window.open('http://google.com', 'new_window')")
```
Opening a **new empty tab** within the same window is, to my knowledge, not possible in the Chrome browser, but you can open a new tab with a web link. I searched the net and found a good working approach for this question. Please try to follow the steps without missing any.

```
import selenium.webdriver as webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.google.com?q=python#q=python')
first_link = driver.find_element_by_class_name('l')

# Use: Keys.CONTROL + Keys.SHIFT + Keys.RETURN to open tab on top of the stack
first_link.send_keys(Keys.CONTROL + Keys.RETURN)

# Switch tab to the new tab, which we will assume is the next one on the right
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + Keys.TAB)

driver.quit()
```

I think this is the best solution so far.

Credits: <https://gist.github.com/lrhache/7686903>
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
The other solutions do not work for **chrome driver v83**. Instead, it works as follows, supposing there is only one open tab:

```
driver.execute_script("window.open('');")
driver.switch_to.window(driver.window_handles[1])
driver.get("https://www.example.com")
```

If more than one tab is already open, you should first get the index of the last newly-created tab and switch to it before calling the url (credit to [tylerl](https://stackoverflow.com/users/706751/tylerl); note that `switch_to.window` takes a handle, so the index is used to look one up):

```
driver.execute_script("window.open('');")
driver.switch_to.window(driver.window_handles[len(driver.window_handles)-1])
driver.get("https://www.example.com")
```
The 4.0.0 version of Selenium supports the following operations: * to open a new tab try: `driver.switch_to.new_window()` * to switch to a specific tab (note that the `tabID` starts from 0): `driver.switch_to.window(driver.window_handles[tabID])`
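Put together, a minimal Selenium 4 sketch (the URLs are placeholders):

```
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')

driver.switch_to.new_window('tab')   # opens a new tab and switches to it
driver.get('https://example.org')

driver.switch_to.window(driver.window_handles[0])  # back to the first tab (tabID 0)
driver.close()   # closes only the currently focused tab
```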
28,431,765
So I am trying to open websites on new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds with PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array
For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, I need it to be fast... Other tools are allowed, but I don't know too many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])
        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))
        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()
    # Connect to the channel
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')
    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
I'd stick to [ActionChains](https://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains) for this.

**Here's a function which opens a new tab and switches to that tab:**

```py
import time

from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys


def open_in_new_tab(driver, element, switch_to_new_tab=True):
    base_handle = driver.current_window_handle

    # Do some actions
    ActionChains(driver) \
        .move_to_element(element) \
        .key_down(Keys.COMMAND) \
        .click() \
        .key_up(Keys.COMMAND) \
        .perform()

    # Should you switch to the new tab?
    if switch_to_new_tab:
        new_handle = [x for x in driver.window_handles if x != base_handle]
        assert len(new_handle) == 1  # assume you are only opening one tab at a time

        # Switch to the new window
        driver.switch_to.window(new_handle[0])

        # I like to wait after switching to a new tab for the content to load
        # Do that either with time.sleep() or with WebDriverWait until a basic
        # element of the page appears (such as "body") -- reference for this is
        # provided below
        time.sleep(0.5)

        # NOTE: if you choose to switch to the window/tab, be sure to close
        # the newly opened window/tab after using it and that you switch back
        # to the original "base_handle" --> otherwise, you'll experience many
        # errors and a painful debugging experience...
```

**Here's how you would apply that function:**

```py
# Remember your starting handle
base_handle = driver.current_window_handle

# Say we have a list of elements and each is a link:
links = driver.find_elements_by_css_selector('a[href]')

# Loop through the links and open each one in a new tab
for link in links:
    open_in_new_tab(driver, link, True)

    # Do something on this new page
    print(driver.current_url)

    # Once you're finished, close this tab and switch back to the original one
    driver.close()
    driver.switch_to.window(base_handle)

    # You're ready to continue to the next item in your loop
```

*Here's how you could [wait until the page is loaded](https://stackoverflow.com/questions/26566799/wait-until-page-is-loaded-with-selenium-webdriver-for-python).*
Opening a **new empty tab** within the same window is, to my knowledge, **not possible** in the Chrome browser, but you can open a new tab with a web link. I searched the net and found good working content for this question. Please follow the steps carefully.

```
import selenium.webdriver as webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.google.com?q=python#q=python')
first_link = driver.find_element_by_class_name('l')

# Use: Keys.CONTROL + Keys.SHIFT + Keys.RETURN to open tab on top of the stack
first_link.send_keys(Keys.CONTROL + Keys.RETURN)

# Switch tab to the new tab, which we will assume is the next one on the right
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + Keys.TAB)

driver.quit()
```

I think this is the best solution so far.

Credits: <https://gist.github.com/lrhache/7686903>
28,431,765
So I am trying to open websites in new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds using PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array

For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, and I need it to be fast... Other tools are allowed, but I don't know many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])

        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))

        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()

    # Conectamos al canal
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')

    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
*Editor's note*: This answer no longer works for new Selenium versions. Refer to [this comment](https://stackoverflow.com/questions/28431765/open-web-in-new-tab-selenium-python#comment91110223_28432939).

---

You can achieve the opening/closing of a tab with the key combination `COMMAND` + `T` or `COMMAND` + `W` (OS X). On other OSs you can use `CONTROL` + `T` / `CONTROL` + `W`.

In Selenium you can emulate such behavior. You will need to create one WebDriver and as many tabs as you need for your tests.

Here is the code.

```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.google.com/")

# open tab
driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't')
# You can use (Keys.CONTROL + 't') on other OSs

# Load a page
driver.get('http://stackoverflow.com/')
# Make the tests...

# close the tab
# (Keys.CONTROL + 'w') on other OSs.
driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 'w')

driver.close()
```
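On Selenium 4 and later, where the keyboard shortcut above no longer reaches the browser, the driver has a built-in replacement. A minimal sketch (not part of the original answer, and assuming Selenium 4+):

```py
# Selenium 4+: open a new tab and move the driver's focus to it
driver.switch_to.new_window('tab')
driver.get('http://stackoverflow.com/')

# Make the tests...

# close the tab and switch back to the first remaining handle
driver.close()
driver.switch_to.window(driver.window_handles[0])
```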
As already mentioned several times, the following approaches do NOT work anymore:

```
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
ActionChains(driver).key_down(Keys.CONTROL).send_keys('t').key_up(Keys.CONTROL).perform()
```

Moreover, `driver.execute_script("window.open('');")` works but is limited by the popup blocker. I process hundreds of tabs in parallel (web scraping using [scrapy](https://scrapy.org/)). However, the popup blocker became active after opening 20 new tabs using JavaScript's `window.open('')` and thus broke my crawler. As a workaround I declared one tab as the "master", which opens the following `helper.html`:

```
<!DOCTYPE html>
<html><body>
<a id="open_new_window" href="about:blank" target="_blank">open a new window</a>
</body></html>
```

Now my (simplified) crawler can open as many tabs as necessary by purposely clicking the link, which is not considered by the popup blocker at all:

```
# master
master_handle = driver.current_window_handle
helper = os.path.join(os.path.dirname(os.path.abspath(__file__)), "helper.html")
driver.get(helper)

# open new tabs
for _ in range(100):
    window_handle = driver.window_handles               # current state
    driver.switch_to_window(master_handle)
    driver.find_element_by_id("open_new_window").click()
    window_handle = set(driver.window_handles).difference(window_handle).pop()
    print("new window handle:", window_handle)
```

Closing these windows via JavaScript's `window.close()` is no problem.
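A minimal sketch of that cleanup step, assuming the `master_handle` bookkeeping from the snippet above:

```py
# close every tab except the master, then return focus to it
for handle in driver.window_handles:
    if handle != master_handle:
        driver.switch_to.window(handle)
        driver.execute_script("window.close();")
driver.switch_to.window(master_handle)
```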
28,431,765
So I am trying to open websites in new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds using PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array

For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, and I need it to be fast... Other tools are allowed, but I don't know many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])

        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))

        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()

    # Conectamos al canal
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')

    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
**This worked for me:**

```
link = "https://www.google.com/"

driver.execute_script('''window.open("about:blank");''')  # Opening a blank new tab
driver.switch_to.window(driver.window_handles[1])         # Switching to the newly opened tab
driver.get(link)
```
As already mentioned several times, the following approaches do NOT work anymore:

```
driver.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
ActionChains(driver).key_down(Keys.CONTROL).send_keys('t').key_up(Keys.CONTROL).perform()
```

Moreover, `driver.execute_script("window.open('');")` works but is limited by the popup blocker. I process hundreds of tabs in parallel (web scraping using [scrapy](https://scrapy.org/)). However, the popup blocker became active after opening 20 new tabs using JavaScript's `window.open('')` and thus broke my crawler. As a workaround I declared one tab as the "master", which opens the following `helper.html`:

```
<!DOCTYPE html>
<html><body>
<a id="open_new_window" href="about:blank" target="_blank">open a new window</a>
</body></html>
```

Now my (simplified) crawler can open as many tabs as necessary by purposely clicking the link, which is not considered by the popup blocker at all:

```
# master
master_handle = driver.current_window_handle
helper = os.path.join(os.path.dirname(os.path.abspath(__file__)), "helper.html")
driver.get(helper)

# open new tabs
for _ in range(100):
    window_handle = driver.window_handles               # current state
    driver.switch_to_window(master_handle)
    driver.find_element_by_id("open_new_window").click()
    window_handle = set(driver.window_handles).difference(window_handle).pop()
    print("new window handle:", window_handle)
```

Closing these windows via JavaScript's `window.close()` is no problem.
28,431,765
So I am trying to open websites in new tabs inside my WebDriver. I want to do this because opening a new WebDriver for each website takes about 3.5 seconds using PhantomJS, and I want more speed...

I'm using a multiprocess Python script, and I want to get some elements from each page, so the workflow is like this:

```
Open Browser

Loop through my array

For element in array -> Open website in new tab -> do my business -> close it
```

But I can't find any way to achieve this.

Here's the code I'm using. It takes forever between websites, and I need it to be fast... Other tools are allowed, but I don't know many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.

```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint

def getPhantomData(parameters):
    try:
        # We create WebDriver
        browser = webdriver.Firefox()
        # Navigate to URL
        browser.get(parameters['target_url'])
        # Find all links by Selector
        links = browser.find_elements_by_css_selector(parameters['selector'])

        result = []
        for link in links:
            # Extract link attribute and append to our list
            result.append(link.get_attribute(parameters['attribute']))

        browser.close()
        browser.quit()
        return json.dumps({'data': result})
    except Exception, err:
        browser.close()
        browser.quit()
        print err

def callback(ch, method, properties, body):
    parameters = json.loads(body)
    message = getPhantomData(parameters)

    if message['data']:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

def consume():
    credentials = pika.PlainCredentials('invitado', 'invitado')
    rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
    connection = pika.BlockingConnection(rabbit)
    channel = connection.channel()

    # Conectamos al canal
    channel.queue_declare(queue='com.stuff.images', durable=True)
    channel.basic_consume(callback,queue='com.stuff.images')

    print ' [*] Waiting for messages. To exit press CTRL^C'
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        pass

workers = 5
pool = multiprocessing.Pool(processes=workers)

for i in xrange(0, workers):
    pool.apply_async(consume)

try:
    while True:
        continue
except KeyboardInterrupt:
    print ' [*] Exiting...'
    pool.terminate()
    pool.join()
```
2015/02/10
[ "https://Stackoverflow.com/questions/28431765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1381537/" ]
Just for future reference, a simple way to do it:

```
driver.switch_to.new_window()

new_tab = driver.window_handles[-1]  # Get the handle of the new tab
driver.switch_to.window(new_tab)

driver.get(target_url)  # Now the target url is opened in the new tab
```
You can use this to open a new tab:

```
driver.execute_script("window.open('http://google.com', 'new_window')")
```
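Since `window.open` does not move the driver's focus, a short follow-up sketch for switching to the freshly opened tab — assuming it is the last handle in the list, which is the common case but not guaranteed by the spec:

```py
# switch to the new tab by handle
driver.switch_to.window(driver.window_handles[-1])
print(driver.current_url)  # now operating on the new tab

# switch back to the original tab when done
driver.switch_to.window(driver.window_handles[0])
```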
27,767,937
I've been trying to figure this out all night with no luck. I'm assuming that this will be a simple question for a more experienced programmer. I'm working on a canonical request that I can sign, something like this:

```
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers
```

However, when I `print(canonical_request)` I get:

```
method
canonical_uri
canonical_querystring
canonical_headers
```

But this is what I'm after:

```
method\ncanonical_uri\ncanonical_querystring\ncanonical_headers
```

By the way, I'm using Python 3.4. I would really appreciate the help.
2015/01/04
[ "https://Stackoverflow.com/questions/27767937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4400330/" ]
So you want to not have "actual" newlines, but the escape sequence for newlines in your string? Just add a second backslash to `'\n'` to escape it as well: `'\\n'`. Or prefix your string literals with `r` to make them "raw"; in raw strings the backslash is interpreted literally: `r'\n'` (commonly used for regular expressions).

```
canonical_request = method + r'\n' + canonical_uri + r'\n' + canonical_querystring + r'\n' + canonical_headers
```

For information about string literals, see [String and Bytes literals](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals) in the docs.
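A quick demonstration that both spellings produce the same two characters — a backslash followed by `n` — rather than a real newline:

```py
s1 = 'a' + '\\n' + 'b'   # escaped backslash
s2 = 'a' + r'\n' + 'b'   # raw string literal

print(s1)          # a\nb  (printed on one line)
print(s1 == s2)    # True
print(len(r'\n'))  # 2 -- the backslash and the 'n'
```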
As an alternative and more elegant way, you can put your strings in a list and join them on an escaped `\n` (add a leading backslash):

```
>>> l = ['method', 'canonical_uri', 'canonical_querystring', 'canonical_headers']
>>> print('\\n'.join(l))
method\ncanonical_uri\ncanonical_querystring\ncanonical_headers
```

> 
> The backslash (`\`) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character.
> 
>
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I was having the same problem. Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.

In order to keep sensitive information out of version control, you can use a config file like this:

```
option_settings:
  - option_name: API_LOGIN
    value: placeholder
  - option_name: TRANS_KEY
    value: placeholder
  - option_name: PROVIDER_ID
    value: placeholder
```

Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.

You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
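For completeness, reading the values back in the Django app is then just a plain environment lookup — a minimal sketch using the placeholder names from the config above:

```py
import os

# in settings.py (or wherever the credentials are needed)
API_LOGIN = os.environ.get('API_LOGIN')
TRANS_KEY = os.environ.get('TRANS_KEY')
PROVIDER_ID = os.environ.get('PROVIDER_ID')
```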
I know this is an old question, but for those who still have the same question like I did, here is the solution from the AWS documentation: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-softwaresettings.html>

> To configure environment properties in the Elastic Beanstalk console
>
> 1. Open the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk), and then, in the regions drop-down list, select your region.
> 2. In the navigation pane, choose **Environments**, and then choose your environment's name on the list.
> 3. In the navigation pane, choose **Configuration**.
> 4. In the **Software** configuration category, choose Edit.
> 5. Under **Environment properties**, enter key-value pairs.
> 6. Choose **Apply**.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I was having the same problem. Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.

In order to keep sensitive information out of version control, you can use a config file like this:

```
option_settings:
  - option_name: API_LOGIN
    value: placeholder
  - option_name: TRANS_KEY
    value: placeholder
  - option_name: PROVIDER_ID
    value: placeholder
```

Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.

You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
I did the following to also get the environment variables that I configure in CloudFormation in the non-container phase, e.g. for the regular commands:

```
/opt/elasticbeanstalk/bin/get-config environment | python -c "import json,sys; obj=json.load(sys.stdin); f = open('/tmp/eb_env', 'w'); f.write('\n'.join(map(lambda x: 'export ' + x[0] + '=' + x[1], obj.iteritems())))"
```

Once you execute this command you will have a file in `/tmp/eb_env` with all your environment variables. Just execute the following before a command that needs the environment variables:

```
source /tmp/eb_env
```

Example:

```
source /tmp/eb_env && echo $MY_CUSTOM_ENV
```

In the config file of Elastic Beanstalk, it looks like this:

```
commands:
  02-make-sure-we-can-get-our-env-in-the-instance-itself:
    command: "/opt/elasticbeanstalk/bin/get-config environment | python -c 'import json,sys; obj=json.load(sys.stdin); f = open(\'/tmp/eb_env\', \'w\'); f.write(\'\n\'.join(map(lambda x: \'export \' + x[0] + \'=\' + x[1], obj.iteritems())))'"
```
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I did the following to also get the environment variables that I configure in CloudFormation in the non-container phase, e.g. for the regular commands:

```
/opt/elasticbeanstalk/bin/get-config environment | python -c "import json,sys; obj=json.load(sys.stdin); f = open('/tmp/eb_env', 'w'); f.write('\n'.join(map(lambda x: 'export ' + x[0] + '=' + x[1], obj.iteritems())))"
```

Once you execute this command you will have a file in `/tmp/eb_env` with all your environment variables. Just execute the following before a command that needs the environment variables:

```
source /tmp/eb_env
```

Example:

```
source /tmp/eb_env && echo $MY_CUSTOM_ENV
```

In the config file of Elastic Beanstalk, it looks like this:

```
commands:
  02-make-sure-we-can-get-our-env-in-the-instance-itself:
    command: "/opt/elasticbeanstalk/bin/get-config environment | python -c 'import json,sys; obj=json.load(sys.stdin); f = open(\'/tmp/eb_env\', \'w\'); f.write(\'\n\'.join(map(lambda x: \'export \' + x[0] + \'=\' + x[1], obj.iteritems())))'"
```
To set variables on a local run, you can do the following:

```
eb local setenv CONFIG=dev
eb local run
```

This also works with Docker MultiContainers, which otherwise will not see your environment.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I was having the same problem. Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.

In order to keep sensitive information out of version control, you can use a config file like this:

```
option_settings:
  - option_name: API_LOGIN
    value: placeholder
  - option_name: TRANS_KEY
    value: placeholder
  - option_name: PROVIDER_ID
    value: placeholder
```

Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.

You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
I've checked using a modern (i.e., non-legacy) container and found it under `/opt/elasticbeanstalk/deploy/configuration/containerconfiguration` as a JSON file.

The behaviour seems to be platform-dependent: I remember that in PHP in particular, it also creates some shell scripts with the values.

Regardless of that, look into `/opt/elasticbeanstalk/hooks/configdeploy`. In the Java case again, it runs this Python script, which looks quite handy for you: <https://gist.github.com/19c1e4b718f9a70a4ce1>
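If you need those values from your own script, a rough sketch of reading that JSON file — the inner key layout is platform-dependent, so the `'environment'` key here is a hypothetical example you would have to verify on your instance:

```py
import json

# path taken from the answer above; the inner structure varies by platform
with open('/opt/elasticbeanstalk/deploy/configuration/containerconfiguration') as f:
    config = json.load(f)

# hypothetical key -- inspect the file on your instance to find the real one
env_settings = config.get('environment', {})
print(env_settings)
```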
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I did the following to also get the environment variables that I configure in CloudFormation in the non-container phase, e.g. for the regular commands:

```
/opt/elasticbeanstalk/bin/get-config environment | python -c "import json,sys; obj=json.load(sys.stdin); f = open('/tmp/eb_env', 'w'); f.write('\n'.join(map(lambda x: 'export ' + x[0] + '=' + x[1], obj.iteritems())))"
```

Once you execute this command you will have a file in `/tmp/eb_env` with all your environment variables. Just execute the following before a command that needs the environment variables:

```
source /tmp/eb_env
```

Example:

```
source /tmp/eb_env && echo $MY_CUSTOM_ENV
```

In the config file of Elastic Beanstalk, it looks like this:

```
commands:
  02-make-sure-we-can-get-our-env-in-the-instance-itself:
    command: "/opt/elasticbeanstalk/bin/get-config environment | python -c 'import json,sys; obj=json.load(sys.stdin); f = open(\'/tmp/eb_env\', \'w\'); f.write(\'\n\'.join(map(lambda x: \'export \' + x[0] + \'=\' + x[1], obj.iteritems())))'"
```
I know this is an old question, but for those who still have the same question like I did, here is the solution from the AWS documentation: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-softwaresettings.html>

> To configure environment properties in the Elastic Beanstalk console
>
> 1. Open the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk), and then, in the regions drop-down list, select your region.
> 2. In the navigation pane, choose **Environments**, and then choose your environment's name on the list.
> 3. In the navigation pane, choose **Configuration**.
> 4. In the **Software** configuration category, choose Edit.
> 5. Under **Environment properties**, enter key-value pairs.
> 6. Choose **Apply**.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
**Option 1:**

You can set environment variables using `eb setenv FOO=bar`

You can view the environment variables using `eb printenv`

**Option 2:**

You can create a config file in your `.ebextensions` directory, for example `00_environment.config`. Then, add your environment variables like this:

```
option_settings:
  - option_name: MY_FIRST_ENV_VAR
    value: abc
  - option_name: ANOTHER_ENV_VAR
    value: 123
```

However, if you have multiple environments, I have found that it is more useful to set the environment variables directly, using option #1.

I also have found the `eb config` commands to be helpful: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-config.html>

These commands allow you to get, put, list, or delete configuration files on your eb environment. The command `eb config get` will save your config, including environment variables, to a local file in `.elasticbeanstalk/saved_configs`.
To set variables on a local run, you can do the following:

```
eb local setenv CONFIG=dev
eb local run
```

This also works with Docker MultiContainers, which otherwise will not see your environment.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I was having the same problem. Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.

In order to keep sensitive information out of version control, you can use a config file like this:

```
option_settings:
  - option_name: API_LOGIN
    value: placeholder
  - option_name: TRANS_KEY
    value: placeholder
  - option_name: PROVIDER_ID
    value: placeholder
```

Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.

You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
To set variables on a local run, you can do the following:

```
eb local setenv CONFIG=dev
eb local run
```

This also works with Docker MultiContainers, which otherwise will not see your environment.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I was having the same problem. Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.

In order to keep sensitive information out of version control, you can use a config file like this:

```
option_settings:
  - option_name: API_LOGIN
    value: placeholder
  - option_name: TRANS_KEY
    value: placeholder
  - option_name: PROVIDER_ID
    value: placeholder
```

Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.

You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
**Option 1:**

You can set environment variables using `eb setenv FOO=bar`

You can view the environment variables using `eb printenv`

**Option 2:**

You can create a config file in your `.ebextensions` directory, for example `00_environment.config`. Then, add your environment variables like this:

```
option_settings:
  - option_name: MY_FIRST_ENV_VAR
    value: abc
  - option_name: ANOTHER_ENV_VAR
    value: 123
```

However, if you have multiple environments, I have found that it is more useful to set the environment variables directly, using option #1.

I also have found the `eb config` commands to be helpful: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-config.html>

These commands allow you to get, put, list, or delete configuration files on your eb environment. The command `eb config get` will save your config, including environment variables, to a local file in `.elasticbeanstalk/saved_configs`.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
I've checked using a modern (i.e., non-legacy) container and found it under `/opt/elasticbeanstalk/deploy/configuration/containerconfiguration` as a JSON file.

The behaviour seems to be platform-dependent: I remember that in PHP in particular, it also creates some shell scripts with the values.

Regardless of that, look into `/opt/elasticbeanstalk/hooks/configdeploy`. In the Java case again, it runs this Python script, which looks quite handy for you: <https://gist.github.com/19c1e4b718f9a70a4ce1>
To set variables on a local run, you can do the following:

```
eb local setenv CONFIG=dev
eb local run
```

This also works with Docker MultiContainers, which otherwise will not see your environment.
14,206,760
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk. In my `.ebextensions/python.config` file, I have set the following:

```
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionBucket
    value: s3-bucket-name
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: ProductionCache
    value: memcached-server.site.com:11211
```

However, whenever I look on the server, no such environment variables are set (and as such, they aren't accessible when I try `os.getenv('ProductionBucket')`).

I came across [this page](https://gist.github.com/808968), which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.

How can I set environment variables in Amazon Elastic Beanstalk?

**EDIT**: I have also tried adding a command prior to all other commands which would just export an environment variable:

```
commands:
  01_env_vars:
    command: "source scripts/env_vars"
```

... This was also unsuccessful.
2013/01/08
[ "https://Stackoverflow.com/questions/14206760", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165988/" ]
**Option 1:**

You can set environment variables using `eb setenv FOO=bar`

You can view the environment variables using `eb printenv`

**Option 2:**

You can create a config file in your `.ebextensions` directory, for example `00_environment.config`. Then, add your environment variables like this:

```
option_settings:
  - option_name: MY_FIRST_ENV_VAR
    value: abc
  - option_name: ANOTHER_ENV_VAR
    value: 123
```

However, if you have multiple environments, I have found that it is more useful to set the environment variables directly, using option #1.

I also have found the `eb config` commands to be helpful: <http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-config.html>

These commands allow you to get, put, list, or delete configuration files on your eb environment. The command `eb config get` will save your config, including environment variables, to a local file in `.elasticbeanstalk/saved_configs`.
I know this is an old question, but for those who still have the same question like I did, here is the solution from the AWS documentation: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-softwaresettings.html>

> To configure environment properties in the Elastic Beanstalk console
>
> 1. Open the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk), and then, in the regions drop-down list, select your region.
> 2. In the navigation pane, choose **Environments**, and then choose your environment's name on the list.
> 3. In the navigation pane, choose **Configuration**.
> 4. In the **Software** configuration category, choose Edit.
> 5. Under **Environment properties**, enter key-value pairs.
> 6. Choose **Apply**.
48,937,024
I am going to write down this pseudocode in Python:

```
if (i < .1):
    doX()
elif (i < .3):
    doY()
elif (i < .5):
    doZ()
.
.
else:
    doW()
```

The number of ranges may be around 20, and each float number that shapes the constraints is read from a list. For the above example (a shorter version), it is the list:

```
[0.1, 0.3, 0.5, 1]
```

Is there any Pythonic way, or a function, which can dispatch to different functions for different associated ranges?
2018/02/22
[ "https://Stackoverflow.com/questions/48937024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8899386/" ]
```
from bisect import bisect_left

a = [0.1, 0.3, 0.5, 1]
b = ["a", "b", "c", "d"]

# bisect_left finds the index of the bucket that 0.2 falls into
print(b[bisect_left(a, 0.2)])  # -> "b"
```
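To tie this back to the question's `doX`/`doY` dispatch, a minimal sketch pairing each threshold with a function (the function bodies are placeholders for your own):

```py
from bisect import bisect_right

def doX(): print('doX')
def doY(): print('doY')
def doZ(): print('doZ')
def doW(): print('doW')

thresholds = [0.1, 0.3, 0.5]        # upper bounds, in ascending order
handlers = [doX, doY, doZ, doW]     # one more handler than thresholds

def dispatch(i):
    # bisect_right matches the strict "i < threshold" elif chain exactly:
    # a value equal to a boundary falls into the next bucket, like the original
    handlers[bisect_right(thresholds, i)]()

dispatch(0.05)  # doX
dispatch(0.3)   # doZ  (0.3 is not < 0.3, but it is < 0.5)
dispatch(2.0)   # doW
```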
Here's an answer that you should not use:

```
doX = lambda x: x + 1
doY = lambda x: x + 10
doZ = lambda x: x + 100

ranges = [0.1, 0.3, 0.5, 1]
functions = [doX, doY, doZ]

answer = lambda x: [func(x)
                    for (low, high), func in zip(zip(ranges[:-1], ranges[1:]), functions)
                    if low <= x < high][0]
```

The point is that getting too fancy with it becomes unreadable. The if..elif..else is the *"one-- and preferably only one --obvious way to do it."*
48,937,024
I am going to write down this pseudocode in Python:

```
if (i < .1):
    doX()
elif (i < .3):
    doY()
elif (i < .5):
    doZ()
.
.
else:
    doW()
```

The number of ranges may be around 20, and each float number that shapes the constraints is read from a list. For the above example (a shorter version), it is the list:

```
[0.1, 0.3, 0.5, 1]
```

Is there any Pythonic way, or a function, which can dispatch to different functions for different associated ranges?
2018/02/22
[ "https://Stackoverflow.com/questions/48937024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8899386/" ]
```
from bisect import bisect_left

a = [0.1, 0.3, 0.5, 1]
b = ["a", "b", "c", "d"]

# bisect_left finds the index of the bucket that 0.2 falls into
print(b[bisect_left(a, 0.2)])  # -> "b"
```
```
def a():
    print('a returned')

def b():
    print('b returned')

def c():
    print('c returned')

funcs = [a, b, c]

def sample_func(x, funcs=None):
    if x < 0:
        return None
    thresholds = [.40, .60]
    for i, threshold in enumerate(thresholds):
        if x <= threshold:
            return funcs[i]()
    return funcs[len(thresholds)]()
```

```
sample_func(.1, funcs)
```

returns

```
a returned
```

and

```
sample_func(.65, funcs)
```

returns

```
c returned
```
48,937,024
I am going to write down this pseudocode in Python:

```
if (i < .1):
    doX()
elif (i < .3):
    doY()
elif (i < .5):
    doZ()
.
.
else:
    doW()
```

The number of ranges may be around 20, and each float number that shapes the constraints is read from a list. For the above example (a shorter version), it is the list:

```
[0.1, 0.3, 0.5, 1]
```

Is there any Pythonic way, or a function, which can dispatch to different functions for different associated ranges?
2018/02/22
[ "https://Stackoverflow.com/questions/48937024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8899386/" ]
```
from bisect import bisect_left

a = [0.1, 0.3, 0.5, 1]
b = ["a", "b", "c", "d"]

# bisect_left finds the index of the bucket that 0.2 falls into
print(b[bisect_left(a, 0.2)])  # -> "b"
```
I suggest you create the following dictionary, with numbers as keys and functions as values:

```
d = {0.1: doX, 0.3: doY, 0.5: doZ, 1: doW}
```

Then use the following code:

```
for n, (k, f) in enumerate(sorted(d.items())):
    if (i < k) or (n == (len(d) - 1)):
        f()
        break
```
63,815,087
I'm porting some Python 2 legacy code, and I have this class:

```
class myfile(file):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, *args):
        return file.__init__(self, *args)
    def read(self, size=-1):
        return create_string_buffer(file.read(self, size))
```

It's used like a file object:

```
self._file = myfile(name, mode, buffering)
self._file.seek(self.si*self.blocksize)
```

I'm trying to implement it in Python 3 like so:

```
class myfile(io.FileIO):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, name, mode, *args, **kwargs):
        super(myfile, self).__init__(name, mode, closefd=True, *args, **kwargs)
    def read(self, size=-1):
        return create_string_buffer(self.read(size))
```

The problem is that the constructor for FileIO doesn't take the `buffering` argument, and Python throws a `TypeError: fileio() takes at most 3 arguments (4 given)` error. The [Python 3 open function](https://docs.python.org/3/library/functions.html#open) is what I need. Can I inherit from that?

I've looked at the [PyFile_FromFd](https://docs.python.org/3.8/c-api/file.html) C function, but it needs an open file descriptor, and I'm concerned that the behaviour is not going to be the same.

Thank you!!!
2020/09/09
[ "https://Stackoverflow.com/questions/63815087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6271889/" ]
If another way is fine, you can try the below; it is a little dirty, though (you can try optimizing it):

```
cols = ['name', 'color', 'amount']

u = df[df.columns.difference(cols)].join(df[cols].agg(dict, 1).rename('d'))
v = (u.groupby(['cat1', 'cat2', 'cat3'])['d'].agg(list).reset_index("cat3"))
v = v.groupby(v.index).apply(lambda x: dict(zip(x['cat3'], x['d'])))
v.index = pd.MultiIndex.from_tuples(v.index, names=['cat1', 'cat2'])

d = v.unstack(0).to_dict()
```

---

```
print(d)

{'A': {'BB': {'CC': [{'amount': 132, 'color': 'red', 'name': 'P1'},
                     {'amount': 51, 'color': 'blue', 'name': 'P2'}]},
       'BC': {'CD': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}},
 'B': {'BB': {'CD': [{'amount': 421, 'color': 'green', 'name': 'P1'},
                     {'amount': 55, 'color': 'yellow', 'name': 'P4'}]},
       'BC': nan},
 'C': {'BB': {'CC': [{'amount': 11, 'color': 'red', 'name': 'P1'}]},
       'BC': {'CD': [{'amount': 123, 'color': 'blue', 'name': 'P3'}],
              'CE': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}
```
We can `groupby` on `cat1`, `cat2` and `cat3` and recursively build the dictionary based on the grouped categories:

```
def set_val(d, k, v):
    if len(k) == 1:
        d[k[0]] = v
    else:
        d[k[0]] = set_val(d.get(k[0], {}), k[1:], v)
    return d

dct = {}
for k, g in df.groupby(['cat1', 'cat2', 'cat3']):
    set_val(dct, k, {'products': g[['name', 'color', 'amount']].to_dict('records')})
```

---

```
print(dct)

{'A': {'BB': {'CC': {'products': [{'amount': 132, 'color': 'red', 'name': 'P1'},
                                  {'amount': 51, 'color': 'blue', 'name': 'P2'}]}},
       'BC': {'CD': {'products': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}}},
 'B': {'BB': {'CD': {'products': [{'amount': 421, 'color': 'green', 'name': 'P1'},
                                  {'amount': 55, 'color': 'yellow', 'name': 'P4'}]}}},
 'C': {'BB': {'CC': {'products': [{'amount': 11, 'color': 'red', 'name': 'P1'}]}},
       'BC': {'CD': {'products': [{'amount': 123, 'color': 'blue', 'name': 'P3'}]},
              'CE': {'products': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}}
```
63,815,087
I'm porting some Python 2 legacy code, and I have this class:

```
class myfile(file):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, *args):
        return file.__init__(self, *args)
    def read(self, size=-1):
        return create_string_buffer(file.read(self, size))
```

It's used like a file object:

```
self._file = myfile(name, mode, buffering)
self._file.seek(self.si*self.blocksize)
```

I'm trying to implement it in Python 3 like so:

```
class myfile(io.FileIO):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, name, mode, *args, **kwargs):
        super(myfile, self).__init__(name, mode, closefd=True, *args, **kwargs)
    def read(self, size=-1):
        return create_string_buffer(self.read(size))
```

The problem is that the constructor for FileIO doesn't take the `buffering` argument, and Python throws a `TypeError: fileio() takes at most 3 arguments (4 given)` error. The [Python 3 open function](https://docs.python.org/3/library/functions.html#open) is what I need. Can I inherit from that?

I've looked at the [PyFile_FromFd](https://docs.python.org/3.8/c-api/file.html) C function, but it needs an open file descriptor, and I'm concerned that the behaviour is not going to be the same.

Thank you!!!
2020/09/09
[ "https://Stackoverflow.com/questions/63815087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6271889/" ]
If another way is fine, you can try the below; it is a little dirty, though (you can try optimizing it):

```
cols = ['name', 'color', 'amount']

u = df[df.columns.difference(cols)].join(df[cols].agg(dict, 1).rename('d'))
v = (u.groupby(['cat1', 'cat2', 'cat3'])['d'].agg(list).reset_index("cat3"))
v = v.groupby(v.index).apply(lambda x: dict(zip(x['cat3'], x['d'])))
v.index = pd.MultiIndex.from_tuples(v.index, names=['cat1', 'cat2'])

d = v.unstack(0).to_dict()
```

---

```
print(d)

{'A': {'BB': {'CC': [{'amount': 132, 'color': 'red', 'name': 'P1'},
                     {'amount': 51, 'color': 'blue', 'name': 'P2'}]},
       'BC': {'CD': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}},
 'B': {'BB': {'CD': [{'amount': 421, 'color': 'green', 'name': 'P1'},
                     {'amount': 55, 'color': 'yellow', 'name': 'P4'}]},
       'BC': nan},
 'C': {'BB': {'CC': [{'amount': 11, 'color': 'red', 'name': 'P1'}]},
       'BC': {'CD': [{'amount': 123, 'color': 'blue', 'name': 'P3'}],
              'CE': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}
```
This is a *generic* method adapted from [Shubham Sharma's great solution](https://stackoverflow.com/a/63816750/1167012):

```
def gen_nested_dict(dataframe, group, inner_key, inner_dict):
    def set_val(d, k2, v):
        if len(k2) == 1:
            d[k2[0]] = v
        else:
            d[k2[0]] = set_val(d.get(k2[0], {}), k2[1:], v)
        return d

    dct = {}
    for k, g in dataframe.groupby(group):
        set_val(dct, k, {inner_key: g[inner_dict].to_dict('records')})
    return dct

mydct = gen_nested_dict(df, ['cat1', 'cat2', 'cat3'], 'products', ['name', 'color', 'amount'])
```
63,815,087
I'm porting some Python 2 legacy code, and I have this class:

```
class myfile(file):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, *args):
        return file.__init__(self, *args)
    def read(self, size=-1):
        return create_string_buffer(file.read(self, size))
```

It's used like a file object:

```
self._file = myfile(name, mode, buffering)
self._file.seek(self.si*self.blocksize)
```

I'm trying to implement it in Python 3 like so:

```
class myfile(io.FileIO):
    "Wrapper for file object whose read member returns a string buffer"
    def __init__(self, name, mode, *args, **kwargs):
        super(myfile, self).__init__(name, mode, closefd=True, *args, **kwargs)
    def read(self, size=-1):
        return create_string_buffer(self.read(size))
```

The problem is that the constructor for FileIO doesn't take the `buffering` argument, and Python throws a `TypeError: fileio() takes at most 3 arguments (4 given)` error. The [Python 3 open function](https://docs.python.org/3/library/functions.html#open) is what I need. Can I inherit from that?

I've looked at the [PyFile_FromFd](https://docs.python.org/3.8/c-api/file.html) C function, but it needs an open file descriptor, and I'm concerned that the behaviour is not going to be the same.

Thank you!!!
2020/09/09
[ "https://Stackoverflow.com/questions/63815087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6271889/" ]
We can `groupby` on `cat1`, `cat2` and `cat3` and recursively build the dictionary based on the grouped categories:

```
def set_val(d, k, v):
    if len(k) == 1:
        d[k[0]] = v
    else:
        d[k[0]] = set_val(d.get(k[0], {}), k[1:], v)
    return d

dct = {}
for k, g in df.groupby(['cat1', 'cat2', 'cat3']):
    set_val(dct, k, {'products': g[['name', 'color', 'amount']].to_dict('records')})
```

---

```
print(dct)

{'A': {'BB': {'CC': {'products': [{'amount': 132, 'color': 'red', 'name': 'P1'},
                                  {'amount': 51, 'color': 'blue', 'name': 'P2'}]}},
       'BC': {'CD': {'products': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}}},
 'B': {'BB': {'CD': {'products': [{'amount': 421, 'color': 'green', 'name': 'P1'},
                                  {'amount': 55, 'color': 'yellow', 'name': 'P4'}]}}},
 'C': {'BB': {'CC': {'products': [{'amount': 11, 'color': 'red', 'name': 'P1'}]}},
       'BC': {'CD': {'products': [{'amount': 123, 'color': 'blue', 'name': 'P3'}]},
              'CE': {'products': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}}
```
This is a *generic* method adapted from [Shubham Sharma's great solution](https://stackoverflow.com/a/63816750/1167012):

```
def gen_nested_dict(dataframe, group, inner_key, inner_dict):
    def set_val(d, k2, v):
        if len(k2) == 1:
            d[k2[0]] = v
        else:
            d[k2[0]] = set_val(d.get(k2[0], {}), k2[1:], v)
        return d

    dct = {}
    for k, g in dataframe.groupby(group):
        set_val(dct, k, {inner_key: g[inner_dict].to_dict('records')})
    return dct

mydct = gen_nested_dict(df, ['cat1', 'cat2', 'cat3'], 'products', ['name', 'color', 'amount'])
```