60,548,289
I don't know why I am getting this error. Below is the code I am using. **settings.py** ``` TEMPLATE_DIRS = (os.path.join(os.path.dirname(BASE_DIR), "mysite", "static", "templates"),) ``` **urls.py** ``` from django.urls import path from django.conf.urls import include, url from django.contrib.auth import views as auth_views from notes import views as notes_views urlpatterns = [ url(r'^$', notes_views.home, name='home'), url(r'^admin/', admin.site.urls), ]``` **views.py** `def home(request): notes = Note.objects template = loader.get_template('note.html') context = {'notes': notes} return render(request, 'templates/note.html', context)` NOTE: I am following this tutorial - https://pythonspot.com/django-tutorial-building-a-note-taking-app/
2020/03/05
[ "https://Stackoverflow.com/questions/60548289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5227269/" ]
**Second solution** - this is because you have not bound the function before using it in the click event. Please refer to [Handling events](https://reactjs.org/docs/handling-events.html). So add this line inside the constructor: ``` this.resetTimer = this.resetTimer.bind(this); ``` I hope this solves your problem :)
You have to bind the method to the component instance, as suggested by other users. First approach, inside the constructor: `this.methodName = this.methodName.bind(this);` Inside render() ``` render(){ return( <button onClick={this.methodName}></button> ) } ``` Second approach ``` render(){ return( <button onClick={()=>this.methodName()}></button> ) } ``` Third approach ``` methodName = () => { //scope } render(){ return( <button onClick={this.methodName}></button> ) } ``` The third form (class fields) is not available in older React setups without the experimental class-properties Babel plugin; in that case you should always use the first approach. The second approach should be used when you need to pass a parameter; otherwise avoid it, since it creates a new function on every render. Also please note: if you just write ``` <button onClick={this.methodName()}></button> ``` that means you are calling the method during render, i.e. whether you click or not, the method will run on every render.
64,965,247
I was developing a bot on Discord, and I want to log when user roles change. I tried the code below, and that was just a start. ```py TOKEN = "" client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') @client.event async def on_message(message): print(message) @client.event async def on_member_update(before, after): print(before) print(after) client.run(TOKEN) ``` When I type a message in a channel, it prints the message to the Python console. However, when I add a role to myself in the same guild, it does not print anything. **Note: I enabled `presence intent` and `server member intent` in the Discord developer portal**
2020/11/23
[ "https://Stackoverflow.com/questions/64965247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14041512/" ]
Your intents should be enabled both on the portal and in the code itself. Here is how you do it in the code. ```py intents = discord.Intents().all() client = discord.Client(intents=intents) ``` And according to the [docs of on\_member\_update](https://discordpy.readthedocs.io/en/latest/api.html#discord.on_member_update): `This requires Intents.members to be enabled.` That is why it did not work.
You should also activate the intents from your code, like: ```py intents = discord.Intents.default() intents.members = True intents.presences = True client = discord.Client(intents=intents) ``` See the [reference for more information](https://discordpy.readthedocs.io/en/latest/intents.html).
61,296,763
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in TensorFlow. This is the code I am using to use the ONNX file in Python: ```py import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` When trying to use this code to classify the same test set, it seems to be classifying the images correctly, but it is very slow and freezes my computer as it reaches high memory usage of up to 95+% at some points. I also noticed that while classifying, the command prompt prints this: ``` 2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms. ``` Is there any way I can make this Python code classify faster?
2020/04/18
[ "https://Stackoverflow.com/questions/61296763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Maybe you could try to find out which part of the code takes a long time, this way: ``` import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image import datetime now = datetime.datetime.now() onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' later = datetime.datetime.now() difference = later - now print("Loading time : %f ms" % (difference.total_seconds() * 1000)) img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) now = datetime.datetime.now() probabilities = tf_rep.run(img) later = datetime.datetime.now() difference = later - now print("Prediction time : %f ms" % (difference.total_seconds() * 1000)) print(probabilities) ``` (Note the use of `total_seconds()`: the `microseconds` attribute only holds the sub-second part, so it undercounts durations longer than one second.) Let me know what the output looks like :)
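The same measure-each-stage idea can be sketched with the stdlib's `time.perf_counter`, which sidesteps the pitfall that `timedelta.microseconds` only carries the sub-second part of a duration. The `slow_step` function here is a hypothetical stand-in for the expensive model call:

```python
import time

def timed(label, fn, *args):
    # Run fn(*args), report the wall-clock time, and return the result.
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print("%s : %.3f ms" % (label, elapsed_ms))
    return result

# Hypothetical stand-in for the expensive tf_rep.run(img) call.
def slow_step(x):
    time.sleep(0.05)
    return x * 2

value = timed("Prediction time", slow_step, 21)
print(value)  # 42
```

Wrapping each stage (model loading, preprocessing, inference) in `timed` pinpoints where the seconds actually go.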
There are a few points to consider while working with TensorFlow in Python. A GPU will speed up the whole processing considerably; for that, you have to install CUDA support. Apart from this, the editor you use can sometimes matter as well; from my experience, VS Code works better than Spyder. I hope it helps.
61,296,763
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in TensorFlow. This is the code I am using to use the ONNX file in Python: ```py import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` When trying to use this code to classify the same test set, it seems to be classifying the images correctly, but it is very slow and freezes my computer as it reaches high memory usage of up to 95+% at some points. I also noticed that while classifying, the command prompt prints this: ``` 2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms. ``` Is there any way I can make this Python code classify faster?
2020/04/18
[ "https://Stackoverflow.com/questions/61296763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Maybe you could try to find out which part of the code takes a long time, this way: ``` import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image import datetime now = datetime.datetime.now() onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' later = datetime.datetime.now() difference = later - now print("Loading time : %f ms" % (difference.total_seconds() * 1000)) img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) now = datetime.datetime.now() probabilities = tf_rep.run(img) later = datetime.datetime.now() difference = later - now print("Prediction time : %f ms" % (difference.total_seconds() * 1000)) print(probabilities) ``` (Note the use of `total_seconds()`: the `microseconds` attribute only holds the sub-second part, so it undercounts durations longer than one second.) Let me know what the output looks like :)
Since the command prompt states that your program takes a long time to perform constant folding, it might be worthwhile to turn this off. [Based on this documentation](https://www.tensorflow.org/guide/graph_optimization), you could try running: ``` import contextlib import numpy as np import onnx from onnx_tf.backend import prepare from PIL import Image import tensorflow as tf @contextlib.contextmanager def options(options): old_opts = tf.config.optimizer.get_experimental_options() tf.config.optimizer.set_experimental_options(options) try: yield finally: tf.config.optimizer.set_experimental_options(old_opts) with options({'constant_folding': False}): onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` This disables the constant folding performed in the TensorFlow graph optimization. This can work both ways: on the one hand it will not reach the constant folding deadline, but on the other hand disabling constant folding can result in significant runtime increases. Anyway it is worth trying, good luck!
61,296,763
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in TensorFlow. This is the code I am using to use the ONNX file in Python: ```py import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` When trying to use this code to classify the same test set, it seems to be classifying the images correctly, but it is very slow and freezes my computer as it reaches high memory usage of up to 95+% at some points. I also noticed that while classifying, the command prompt prints this: ``` 2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms. ``` Is there any way I can make this Python code classify faster?
2020/04/18
[ "https://Stackoverflow.com/questions/61296763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In this case, it appears that the [Grappler optimization suite](https://web.stanford.edu/class/cs245/slides/TFGraphOptimizationsStanford.pdf) has encountered some kind of infinite loop or memory leak. I would recommend filing an issue against the [GitHub repo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/grappler). It's challenging to debug why constant folding is taking so long, but you may have better performance using the [ONNX TensorRT backend](https://github.com/onnx/onnx-tensorrt) as compared to the TensorFlow backend. It achieves better performance on Nvidia GPUs while compiling typical graphs more quickly. Constant folding usually doesn't provide large speedups for well-optimized models. ``` import onnx import onnx_tensorrt.backend as backend import numpy as np from PIL import Image model = onnx.load("trainednet.onnx") engine = backend.prepare(model, device='CUDA:1') filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) output_data = engine.run(img)[0] print(output_data) ```
There are a few points to consider while working with TensorFlow in Python. A GPU will speed up the whole processing considerably; for that, you have to install CUDA support. Apart from this, the editor you use can sometimes matter as well; from my experience, VS Code works better than Spyder. I hope it helps.
61,296,763
I have trained a CNN in Matlab 2019b that classifies images between three classes. When this CNN was tested in Matlab it was functioning fine and only took 10-15 seconds to classify an image. I used the exportONNXNetwork function in Matlab so that I can implement my CNN in TensorFlow. This is the code I am using to use the ONNX file in Python: ```py import onnx from onnx_tf.backend import prepare import numpy as np from PIL import Image onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` When trying to use this code to classify the same test set, it seems to be classifying the images correctly, but it is very slow and freezes my computer as it reaches high memory usage of up to 95+% at some points. I also noticed that while classifying, the command prompt prints this: ``` 2020-04-18 18:26:39.214286: W tensorflow/core/grappler/optimizers/meta_optimizer.cc:530] constant_folding failed: Deadline exceeded: constant_folding exceeded deadline., time = 486776.938ms. ``` Is there any way I can make this Python code classify faster?
2020/04/18
[ "https://Stackoverflow.com/questions/61296763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In this case, it appears that the [Grappler optimization suite](https://web.stanford.edu/class/cs245/slides/TFGraphOptimizationsStanford.pdf) has encountered some kind of infinite loop or memory leak. I would recommend filing an issue against the [GitHub repo](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/python/grappler). It's challenging to debug why constant folding is taking so long, but you may have better performance using the [ONNX TensorRT backend](https://github.com/onnx/onnx-tensorrt) as compared to the TensorFlow backend. It achieves better performance on Nvidia GPUs while compiling typical graphs more quickly. Constant folding usually doesn't provide large speedups for well-optimized models. ``` import onnx import onnx_tensorrt.backend as backend import numpy as np from PIL import Image model = onnx.load("trainednet.onnx") engine = backend.prepare(model, device='CUDA:1') filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) output_data = engine.run(img)[0] print(output_data) ```
Since the command prompt states that your program takes a long time to perform constant folding, it might be worthwhile to turn this off. [Based on this documentation](https://www.tensorflow.org/guide/graph_optimization), you could try running: ``` import contextlib import numpy as np import onnx from onnx_tf.backend import prepare from PIL import Image import tensorflow as tf @contextlib.contextmanager def options(options): old_opts = tf.config.optimizer.get_experimental_options() tf.config.optimizer.set_experimental_options(options) try: yield finally: tf.config.optimizer.set_experimental_options(old_opts) with options({'constant_folding': False}): onnx_model = onnx.load('trainednet.onnx') tf_rep = prepare(onnx_model) filepath = 'filepath.png' img = Image.open(filepath).resize((224,224)).convert("RGB") img = np.array(img).transpose((2,0,1)) img = np.expand_dims(img, 0) img = img.astype(np.uint8) probabilities = tf_rep.run(img) print(probabilities) ``` This disables the constant folding performed in the TensorFlow graph optimization. This can work both ways: on the one hand it will not reach the constant folding deadline, but on the other hand disabling constant folding can result in significant runtime increases. Anyway it is worth trying, good luck!
50,315,645
I have a simple script which is using [signalr-client-py](https://github.com/TargetProcess/signalr-client-py) as an external module. ``` from requests import Session from signalr import Connection import threading ``` When I try to run my script using `sudo python myScriptName.py` I get an error: ``` Traceback (most recent call last): File "buttonEventDetectSample.py", line 3, in <module> from signalrManager import * File "/home/pi/Desktop/GitRepo/DiatAssign/Main/signalrManager.py", line 2, in <module> from signalr import Connection ImportError: No module named signalr ``` If I run my script typing only `python myScriptName.py` it works perfectly fine, but I need to have **sudo** in front because later on my other scripts (that use this one) perform write operations on the file system. I am quite new to Python and that's why I need to know how I can handle this situation. If I type `pydoc modules` I get a list which contains: ``` signalr signalrManager ``` If I type `pip freeze` I can see listed: ``` signalr-client==0.0.7 ```
2018/05/13
[ "https://Stackoverflow.com/questions/50315645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2128702/" ]
By default, sudo runs commands in a different environment. You can ask sudo to preserve the environment with the `-E` switch: ``` sudo -E python myScriptName.py ``` This comes with its own security risks, so be careful.
You need to check where signalr is installed. sudo runs the program in the environment available to root, and if signalr is not installed globally it won't be picked up. Try `sudo pip freeze` to see what is available in the root environment.
50,315,645
I have a simple script which is using [signalr-client-py](https://github.com/TargetProcess/signalr-client-py) as an external module. ``` from requests import Session from signalr import Connection import threading ``` When I try to run my script using `sudo python myScriptName.py` I get an error: ``` Traceback (most recent call last): File "buttonEventDetectSample.py", line 3, in <module> from signalrManager import * File "/home/pi/Desktop/GitRepo/DiatAssign/Main/signalrManager.py", line 2, in <module> from signalr import Connection ImportError: No module named signalr ``` If I run my script typing only `python myScriptName.py` it works perfectly fine, but I need to have **sudo** in front because later on my other scripts (that use this one) perform write operations on the file system. I am quite new to Python and that's why I need to know how I can handle this situation. If I type `pydoc modules` I get a list which contains: ``` signalr signalrManager ``` If I type `pip freeze` I can see listed: ``` signalr-client==0.0.7 ```
2018/05/13
[ "https://Stackoverflow.com/questions/50315645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2128702/" ]
You need to check where signalr is installed. sudo runs the program in the environment available to root, and if signalr is not installed globally it won't be picked up. Try `sudo pip freeze` to see what is available in the root environment.
Another easy solution is to install the required packages via `sudo` (even if they are already installed for your normal user) instead of trying to match the paths: `sudo pip3 install <your-required-package-name>` After that you can execute the scripts via `sudo`.
50,315,645
I have a simple script which is using [signalr-client-py](https://github.com/TargetProcess/signalr-client-py) as an external module. ``` from requests import Session from signalr import Connection import threading ``` When I try to run my script using `sudo python myScriptName.py` I get an error: ``` Traceback (most recent call last): File "buttonEventDetectSample.py", line 3, in <module> from signalrManager import * File "/home/pi/Desktop/GitRepo/DiatAssign/Main/signalrManager.py", line 2, in <module> from signalr import Connection ImportError: No module named signalr ``` If I run my script typing only `python myScriptName.py` it works perfectly fine, but I need to have **sudo** in front because later on my other scripts (that use this one) perform write operations on the file system. I am quite new to Python and that's why I need to know how I can handle this situation. If I type `pydoc modules` I get a list which contains: ``` signalr signalrManager ``` If I type `pip freeze` I can see listed: ``` signalr-client==0.0.7 ```
2018/05/13
[ "https://Stackoverflow.com/questions/50315645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2128702/" ]
By default, sudo runs commands in a different environment. You can ask sudo to preserve the environment with the `-E` switch: ``` sudo -E python myScriptName.py ``` This comes with its own security risks, so be careful.
Another easy solution is to install the required packages via `sudo` (even if they are already installed for your normal user) instead of trying to match the paths: `sudo pip3 install <your-required-package-name>` After that you can execute the scripts via `sudo`.
16,170,268
I have written a python27 module and installed it using `python setup.py install`. Part of that module has a script which I put in my bin folder within the module before I installed it. I think the module has installed properly and works (has been added to site-packages and scripts). I have built a simple script "test.py" that just runs functions and the script from the module. The functions work fine (the expected output prints to the console) but the script does not. I tried `from [module_name] import [script_name]` in test.py which did not work. How do I run a script within the bin of a module from the command line?
2013/04/23
[ "https://Stackoverflow.com/questions/16170268", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2270903/" ]
Are you using `distutils` or `setuptools`? I tested right now, and if it's distutils, it's enough to have `scripts=['bin/script_name']` in your `setup()` call. If instead you're using setuptools, you can avoid having a script inside bin/ altogether and define your entry point by adding `entry_points={'console_scripts': ['script_name = module_name:main']}` inside your `setup()` call (assuming you have a `main` function inside `module_name`). Are you sure that bin/script\_name is marked as executable? What is the exact error you get when trying to run the script? What are the contents of your setup.py?
Please check whether your installed module guards its top-level code with a condition on the global variable `__name__`. I mean: ``` if __name__ == "__main__": ``` The global variable `__name__` is set to the string `"__main__"` only when you start the script manually from the command line (e.g. `python sample.py`). If you use this condition and put all your code under it, that code will not run when you import your installed module from another script. For example (the guarded code in the module will not run when you import it): testImport.py: ``` import sample ...other code here... ``` sample.py: ``` if __name__ == "__main__": print "You will only see this message when you start this module manually from the command line" ```
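A minimal, self-contained sketch of the guard in action (Python 3 syntax; the names are illustrative):

```python
# sample.py: the guarded block runs only when this file is executed
# directly (python sample.py), not when it is imported elsewhere.
def main():
    return "ran as a script"

if __name__ == "__main__":
    # On direct execution __name__ equals "__main__", so this prints.
    print(main())
```

An importing script can then call `sample.main()` explicitly without triggering the print at import time.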
36,212,431
I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each of them and do some operations, then the next line... I want to know how I can do this. It seems that a plain `for` loop cannot do this job.
2016/03/25
[ "https://Stackoverflow.com/questions/36212431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5362936/" ]
``` file1 = open("some_file") file2 = open("other_file") for some_line, other_line in zip(file1, file2): pass # do something silly file1.close() file2.close() ``` Note that `itertools.izip` may be preferred if you don't want to build the whole list of pairs in memory (on Python 2, `zip` returns a list)... also note that this will finish when the end of either file is reached...
Why not read each file into a list, where each element of the list holds one line? Once you have both files loaded into your lists, you can work line by line (index by index) through the lists, doing whatever comparisons/operations you require.
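A self-contained Python 3 sketch of the pairwise-reading idea, using two throwaway files (the paths are made up for the demo). `zip` over file objects stops at the shorter file, while `itertools.zip_longest` pads the missing lines instead:

```python
import os
import tempfile
from itertools import zip_longest

# Create two small throwaway files for the demonstration.
workdir = tempfile.mkdtemp()
path_a = os.path.join(workdir, "a.txt")
path_b = os.path.join(workdir, "b.txt")
with open(path_a, "w") as f:
    f.write("one\ntwo\nthree\n")
with open(path_b, "w") as f:
    f.write("ONE\nTWO\n")

pairs = []
with open(path_a) as fa, open(path_b) as fb:
    # zip_longest keeps going past the shorter file, filling with "".
    for left, right in zip_longest(fa, fb, fillvalue=""):
        pairs.append((left.rstrip("\n"), right.rstrip("\n")))

print(pairs)  # [('one', 'ONE'), ('two', 'TWO'), ('three', '')]
```

Iterating the file objects directly keeps memory flat, since only the current pair of lines is held at a time.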
36,212,431
I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each of them and do some operations, then the next line... I want to know how I can do this. It seems that a plain `for` loop cannot do this job.
2016/03/25
[ "https://Stackoverflow.com/questions/36212431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5362936/" ]
``` file1 = open("some_file") file2 = open("other_file") for some_line, other_line in zip(file1, file2): pass # do something silly file1.close() file2.close() ``` Note that `itertools.izip` may be preferred if you don't want to build the whole list of pairs in memory (on Python 2, `zip` returns a list)... also note that this will finish when the end of either file is reached...
You can put the reads inside a loop, like this: ``` for x in range(0, n): line1 = file1.readline() line2 = file2.readline() # do something with line1 and line2 ``` Try it.
36,212,431
I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each of them and do some operations, then the next line... I want to know how I can do this. It seems that a plain `for` loop cannot do this job.
2016/03/25
[ "https://Stackoverflow.com/questions/36212431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5362936/" ]
``` file1 = open("some_file") file2 = open("other_file") for some_line, other_line in zip(file1, file2): pass # do something silly file1.close() file2.close() ``` Note that `itertools.izip` may be preferred if you don't want to build the whole list of pairs in memory (on Python 2, `zip` returns a list)... also note that this will finish when the end of either file is reached...
You can try the following code: ``` fin1 = open('file1') fin2 = open('file2') content1 = fin1.readlines() content2 = fin2.readlines() length = len(content1) for i in range(length): line1, line2 = content1[i].rstrip('\n'),content2[i].rstrip('\n') # do something fin1.close() fin2.close() ```
23,769,001
I would like to know if it is possible to enable gzip compression for Server-Sent Events (SSE ; Content-Type: text/event-stream). It seems it is possible, according to this book: <http://chimera.labs.oreilly.com/books/1230000000545/ch16.html> But I can't find any example of SSE with gzip compression. I tried to send gzipped messages with the response header field *Content-Encoding* set to "gzip" without success. For experimenting around SSE, I am testing a small web application made in Python with the bottle framework + gevent ; I am just running the bottle WSGI server: ``` @bottle.get('/data_stream') def stream_data(): bottle.response.content_type = "text/event-stream" bottle.response.add_header("Connection", "keep-alive") bottle.response.add_header("Cache-Control", "no-cache") bottle.response.add_header("Content-Encoding", "gzip") while True: # new_data is a gevent AsyncResult object, # .get() just returns a data string when new # data is available data = new_data.get() yield zlib.compress("data: %s\n\n" % data) #yield "data: %s\n\n" % data ``` The code without compression (last line, commented) and without gzip content-encoding header field works like a charm. **EDIT**: thanks to the reply and to this other question: [Python: Creating a streaming gzip'd file-like?](https://stackoverflow.com/questions/2192529/python-creating-a-streaming-gzipd-file-like), I managed to solve the problem: ``` @bottle.route("/stream") def stream_data(): compressed_stream = zlib.compressobj() bottle.response.content_type = "text/event-stream" bottle.response.add_header("Connection", "keep-alive") bottle.response.add_header("Cache-Control", "no-cache, must-revalidate") bottle.response.add_header("Content-Encoding", "deflate") bottle.response.add_header("Transfer-Encoding", "chunked") while True: data = new_data.get() yield compressed_stream.compress("data: %s\n\n" % data) yield compressed_stream.flush(zlib.Z_SYNC_FLUSH) ```
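The `Z_SYNC_FLUSH` technique from the EDIT can be verified offline, without bottle or a browser. This sketch compresses two events through one shared compressor, flushing after each so every chunk is immediately decodable, then decompresses them chunk by chunk on the "client" side:

```python
import zlib

# Server side: one compressor shared across the whole stream.
comp = zlib.compressobj()
chunks = []
for event in ["data: first\n\n", "data: second\n\n"]:
    chunks.append(comp.compress(event.encode()))
    # Z_SYNC_FLUSH pushes out all pending bytes so the client can
    # decode each event as it arrives, without closing the stream.
    chunks.append(comp.flush(zlib.Z_SYNC_FLUSH))

# Client side: one decompressor fed chunk by chunk.
decomp = zlib.decompressobj()
received = b"".join(decomp.decompress(c) for c in chunks)
print(received.decode())  # the two events, byte for byte
```

The key point is reusing a single `compressobj`/`decompressobj` pair for the lifetime of the connection; creating a fresh `zlib.compress` per message (as in the original attempt) produces independent streams the browser cannot stitch together.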
2014/05/20
[ "https://Stackoverflow.com/questions/23769001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2753095/" ]
TL;DR: If the requests are not cached, you likely want to use zlib and declare Content-Encoding to be 'deflate'. That change alone should make your code work. --- If you declare Content-Encoding to be gzip, you need to actually use gzip. They are based on the same compression algorithm, but gzip has some extra framing. This works, for example: ``` import gzip import StringIO from bottle import response, route @route('/') def get_data(): response.add_header("Content-Encoding", "gzip") s = StringIO.StringIO() with gzip.GzipFile(fileobj=s, mode='w') as f: f.write('Hello World') return s.getvalue() ``` That only really makes sense if you use an actual file as a cache, though.
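On Python 3 the same in-memory gzip idea uses `io.BytesIO` (gzip frames are bytes, not text). A quick round-trip sketch showing that a client seeing `Content-Encoding: gzip` can undo the framing:

```python
import gzip
import io

payload = b"Hello World"

# Compress into an in-memory buffer rather than a file on disk.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(payload)
body = buf.getvalue()

# A client that received body with Content-Encoding: gzip reverses it.
restored = gzip.decompress(body)
assert restored == payload
print(len(body), "compressed bytes for", len(payload), "payload bytes")
```

For a payload this small the gzip header actually makes the body larger than the original, which is why whole-body gzip pays off mainly for sizeable, cacheable responses.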
There's also middleware you can use so you don't need to worry about gzipping responses for each of your methods. Here's one I used recently: <https://code.google.com/p/ibkon-wsgi-gzip-middleware/> This is how I used it (I'm using bottle.py with the gevent server): ``` from gzip_middleware import Gzipper import bottle app = Gzipper(bottle.app()) run(app = app, host='0.0.0.0', port=8080, server='gevent') ``` For this particular library, you can set which types of responses you want to compress by modifying the `DEFAULT_COMPRESSABLES` variable, for example ``` DEFAULT_COMPRESSABLES = set(['text/plain', 'text/html', 'text/css', 'application/json', 'application/x-javascript', 'text/xml', 'application/xml', 'application/xml+rss', 'text/javascript', 'image/gif']) ``` All responses go through the middleware and get gzipped without modifying your existing code. By default, it compresses responses whose content-type belongs to `DEFAULT_COMPRESSABLES` and whose content-length is greater than 200 characters.
47,310,884
I need to passively install Python in my application's package installation, so I use the following: ``` python-3.5.4-amd64.exe /passive PrependPath=1 ``` according to this: [3.1.4. Installing Without UI](https://docs.python.org/3.6/using/windows.html#installing-without-ui) I use the PrependPath parameter, which should add the paths to Path in the Windows environment variables. But it seems not to work: the variables do not change at all. If I start the installation manually and select or deselect the checkbox to add to Path, then everything works. The same happens with a clean installation and when modifying the current installation. Unfortunately I do not have another PC with Win 10 Pro to test it. I have also tried it with Python 3.6.3, with the same results. **EDIT:** Also tried with PowerShell `Start-Process python-3.5.4-amd64.exe -ArgumentList /passive , PretendPath=1` with the same results. Also tested on several PCs with Windows 10, same results, so the problem is not just on a single PC. **EDIT:** Of course, all attempts were run as administrator.
2017/11/15
[ "https://Stackoverflow.com/questions/47310884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7031374/" ]
OK, from my point of view it seems to be a bug in the Python installer, and I cannot find any way to make it work. I have found the following workaround: use py.exe, which is a wrapper for all versions of Python on the local machine, located in C:\Windows, so you can run it directly from CMD anywhere, since C:\Windows is a standard part of the Path variable. ``` py -3.5 -c "import sys; print(sys.executable[:-10])" ``` This gives me the directory of the Python 3.5 installation. Then I add it to Path manually with: ``` setx Path "%Path%;PythonLocFromPreviousCommand" ```
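The interpreter directory can also be obtained portably from inside Python via `sys.executable`, which avoids slicing a fixed number of characters off the path (the `[:-10]` trick breaks if the executable name differs):

```python
import os
import sys

# sys.executable is the absolute path of the running interpreter;
# its directory is what you would prepend to the Path variable.
python_dir = os.path.dirname(sys.executable)
print(python_dir)
```

Feeding this value into `setx` then works regardless of the Python version or install location.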
Try PowerShell to do that: ``` Start-Process -NoNewWindow .\python.exe /passive ```
47,310,884
I need to passively install Python in my application's package installation, so I use the following: ``` python-3.5.4-amd64.exe /passive PrependPath=1 ``` according to this: [3.1.4. Installing Without UI](https://docs.python.org/3.6/using/windows.html#installing-without-ui) I use the PrependPath parameter, which should add paths to Path in the Windows environment variables, but it seems not to work: the variables do not change. If I start the installation manually and select or deselect the checkbox to add Python to Path, then everything works. The same happens with a clean installation and when modifying the current installation. Unfortunately I do not have another PC with Win 10 Pro to test it. I have also tried it with Python 3.6.3, with the same results. **EDIT:** Also tried with PowerShell `Start-Process python-3.5.4-amd64.exe -ArgumentList /passive , PretendPath=1` with the same results. Also tested on several PCs with Windows 10, same results, so the problem is not just on a single PC. **EDIT:** Of course, all attempts were run as administrator.
2017/11/15
[ "https://Stackoverflow.com/questions/47310884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7031374/" ]
Ok, from my point of view it seems to be a bug in the Python installer, and I cannot find any way to make it work. I have found the following workaround: use py.exe, which is a wrapper for all versions of Python on the local machine, located in C:\Windows, so you can run it directly from CMD anywhere, since C:\Windows is a standard part of the Path variable. ``` py -3.5 -c "import sys; print(sys.executable[:-10])" ``` This gives me the directory of the Python 3.5 installation. I then add it to Path manually with: ``` setx Path "%Path%;PythonLocFromPreviousCommand" ```
Make sure you are using an elevated command prompt (ie: run as administrator).
47,310,884
I need to passively install Python in my application's package installation, so I use the following: ``` python-3.5.4-amd64.exe /passive PrependPath=1 ``` according to this: [3.1.4. Installing Without UI](https://docs.python.org/3.6/using/windows.html#installing-without-ui) I use the PrependPath parameter, which should add paths to Path in the Windows environment variables, but it seems not to work: the variables do not change. If I start the installation manually and select or deselect the checkbox to add Python to Path, then everything works. The same happens with a clean installation and when modifying the current installation. Unfortunately I do not have another PC with Win 10 Pro to test it. I have also tried it with Python 3.6.3, with the same results. **EDIT:** Also tried with PowerShell `Start-Process python-3.5.4-amd64.exe -ArgumentList /passive , PretendPath=1` with the same results. Also tested on several PCs with Windows 10, same results, so the problem is not just on a single PC. **EDIT:** Of course, all attempts were run as administrator.
2017/11/15
[ "https://Stackoverflow.com/questions/47310884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7031374/" ]
Ok, from my point of view it seems to be a bug in the Python installer, and I cannot find any way to make it work. I have found the following workaround: use py.exe, which is a wrapper for all versions of Python on the local machine, located in C:\Windows, so you can run it directly from CMD anywhere, since C:\Windows is a standard part of the Path variable. ``` py -3.5 -c "import sys; print(sys.executable[:-10])" ``` This gives me the directory of the Python 3.5 installation. I then add it to Path manually with: ``` setx Path "%Path%;PythonLocFromPreviousCommand" ```
Have you tried the InstallAllUsers argument? By default it is set to 0, so try to use it like this (which is the same example from [the docs you linked](https://docs.python.org/3.6/using/windows.html#installing-without-ui)): ``` python-3.6.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0 ``` It might make a difference to use `/quiet` over `/passive`. To answer Erik Šťastný's comment, I believe a good solution to your problem is to package Python with your program, to make sure that all the required libraries are preinstalled.
47,310,884
I need to passively install Python in my application's package installation, so I use the following: ``` python-3.5.4-amd64.exe /passive PrependPath=1 ``` according to this: [3.1.4. Installing Without UI](https://docs.python.org/3.6/using/windows.html#installing-without-ui) I use the PrependPath parameter, which should add paths to Path in the Windows environment variables, but it seems not to work: the variables do not change. If I start the installation manually and select or deselect the checkbox to add Python to Path, then everything works. The same happens with a clean installation and when modifying the current installation. Unfortunately I do not have another PC with Win 10 Pro to test it. I have also tried it with Python 3.6.3, with the same results. **EDIT:** Also tried with PowerShell `Start-Process python-3.5.4-amd64.exe -ArgumentList /passive , PretendPath=1` with the same results. Also tested on several PCs with Windows 10, same results, so the problem is not just on a single PC. **EDIT:** Of course, all attempts were run as administrator.
2017/11/15
[ "https://Stackoverflow.com/questions/47310884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7031374/" ]
Ok, from my point of view it seems to be a bug in the Python installer, and I cannot find any way to make it work. I have found the following workaround: use py.exe, which is a wrapper for all versions of Python on the local machine, located in C:\Windows, so you can run it directly from CMD anywhere, since C:\Windows is a standard part of the Path variable. ``` py -3.5 -c "import sys; print(sys.executable[:-10])" ``` This gives me the directory of the Python 3.5 installation. I then add it to Path manually with: ``` setx Path "%Path%;PythonLocFromPreviousCommand" ```
I also tried the command line options for the python installer and noticed the same issue as you, and here's the solution I found: 1. Download the 64-bit installer from here: <https://www.python.org/downloads/windows/> (the link is titled "*Windows x86-**64** executable installer*") 2. Uninstall any current python installation. * You can use this command: `START python-3.8.3-amd64.exe /uninstall` * *(replace python-3.8.3-amd64.exe with the name of the file you downloaded).* * *(run cmd or your batch file as administrator, by right-clicking, then Run As Administrator).* 3. Install (as admin) python 64-bit for all users, **with the START command**: `START python-3.8.3-amd64.exe /passive PrependPath=1 Include_pip=1 InstallAllUsers=1` * *(replace python-3.8.3-amd64.exe with the name of the file you downloaded).* * *(run cmd or your batch file as administrator, by right-clicking, then Run As Administrator).* * *(More info on python installer command line options: <https://docs.python.org/3/using/windows.html#installing-without-ui>).* 4. (Optional) Open a new cmd window to verify that python works from any location: * You can run this command:`python --version` * *(If you don't see output like "Python 3.8.3", then Python has not been added to your PATH).* * *(Note: That command didn't work until I opened a new command prompt window).* For me, all of the details were important, so don't skip any.
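If these steps are driven from a script rather than typed by hand, they can also be wrapped in Python with `subprocess`. A sketch under the assumption that the installer file sits in the current directory; the filename and the `dry_run` helper are illustrative:

```python
import subprocess

INSTALLER = "python-3.8.3-amd64.exe"  # hypothetical: your downloaded file

def installer_cmd(uninstall=False):
    """Build the same command lines as the START commands in the steps above."""
    if uninstall:
        return [INSTALLER, "/uninstall"]
    return [INSTALLER, "/passive", "PrependPath=1",
            "Include_pip=1", "InstallAllUsers=1"]

def run_installer(uninstall=False, dry_run=True):
    cmd = installer_cmd(uninstall)
    if dry_run:
        # Return the command line for inspection instead of executing it.
        return " ".join(cmd)
    # Executing for real needs an elevated (Run as Administrator) shell.
    return subprocess.run(cmd, check=True)

print(run_installer())
```

The same caveat as above applies: run the wrapping script itself as administrator, or the installer will fail to write the machine-wide PATH entry.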
34,076,773
So I have an empty main frame called `MainWindow` and a `WelcomeWidget` that gets called immediately on program startup and loads inside the main frame. I then want the button `next_btn` inside `WelcomeWidget` to call the `LicenseWidget` QWidget inside the `MainWindow` class. How do I do that? Here is my code: **Main.py** ``` #!/usr/bin/env python # -*- coding: utf-8 -*- # # Main.py # # Copyright 2015 Ognjen Galic <gala@thinkpad> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, # MA 02110-1301, USA. 
# # from PyQt4 import QtGui, QtCore from MainWindow import Ui_MainWindow from WelcomeWidget import Ui_welcome_widget from LicenseWidget import Ui_license_widget import sys class WelcomeWidget(QtGui.QWidget, Ui_welcome_widget): def __init__(self, parent=None): super(WelcomeWidget, self).__init__() self.setupUi(self) self.cancel_btn.pressed.connect(self.close) self.next_btn.pressed.connect(self.license_show) def close(self): sys.exit(0) def license_show(self): mainWindow.cw = LicenseWidget(self) mainWindow.setCentralWidget(self.cw) class LicenseWidget(QtGui.QWidget, Ui_license_widget): def __init__(self, parent=None): super(LicenseWidget, self).__init__() self.setupUi(self) class mainWindow(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): super(mainWindow, self).__init__() self.setupUi(self) mainWindow.cw = WelcomeWidget(self) self.setCentralWidget(self.cw) def main(): app = QtGui.QApplication(sys.argv) ui = mainWindow() ui.show() sys.exit(app.exec_()) main() ``` **LicenseWidget.py** ``` # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'LicenseWidget.ui' # # Created by: PyQt4 UI code generator 4.11.4 # # WARNING! All changes made in this file will be lost! 
from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) class Ui_license_widget(object): def setupUi(self, license_widget): license_widget.setObjectName(_fromUtf8("license_widget")) license_widget.resize(640, 420) license_widget.setMinimumSize(QtCore.QSize(640, 420)) license_widget.setMaximumSize(QtCore.QSize(640, 420)) self.frame_btn = QtGui.QFrame(license_widget) self.frame_btn.setGeometry(QtCore.QRect(0, 365, 641, 56)) self.frame_btn.setFrameShape(QtGui.QFrame.StyledPanel) self.frame_btn.setFrameShadow(QtGui.QFrame.Raised) self.frame_btn.setObjectName(_fromUtf8("frame_btn")) self.no_btn = QtGui.QPushButton(self.frame_btn) self.no_btn.setGeometry(QtCore.QRect(540, 15, 87, 26)) self.no_btn.setObjectName(_fromUtf8("no_btn")) self.yes_btn = QtGui.QPushButton(self.frame_btn) self.yes_btn.setGeometry(QtCore.QRect(430, 15, 87, 26)) self.yes_btn.setObjectName(_fromUtf8("yes_btn")) self.back_btn = QtGui.QPushButton(self.frame_btn) self.back_btn.setEnabled(True) self.back_btn.setGeometry(QtCore.QRect(346, 15, 87, 26)) self.back_btn.setCheckable(False) self.back_btn.setObjectName(_fromUtf8("back_btn")) self.main_frame = QtGui.QFrame(license_widget) self.main_frame.setGeometry(QtCore.QRect(0, 0, 640, 75)) self.main_frame.setMinimumSize(QtCore.QSize(8, 0)) self.main_frame.setFrameShape(QtGui.QFrame.StyledPanel) self.main_frame.setFrameShadow(QtGui.QFrame.Raised) self.main_frame.setObjectName(_fromUtf8("main_frame")) self.Title = QtGui.QLabel(self.main_frame) self.Title.setGeometry(QtCore.QRect(10, 5, 311, 61)) self.Title.setObjectName(_fromUtf8("Title")) self.license_cont = QtGui.QTextEdit(license_widget) 
self.license_cont.setGeometry(QtCore.QRect(0, 74, 640, 260)) self.license_cont.setObjectName(_fromUtf8("license_cont")) self.agree_or_not = QtGui.QLabel(license_widget) self.agree_or_not.setGeometry(QtCore.QRect(10, 340, 621, 17)) self.agree_or_not.setObjectName(_fromUtf8("agree_or_not")) self.retranslateUi(license_widget) QtCore.QMetaObject.connectSlotsByName(license_widget) def retranslateUi(self, license_widget): license_widget.setWindowTitle(_translate("license_widget", "Form", None)) self.no_btn.setText(_translate("license_widget", "No", None)) self.yes_btn.setText(_translate("license_widget", "Yes", None)) self.back_btn.setText(_translate("license_widget", "Back", None)) self.Title.setText(_translate("license_widget", "<html><head/><body><p><span style=\" font-size:11pt; font-weight:600;\">Program License</span></p><p>Please read the license carefully</p></body></html>", None)) self.license_cont.setHtml(_translate("license_widget", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n" "<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n" "p, li { white-space: pre-wrap; }\n" "</style></head><body style=\" font-family:\'Droid Sans\'; font-size:10pt; font-weight:400; font-style:normal;\">\n" "<p style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">example license</p></body></html>", None)) self.agree_or_not.setText(_translate("license_widget", "Do you agree to the license? If you click \"No\", the installer will close.", None)) ``` **WelcomeWidget.py** ``` # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'WelcomeWidget.ui' # # Created by: PyQt4 UI code generator 4.11.4 # # WARNING! All changes made in this file will be lost! 
from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) class Ui_welcome_widget(object): def setupUi(self, welcome_widget): welcome_widget.setObjectName(_fromUtf8("welcome_widget")) welcome_widget.resize(640, 420) welcome_widget.setMinimumSize(QtCore.QSize(640, 420)) welcome_widget.setMaximumSize(QtCore.QSize(640, 420)) self.side_pixmap = QtGui.QLabel(welcome_widget) self.side_pixmap.setGeometry(QtCore.QRect(0, 0, 220, 365)) self.side_pixmap.setText(_fromUtf8("")) self.side_pixmap.setPixmap(QtGui.QPixmap(_fromUtf8("media/InstallShield.png"))) self.side_pixmap.setObjectName(_fromUtf8("side_pixmap")) self.welcome_frame = QtGui.QFrame(welcome_widget) self.welcome_frame.setGeometry(QtCore.QRect(0, 365, 641, 56)) self.welcome_frame.setFrameShape(QtGui.QFrame.StyledPanel) self.welcome_frame.setFrameShadow(QtGui.QFrame.Raised) self.welcome_frame.setObjectName(_fromUtf8("welcome_frame")) self.cancel_btn = QtGui.QPushButton(self.welcome_frame) self.cancel_btn.setGeometry(QtCore.QRect(540, 15, 87, 26)) self.cancel_btn.setObjectName(_fromUtf8("cancel_btn")) self.next_btn = QtGui.QPushButton(self.welcome_frame) self.next_btn.setGeometry(QtCore.QRect(430, 15, 87, 26)) self.next_btn.setObjectName(_fromUtf8("next_btn")) self.back_btn = QtGui.QPushButton(self.welcome_frame) self.back_btn.setEnabled(False) self.back_btn.setGeometry(QtCore.QRect(346, 15, 87, 26)) self.back_btn.setObjectName(_fromUtf8("back_btn")) self.welcome_header = QtGui.QLabel(welcome_widget) self.welcome_header.setEnabled(True) self.welcome_header.setGeometry(QtCore.QRect(240, 10, 361, 91)) font = QtGui.QFont() font.setPointSize(20) 
self.welcome_header.setFont(font) self.welcome_header.setWordWrap(True) self.welcome_header.setObjectName(_fromUtf8("welcome_header")) self.welcome_desc = QtGui.QLabel(welcome_widget) self.welcome_desc.setGeometry(QtCore.QRect(240, 120, 391, 51)) self.welcome_desc.setWordWrap(True) self.welcome_desc.setObjectName(_fromUtf8("welcome_desc")) self.retranslateUi(welcome_widget) QtCore.QMetaObject.connectSlotsByName(welcome_widget) def retranslateUi(self, welcome_widget): welcome_widget.setWindowTitle(_translate("welcome_widget", "Form", None)) self.cancel_btn.setText(_translate("welcome_widget", "Cancel", None)) self.next_btn.setText(_translate("welcome_widget", "Next", None)) self.back_btn.setText(_translate("welcome_widget", "Back", None)) self.welcome_header.setText(_translate("welcome_widget", "<html><head/><body><p><span style=\" font-size:16pt;\">Welcome to the InstallShield wizard for Google Chrome.</span></p></body></html>", None)) self.welcome_desc.setText(_translate("welcome_widget", "<html><head/><body><p>This install wizard will install Google Chrome to your computer. To continue press Next.</p></body></html>", None)) ``` **MainWindow.py** ``` # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'MainWindow.ui' # # Created by: PyQt4 UI code generator 4.11.4 # # WARNING! All changes made in this file will be lost! 
from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: def _fromUtf8(s): return s try: _encoding = QtGui.QApplication.UnicodeUTF8 def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig, _encoding) except AttributeError: def _translate(context, text, disambig): return QtGui.QApplication.translate(context, text, disambig) class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(_fromUtf8("MainWindow")) MainWindow.resize(640, 420) MainWindow.setMinimumSize(QtCore.QSize(640, 420)) MainWindow.setMaximumSize(QtCore.QSize(640, 420)) MainWindow.setToolButtonStyle(QtCore.Qt.ToolButtonIconOnly) MainWindow.setAnimated(False) self.main_widget = QtGui.QWidget(MainWindow) self.main_widget.setObjectName(_fromUtf8("main_widget")) MainWindow.setCentralWidget(self.main_widget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): MainWindow.setWindowTitle(_translate("MainWindow", "InstallShield Wizard", None)) ``` If this worked straight out of the WelcomeWidget class and told the MainWindow class that would be awesome. ``` class WelcomeWidget(QtGui.QWidget, Ui_welcome_widget): [ ... ] def license_show(self): mainWindow.cw = LicenseWidget(self) mainWindow.setCentralWidget(self.cw) ``` Someone with an answer gets an e-cookie!
2015/12/03
[ "https://Stackoverflow.com/questions/34076773", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3855967/" ]
`QWizard` might be of use. Another way would be to lay out both widgets in a `QVBoxLayout` and hide the one you are not interested in; the visible one then takes up all the space. It could even be completely constructed in Qt Creator: just `hide()` what you don't want to see and `show()` what you want to see. It is possible to build complex layouts with lots of widgets in Qt Creator and show only what's necessary for the task at hand.
``` #!/usr/bin/env python # -*- coding: utf-8 -*- # # Main.py # # Copyright 2015 Ognjen Galic <gala@thinkpad> # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, # MA 02110-1301, USA. # # from PyQt4 import QtGui, QtCore from MainWindow import Ui_MainWindow from MainWidget import Ui_main_widget from WelcomeWidget import Ui_welcome_widget from LicenseWidget import Ui_license_widget import sys class mainWindow(QtGui.QMainWindow, Ui_MainWindow): def __init__(self): super(mainWindow, self).__init__() self.setupUi(self) mainWindow.welcomeWidget = WelcomeWidget(self) mainWindow.licenseWidget = LicenseWidget(self) mainWindow.mainWidget = MainWidget(self) self.setCentralWidget(self.mainWidget) self.mainWidget.addWidget(self.welcomeWidget) self.welcomeWidget.next_btn.pressed.connect(self.license_show) self.welcomeWidget.cancel_btn.pressed.connect(self.close_cancel) self.licenseWidget.cancel_btn.pressed.connect(self.close_cancel) self.licenseWidget.back_btn.pressed.connect(self.go_back) self.licenseWidget.i_do.toggled.connect(self.accept_license) self.licenseWidget.i_dont.toggled.connect(self.dont_accept_license) def go_back(self): self.mainWidget.removeWidget(self.licenseWidget) self.mainWidget.addWidget(self.welcomeWidget) def close_cancel(self): sys.exit(0) def license_show(self): self.mainWidget.removeWidget(self.welcomeWidget) 
self.mainWidget.addWidget(self.licenseWidget) def accept_license(self): self.licenseWidget.next_btn.setEnabled(True) def dont_accept_license(self): self.licenseWidget.next_btn.setEnabled(False) class WelcomeWidget(QtGui.QStackedWidget, Ui_welcome_widget): def __init__(self, parent=None): super(WelcomeWidget, self).__init__() self.setupUi(self, "Linux 2.6.32.68") class LicenseWidget(QtGui.QStackedWidget, Ui_license_widget): def __init__(self, parent=None): super(LicenseWidget, self).__init__() license_file = open("license.html") license_text_file = license_file.read() self.setupUi(self, license_text_file) class MainWidget(QtGui.QStackedWidget, Ui_main_widget): def __init__(self, parent=None): super(MainWidget, self).__init__() self.setupUi(self) def main(): app = QtGui.QApplication(sys.argv) ui = mainWindow() ui.show() sys.exit(app.exec_()) main() ``` This works. Thanks to anyone who helped.
56,914,224
I have a dataframe as shown in the picture: [problem dataframe: attdf](https://i.stack.imgur.com/9e5y4.png) I would like to group the data by Source class and Destination class, count the number of rows in each group and sum up Attention values. While trying to achieve that, I am unable to get past this type error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-100-6f2c8b3de8f2> in <module>() ----> 1 attdf.groupby(['Source Class', 'Destination Class']).count() 8 frames pandas/_libs/properties.pyx in pandas._libs.properties.CachedProperty.__get__() /usr/local/lib/python3.6/dist-packages/pandas/core/algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value) 458 table = hash_klass(size_hint or len(values)) 459 uniques, labels = table.factorize(values, na_sentinel=na_sentinel, --> 460 na_value=na_value) 461 462 labels = ensure_platform_int(labels) pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique() TypeError: unhashable type: 'numpy.ndarray' ``` ``` attdf.groupby(['Source Class', 'Destination Class']) ``` gives me a `<pandas.core.groupby.generic.DataFrameGroupBy object at 0x7f1e720f2080>` which I'm not sure how to use to get what I want. Dataframe attdf can be imported from : <https://drive.google.com/open?id=1t_h4b8FQd9soVgYeiXQasY-EbnhfOEYi> Please advise.
2019/07/06
[ "https://Stackoverflow.com/questions/56914224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9003184/" ]
@Adam.Er8 and @jezarael helped me with their inputs. The unhashable type error in my case was caused by the datatypes of the columns in my dataframe. [Original df and df imported from csv](https://i.stack.imgur.com/soqF0.png) It turned out that the original dataframe had two object columns which I was trying to use in the groupby, hence the unhashable type error. Importing the data into a new dataframe straight from the CSV fixed the datatypes, and the type error no longer occurred.
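To make the failure mode concrete: grouping on a column whose cells are numpy arrays (an `object` dtype) raises exactly this `TypeError`, because groupby must hash the key values. Coercing those cells to plain scalars before grouping fixes it. A hypothetical reproduction follows; the column names mirror the question, and `safe_group_counts` is an illustrative helper, not pandas API:

```python
import numpy as np
import pandas as pd

# Cells holding numpy arrays are unhashable, so groupby cannot factorize them.
df = pd.DataFrame({
    "Source Class": [np.array([0]), np.array([0]), np.array([1])],
    "Attention": [1.0, 2.0, 5.0],
})

def safe_group_counts(frame, key):
    """Coerce array cells to hashable scalars, then group, sum and count."""
    fixed = frame.copy()
    fixed[key] = fixed[key].apply(
        lambda v: v.item() if isinstance(v, np.ndarray) and v.size == 1 else str(v)
    )
    return fixed.groupby(key)["Attention"].agg(["sum", "count"])

print(safe_group_counts(df, "Source Class"))
```

Re-reading the data from CSV, as described above, achieves the same thing because the parser infers plain numeric or string dtypes.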
try using [`.agg`](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html) as follows: ```py import pandas as pd attdf = pd.read_csv("attdf.csv") print(attdf.groupby(['Source Class', 'Destination Class']).agg({"Attention": ['sum', 'count']})) ``` Output: ``` Attention sum count Source Class Destination Class 0 0 282.368908 1419 1 7.251101 32 2 3.361009 23 3 22.482438 161 4 14.020189 88 5 10.138409 75 6 11.377947 80 1 0 6.172269 32 1 181.582437 1035 2 9.440956 62 3 12.007303 67 4 3.025752 20 5 4.491725 28 6 0.279559 2 2 0 3.349921 23 1 8.521828 62 2 391.116034 2072 3 9.937170 53 4 0.412747 2 5 4.441985 30 6 0.220316 2 3 0 33.156251 161 1 11.944373 67 2 9.176584 53 3 722.685180 3168 4 29.776050 137 5 8.827215 54 6 2.434347 16 4 0 17.431855 88 1 4.195519 20 2 0.457089 2 3 20.401789 137 4 378.802604 1746 5 3.616083 19 6 1.095061 6 5 0 13.525333 75 1 4.289306 28 2 6.424412 30 3 10.911705 54 4 3.896328 19 5 250.309764 1132 6 8.643153 46 6 0 15.249959 80 1 0.150240 2 2 0.413639 2 3 3.108417 16 4 0.850280 6 5 8.655959 46 6 151.571505 686 ```
5,475,259
I run a small VPS with 512M of memory that currently hosts 3 very low traffic PHP sites and a personal email account. I have been teaching myself Django over the last few weeks and am starting to think about deploying a project. There seem to be a very large number of methods for deploying a Django site. Given the limited resources I have available, what would be the most appropriate option? Will the VPS be suitable to host both Python and PHP sites, or would it be worth getting a separate server? Any advice appreciated. Thanks.
2011/03/29
[ "https://Stackoverflow.com/questions/5475259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570068/" ]
There aren't really a great number of ways to do it. In fact, there's the recommended way - via Apache/mod\_wsgi - and all the other ways. The recommended way is fully documented [here](http://docs.djangoproject.com/en/1.3/howto/deployment/modwsgi/). For a low-traffic site, you should have no trouble fitting it in your 512MB VPS along with your PHP sites.
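For context on what mod_wsgi actually needs from your project: a module exposing a callable named `application` that follows the WSGI contract. Django supplies that callable for you, but the contract itself is tiny; a hand-rolled sketch (nothing Django-specific, purely illustrative):

```python
# Minimal WSGI application: the interface Apache/mod_wsgi loads from the
# configured .wsgi file. In a Django deployment, `application` would instead
# be Django's WSGI handler; this stand-in just shows the shape of the contract.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

PHP runs through a separate Apache handler, so the two stacks coexist on one server without conflict.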
Django has documentation describing possible [server arrangements](http://code.djangoproject.com/wiki/ServerArrangements). For a lightweight yet very robust setup, I'd recommend Nginx; it's much lighter than Apache.
5,475,259
I run a small VPS with 512M of memory that currently hosts 3 very low traffic PHP sites and a personal email account. I have been teaching myself Django over the last few weeks and am starting to think about deploying a project. There seem to be a very large number of methods for deploying a Django site. Given the limited resources I have available, what would be the most appropriate option? Will the VPS be suitable to host both Python and PHP sites, or would it be worth getting a separate server? Any advice appreciated. Thanks.
2011/03/29
[ "https://Stackoverflow.com/questions/5475259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570068/" ]
There aren't really a great number of ways to do it. In fact, there's the recommended way - via Apache/mod\_wsgi - and all the other ways. The recommended way is fully documented [here](http://docs.djangoproject.com/en/1.3/howto/deployment/modwsgi/). For a low-traffic site, you should have no trouble fitting it in your 512MB VPS along with your PHP sites.
I run several low-traffic Django sites on a 256MB VPS without problems. I have Nginx set up as a reverse proxy and to serve static files (JavaScript, CSS, images), and Apache serving Django via mod\_wsgi as described in the documentation. Running PHP sites as well may add a little overhead, but if you're talking about low-traffic "fun" sites then you should be fine.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I ran into a similar problem. It turned out the CSV I had downloaded had no permissions at all, and the error message from pandas did not point this out, making it hard to debug. Check that your file has read permissions.
For `pandas.read_csv` failing with `OSError: Initializing from file failed`, we could try `chmod 600 file.csv` to give the owner read/write permission.
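The same check and fix can be done from Python before handing the path to pandas. This sketch simulates a file downloaded with no permission bits and restores owner read/write, the equivalent of `chmod 600`; the `ensure_readable` helper is illustrative:

```python
import os
import stat
import tempfile

def ensure_readable(path):
    """Add owner read/write bits (like chmod u+rw) if the read bit is missing."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if not mode & stat.S_IRUSR:
        os.chmod(path, mode | stat.S_IRUSR | stat.S_IWUSR)
    return stat.S_IMODE(os.stat(path).st_mode)

# Simulate the failure mode: a CSV with no permission bits at all.
path = os.path.join(tempfile.mkdtemp(), "test.csv")
with open(path, "w") as f:
    f.write("a;b\n1;2\n")
os.chmod(path, 0)

print(oct(ensure_readable(path)))
```

On Windows `os.chmod` only toggles the read-only flag, so this check is mainly useful on Unix-like systems.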
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I ran into the same problem under Windows 10 when trying to read a CSV file whose name is in Chinese. After renaming the file into English there was no problem any more, so maybe you should ensure your full CSV file path is in English. OS: Windows 10; Python: 3.6.5; IPython: 7.0.1; Pandas: 0.23.0. First, when using

```
import pandas as pd
answer_df = pd.read_csv('./答案.csv')
```

my IPython notebook raised 'OSError: Initializing from file failed'. Then I renamed my file to `answers.csv`:

```
import pandas as pd
answer_df = pd.read_csv('./answers.csv')
```

and everything's OK. Hope this helps.
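The rename can also be scripted rather than done by hand; this sketch creates a file with a non-ASCII name (a hypothetical stand-in for the real one) and renames it to an ASCII-only name that the C parser can open:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
old = os.path.join(workdir, "答案.csv")  # non-ASCII name the C parser may reject on Windows
with open(old, "w", encoding="utf-8") as f:
    f.write("a;b\n1;2\n")

new = os.path.join(workdir, "answers.csv")  # ASCII-only replacement
os.rename(old, new)
print(os.path.exists(new))  # True
```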
pandas read\_csv OSError: Initializing from file failed We could try `chmod 600 file.csv`.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
In my case, I was entering the path incorrectly. I was working with a dataset downloaded from Kaggle as a zip. The structure of the download was main.zip/subfiles.zip. I had unzipped main.zip, but the solution was to also unzip subfiles.zip, since the file I wanted was inside it. So the path would be main/subfiles/wantedfile.csv, and that was my simple fix.
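The two-layer unzip can be scripted as well; this sketch builds a tiny stand-in for the Kaggle download and then extracts both layers, so all filenames here are hypothetical:

```python
import io
import os
import tempfile
import zipfile

# Build a tiny stand-in for the Kaggle download: main.zip containing subfiles.zip
workdir = tempfile.mkdtemp()
inner_bytes = io.BytesIO()
with zipfile.ZipFile(inner_bytes, "w") as inner:
    inner.writestr("wantedfile.csv", "a;b\n1;2\n")
main_path = os.path.join(workdir, "main.zip")
with zipfile.ZipFile(main_path, "w") as outer:
    outer.writestr("subfiles.zip", inner_bytes.getvalue())

# Unzip both layers so the CSV exists as a plain file on disk
with zipfile.ZipFile(main_path) as outer:
    outer.extractall(os.path.join(workdir, "main"))
with zipfile.ZipFile(os.path.join(workdir, "main", "subfiles.zip")) as inner:
    inner.extractall(os.path.join(workdir, "main", "subfiles"))

csv_path = os.path.join(workdir, "main", "subfiles", "wantedfile.csv")
print(os.path.exists(csv_path))  # True; read_csv can now open it
```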
OSError: Initializing from file failed. With this error Python indicates the file is not readable: a file permission problem, a file error, an incorrect path, etc. My code giving this error:

```
DataFrame=pd.read_csv("C:\\Users\\arindam\\Documents\\#Books & Docs\\#ML-DATA\\train.csv")
```

Fixed by this:

```
DataFrame=pd.read_csv("C:\\Users\\arindam\\Documents\\train.csv")
```

Make sure the path is readable by Python.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
In my case, I was entering the path incorrectly. I was working with a dataset downloaded from Kaggle as a zip. The structure of the download was main.zip/subfiles.zip. I had unzipped main.zip, but the solution was to also unzip subfiles.zip, since the file I wanted was inside it. So the path would be main/subfiles/wantedfile.csv, and that was my simple fix.
Just change the permissions of the CSV file and it should work: `chmod 750 filename.csv` (on the command line) or `!chmod 750 filename.csv` (in a Jupyter notebook).
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I had the same issue; you should check your permissions. After `chmod 644 file.csv` it worked well.
I had the same problem when trying to load files with Japanese filenames.

```
import pandas as pd
result = pd.read_csv('./result/けっこう.csv')
OSError: Initializing from file failed
```

Then I added the argument `engine="python"`:

```
result = pd.read_csv('./result/けっこう.csv', engine="python")
```

It worked for me.
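To illustrate the `engine` switch in isolation, here is a minimal sketch (reading from an in-memory buffer instead of a real file, and assuming pandas is installed):

```python
import io
import pandas as pd

# In-memory stand-in for the file the C parser refuses to open
data = io.StringIO("a;b\n1;2\n3;4\n")
df = pd.read_csv(data, sep=";", engine="python")  # pure-Python parser: slower but more forgiving
print(df.shape)  # (2, 2)
```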
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
```
import pandas as pd
pd.read_csv("your_file.txt", engine='python')
```

Try this. It totally worked for me.

* source: <http://kkckc.tistory.com/187>
OSError: Initializing from file failed. With this error Python indicates the file is not readable: a file permission problem, a file error, an incorrect path, etc. My code giving this error:

```
DataFrame=pd.read_csv("C:\\Users\\arindam\\Documents\\#Books & Docs\\#ML-DATA\\train.csv")
```

Fixed by this:

```
DataFrame=pd.read_csv("C:\\Users\\arindam\\Documents\\train.csv")
```

Make sure the path is readable by Python.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I had the same issue; you should check your permissions. After `chmod 644 file.csv` it worked well.
I ran into the same problem under Windows 10 when trying to read a CSV file whose name is in Chinese. After renaming the file into English there was no problem any more, so maybe you should ensure your full CSV file path is in English. OS: Windows 10; Python: 3.6.5; IPython: 7.0.1; Pandas: 0.23.0. First, when using

```
import pandas as pd
answer_df = pd.read_csv('./答案.csv')
```

my IPython notebook raised 'OSError: Initializing from file failed'. Then I renamed my file to `answers.csv`:

```
import pandas as pd
answer_df = pd.read_csv('./answers.csv')
```

and everything's OK. Hope this helps.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
```
import pandas as pd
pd.read_csv("your_file.txt", engine='python')
```

Try this. It totally worked for me.

* source: <http://kkckc.tistory.com/187>
I had the same issue, you should check your permissions. After `chmod 644 file.csv` it worked well.
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
```
import pandas as pd
pd.read_csv("your_file.txt", engine='python')
```

Try this. It totally worked for me.

* source: <http://kkckc.tistory.com/187>
Just change the permissions of the CSV file and it should work: `chmod 750 filename.csv` (on the command line) or `!chmod 750 filename.csv` (in a Jupyter notebook).
50,552,404
So far `pandas` read through all my CSV files without any problem, however now there seems to be a problem.. When doing: ``` df = pd.read_csv(r'path to file', sep=';') ``` I get: > > OSError Traceback (most recent call > last) in () > ----> 1 df = pd.read\_csv(r'path > Übersicht\Input\test\test.csv', sep=';') > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, nrows, > na\_values, keep\_default\_na, na\_filter, verbose, skip\_blank\_lines, > parse\_dates, infer\_datetime\_format, keep\_date\_col, date\_parser, > dayfirst, iterator, chunksize, compression, thousands, decimal, > lineterminator, quotechar, quoting, escapechar, comment, encoding, > dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, skipfooter, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 703 skip\_blank\_lines=skip\_blank\_lines) > 704 > --> 705 return \_read(filepath\_or\_buffer, kwds) > 706 > 707 parser\_f.**name** = name > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 443 > 444 # Create the parser. 
> --> 445 parser = TextFileReader(filepath\_or\_buffer, \*\*kwds) > 446 > 447 if chunksize or iterator: > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, f, engine, \*\*kwds) > 812 self.options['has\_index\_names'] = kwds['has\_index\_names'] > 813 > --> 814 self.\_make\_engine(self.engine) > 815 > 816 def close(self): > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > \_make\_engine(self, engine) 1043 def \_make\_engine(self, engine='c'): 1044 if engine == 'c': > -> 1045 self.\_engine = CParserWrapper(self.f, \*\*self.options) 1046 else: 1047 if engine == 'python': > > > c:\program files\python36\lib\site-packages\pandas\io\parsers.py in > **init**(self, src, \*\*kwds) 1682 kwds['allow\_leading\_cols'] = self.index\_col is not False 1683 > -> 1684 self.\_reader = parsers.TextReader(src, \*\*kwds) 1685 1686 # XXX > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.**cinit**() > > > pandas\_libs\parsers.pyx in > pandas.\_libs.parsers.TextReader.\_setup\_parser\_source() > > > OSError: Initializing from file failed > > > Other files in the same folder that are XLS files can be accessed without an issue. When using the Python library like so: ``` import csv file = csv.reader(open(r'pathtofile')) for row in file: print(row) break df = pd.read_csv(file, sep=';') ``` the file is being loaded and the first line is printed. However I get: > > ValueError: Invalid file path or buffer object type: > > > Probably because I can't use `read_csv` this way... How to get the first `pandas` function to work? The csv does not contain any special characters except German ones. The filesize is 10MB.
2018/05/27
[ "https://Stackoverflow.com/questions/50552404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2252633/" ]
I ran into a similar problem. It turned out the CSV I had downloaded had no permissions at all. The error message from pandas did not point this out, making it hard to debug. Check that your file has read permissions.
You can try using `os.path.join()` to build your path:

```
import os

rpath = os.path.join('U:', 'folder', 'Input', 'test.csv')
df = pd.read_csv(rpath, sep=';')
```

To refer to the parent directory when building a path, you can use:

```
os.path.pardir
```
65,645,999
For some context, I am coding some geometric transformations into a python class, adding some matrix multiplication methods. There can be many 3D objects inside of a 3D "scene". In order to allow users to switch between applying transformations to the entire scene or to one object in the scene, I'm computing the geometric center of the object's bounding box (cuboid?) in order to allow that geometric center to function as the "origin" in the objects Euclidean space, and then to apply transformation matrix multiplications to that object alone. My specific question occurs when mapping points from scene space to local object space, I subtract the geometric center from the points. Then after the transformation, to convert back, I add the geometric center to the points. Is there a pythonic way to change my function from adding to subtracting via keyword argument? I don't like what I have now, it doesn't seem very pythonic. ```py def apply_centroid_transform(point, centroid, reverse=False): reverse_mult = 1 if reverse: reverse_mult = -1 new_point = [ point[0] - (reverse_mult * centroid["x"]), point[1] - (reverse_mult * centroid["y"]), point[2] - (reverse_mult * centroid["z"]), ] return new_point ``` I don't want to have the keyword argument be `multiply_factor=1` and then make the user know to type `-1` there, because that seems unintuitive. I hope my question makes sense. Thanks for any guidance you may have.
2021/01/09
[ "https://Stackoverflow.com/questions/65645999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10151432/" ]
One line: ```py def apply_centroid_transform(point, centroid, reverse=False): return [point[i] - [1, -1][reverse]*centroid[j] for i, j in enumerate('xyz')] ``` It is not very readable, but it is very concise :)
Well, if you like, you can do this (using the `operator` module so it works for ints as well as floats, and matching the original's directions: subtract normally, add when `reverse` is true):

```
import operator

def apply_centroid_transform(point, centroid, reverse=False):
    op = operator.add if reverse else operator.sub
    return [op(p, centroid[key]) for p, key in zip(point, "xyz")]
```
65,645,999
For some context, I am coding some geometric transformations into a python class, adding some matrix multiplication methods. There can be many 3D objects inside of a 3D "scene". In order to allow users to switch between applying transformations to the entire scene or to one object in the scene, I'm computing the geometric center of the object's bounding box (cuboid?) in order to allow that geometric center to function as the "origin" in the objects Euclidean space, and then to apply transformation matrix multiplications to that object alone. My specific question occurs when mapping points from scene space to local object space, I subtract the geometric center from the points. Then after the transformation, to convert back, I add the geometric center to the points. Is there a pythonic way to change my function from adding to subtracting via keyword argument? I don't like what I have now, it doesn't seem very pythonic. ```py def apply_centroid_transform(point, centroid, reverse=False): reverse_mult = 1 if reverse: reverse_mult = -1 new_point = [ point[0] - (reverse_mult * centroid["x"]), point[1] - (reverse_mult * centroid["y"]), point[2] - (reverse_mult * centroid["z"]), ] return new_point ``` I don't want to have the keyword argument be `multiply_factor=1` and then make the user know to type `-1` there, because that seems unintuitive. I hope my question makes sense. Thanks for any guidance you may have.
2021/01/09
[ "https://Stackoverflow.com/questions/65645999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10151432/" ]
How about

```
def apply_centroid_transform(point, centroid, reverse=False):
    if reverse:
        # work on a copy so the caller's dict is not flipped in place
        centroid = dict(centroid)
        centroid["x"] = centroid["x"] * -1
        centroid["y"] = centroid["y"] * -1
        centroid["z"] = centroid["z"] * -1

    new_point = [
        point[0] - centroid["x"],
        point[1] - centroid["y"],
        point[2] - centroid["z"],
    ]
    return new_point
```

assuming there are also other components in the dict centroid. In general I would advise creating a dataclass "Centroid" and a dataclass "Point" with x, y, z as attributes to make the code more readable. This would look like this:

```
from dataclasses import dataclass

@dataclass
class Centroid:
    x: float
    y: float
    z: float

    def reverse(self):
        return Centroid(x=-self.x, y=-self.y, z=-self.z)

@dataclass
class Point:
    x: float
    y: float
    z: float

def apply_centroid_transform(point: Point, centroid: Centroid, reverse: bool = False) -> Point:
    if reverse:
        centroid = centroid.reverse()
    return Point(point.x - centroid.x, point.y - centroid.y, point.z - centroid.z)
```
Well, if you like, you can do this (using the `operator` module so it works for ints as well as floats, and matching the original's directions: subtract normally, add when `reverse` is true):

```
import operator

def apply_centroid_transform(point, centroid, reverse=False):
    op = operator.add if reverse else operator.sub
    return [op(p, centroid[key]) for p, key in zip(point, "xyz")]
```
65,645,999
For some context, I am coding some geometric transformations into a python class, adding some matrix multiplication methods. There can be many 3D objects inside of a 3D "scene". In order to allow users to switch between applying transformations to the entire scene or to one object in the scene, I'm computing the geometric center of the object's bounding box (cuboid?) in order to allow that geometric center to function as the "origin" in the objects Euclidean space, and then to apply transformation matrix multiplications to that object alone. My specific question occurs when mapping points from scene space to local object space, I subtract the geometric center from the points. Then after the transformation, to convert back, I add the geometric center to the points. Is there a pythonic way to change my function from adding to subtracting via keyword argument? I don't like what I have now, it doesn't seem very pythonic. ```py def apply_centroid_transform(point, centroid, reverse=False): reverse_mult = 1 if reverse: reverse_mult = -1 new_point = [ point[0] - (reverse_mult * centroid["x"]), point[1] - (reverse_mult * centroid["y"]), point[2] - (reverse_mult * centroid["z"]), ] return new_point ``` I don't want to have the keyword argument be `multiply_factor=1` and then make the user know to type `-1` there, because that seems unintuitive. I hope my question makes sense. Thanks for any guidance you may have.
2021/01/09
[ "https://Stackoverflow.com/questions/65645999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10151432/" ]
You are applying a scalar to the transformation and it just so happens that -1 moves a local object back to scene space. So, make the function generic and add a doc string. ``` def apply_centroid_transform(point, centroid, scalar=1): """Move point from centroid with scale. By default, scalar=1 and moves from scene to local object space. With -1, the operation is reversed.""" new_point = [ point[0] - (scalar * centroid["x"]), point[1] - (scalar * centroid["y"]), point[2] - (scalar * centroid["z"]), ] return new_point ```
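A quick round-trip sanity check of this scalar approach (a self-contained sketch with made-up coordinates):

```python
def apply_centroid_transform(point, centroid, scalar=1):
    """Move point relative to centroid; scalar=-1 reverses the operation."""
    return [
        point[0] - scalar * centroid["x"],
        point[1] - scalar * centroid["y"],
        point[2] - scalar * centroid["z"],
    ]

centroid = {"x": 1.0, "y": 2.0, "z": 3.0}
local = apply_centroid_transform([5.0, 5.0, 5.0], centroid)    # into local space
scene = apply_centroid_transform(local, centroid, scalar=-1)   # back to scene space
print(local)  # [4.0, 3.0, 2.0]
print(scene)  # [5.0, 5.0, 5.0]
```

Applying the transform with `scalar=1` and then `scalar=-1` returns the original point, which is the invariant the caller cares about.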
Well, if you like, you can do this (using the `operator` module so it works for ints as well as floats, and matching the original's directions: subtract normally, add when `reverse` is true):

```
import operator

def apply_centroid_transform(point, centroid, reverse=False):
    op = operator.add if reverse else operator.sub
    return [op(p, centroid[key]) for p, key in zip(point, "xyz")]
```
26,199,343
I have a bunch of `Album` objects in a `list` (code for objects posted below). 5570 to be exact. However, when looking at unique objects, I should have 385. Because of the way that the objects are created (I don't know if I can explain it properly), I thought it would be best to add all the objects into the list, and then delete the ones that are similar afterwards. Certain objects have the same strings for each argument(`artist`, `title`, `tracks`) and I would like to get rid of them. However, I know I cannot simply remove the duplicates, since they are stored in separate memory locations, and therefore aren't exactly identical. Can anyone help me with removing the duplicates? As you can probably tell, I am quite new to python. Thanks in advance! ``` class Album(object) : def __init__(self, artist, title, tracks = None) : tracks = [] self.artist = artist self.title = title self.tracks = tracks def add_track(self, track) : self.track = track (self.tracks).append(track) print "The track %s was added." % (track) def __str__(self) : return "Artist: %s, Album: %s [" % (self.artist, self.title) + str(len(self.tracks)) + " Tracks]" ```
2014/10/05
[ "https://Stackoverflow.com/questions/26199343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027606/" ]
When you do MVVM and want to use a button, you should use a DelegateCommand or RelayCommand. If you implement the ICommand properly (CanExecute!), the Command binding on the button will handle IsEnabled for you.

```
<Button Command="{Binding MyRemoveCommand}"></Button>
```

cs.

```
public ICommand MyRemoveCommand { get; set; }

this.MyRemoveCommand = new DelegateCommand(this.RemoveCommandExecute, this.CanRemoveCommandExecute);

private bool CanRemoveCommandExecute()
{
    return this.CanRemove;
}

private void RemoveCommandExecute()
{
    if (!this.CanRemoveCommandExecute())
        return;

    //execution logic here
}
```
As far as i can see in your MVVM there is a bool "CanRemove". You can bind this to your buttons visibility with the already [BooleanToVisibilityConverter](http://msdn.microsoft.com/en-us/library/system.windows.controls.booleantovisibilityconverter%28v=vs.110%29.aspx) which is provided by .NET
26,199,343
I have a bunch of `Album` objects in a `list` (code for objects posted below). 5570 to be exact. However, when looking at unique objects, I should have 385. Because of the way that the objects are created (I don't know if I can explain it properly), I thought it would be best to add all the objects into the list, and then delete the ones that are similar afterwards. Certain objects have the same strings for each argument(`artist`, `title`, `tracks`) and I would like to get rid of them. However, I know I cannot simply remove the duplicates, since they are stored in separate memory locations, and therefore aren't exactly identical. Can anyone help me with removing the duplicates? As you can probably tell, I am quite new to python. Thanks in advance! ``` class Album(object) : def __init__(self, artist, title, tracks = None) : tracks = [] self.artist = artist self.title = title self.tracks = tracks def add_track(self, track) : self.track = track (self.tracks).append(track) print "The track %s was added." % (track) def __str__(self) : return "Artist: %s, Album: %s [" % (self.artist, self.title) + str(len(self.tracks)) + " Tracks]" ```
2014/10/05
[ "https://Stackoverflow.com/questions/26199343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027606/" ]
You don't need converter at all when you can access ViewModel directly using **`RelativeSource`** markup extension. This should work: ``` <Button IsEnabled="{Binding DataContext.CanRemove, RelativeSource={RelativeSource FindAncestor, AncestorType=ListBox}}"/> ``` Since DataContext of ListBox points to viewModel instance, above posted code will work.
As far as i can see in your MVVM there is a bool "CanRemove". You can bind this to your buttons visibility with the already [BooleanToVisibilityConverter](http://msdn.microsoft.com/en-us/library/system.windows.controls.booleantovisibilityconverter%28v=vs.110%29.aspx) which is provided by .NET
26,199,343
I have a bunch of `Album` objects in a `list` (code for objects posted below). 5570 to be exact. However, when looking at unique objects, I should have 385. Because of the way that the objects are created (I don't know if I can explain it properly), I thought it would be best to add all the objects into the list, and then delete the ones that are similar afterwards. Certain objects have the same strings for each argument(`artist`, `title`, `tracks`) and I would like to get rid of them. However, I know I cannot simply remove the duplicates, since they are stored in separate memory locations, and therefore aren't exactly identical. Can anyone help me with removing the duplicates? As you can probably tell, I am quite new to python. Thanks in advance! ``` class Album(object) : def __init__(self, artist, title, tracks = None) : tracks = [] self.artist = artist self.title = title self.tracks = tracks def add_track(self, track) : self.track = track (self.tracks).append(track) print "The track %s was added." % (track) def __str__(self) : return "Artist: %s, Album: %s [" % (self.artist, self.title) + str(len(self.tracks)) + " Tracks]" ```
2014/10/05
[ "https://Stackoverflow.com/questions/26199343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027606/" ]
You don't need converter at all when you can access ViewModel directly using **`RelativeSource`** markup extension. This should work: ``` <Button IsEnabled="{Binding DataContext.CanRemove, RelativeSource={RelativeSource FindAncestor, AncestorType=ListBox}}"/> ``` Since DataContext of ListBox points to viewModel instance, above posted code will work.
When you do MVVM and want to use a button, you should use a DelegateCommand or RelayCommand. If you implement the ICommand properly (CanExecute!), the Command binding on the button will handle IsEnabled for you.

```
<Button Command="{Binding MyRemoveCommand}"></Button>
```

cs.

```
public ICommand MyRemoveCommand { get; set; }

this.MyRemoveCommand = new DelegateCommand(this.RemoveCommandExecute, this.CanRemoveCommandExecute);

private bool CanRemoveCommandExecute()
{
    return this.CanRemove;
}

private void RemoveCommandExecute()
{
    if (!this.CanRemoveCommandExecute())
        return;

    //execution logic here
}
```
3,904,033
Is there an easy way to get a python code segment to run every 5 minutes? I know I could do it using time.sleep() but was there any other way? For example I want to run this every 5 minutes: ``` x = 0 def run_5(): print "5 minutes later" global x += 5 print x, "minutes since start" ``` That's only a fake example but the idea is there. Any ideas? I am on linux and would happily use cron but was just wondering if there was a python alternative?
2010/10/11
[ "https://Stackoverflow.com/questions/3904033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/472006/" ]
you can do it with the threading module

```
>>> import threading
>>> END = False
>>> def run(x=0):
...     x += 5
...     print x
...     if not END:
...         threading.Timer(1.0, run, [x]).start()
...
>>> threading.Timer(1.0, run).start()
>>> 5
10
15
20
25
30
35
40
```

Then when you want it to stop, set `END = True`.
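The same idea in a non-interactive form that stops itself after a few runs (a sketch; the 0.01-second interval just keeps the demo fast, use 300 seconds for five minutes):

```python
import threading

results = []
done = threading.Event()

def run(x=0):
    x += 5
    results.append(x)
    if x < 25:
        # schedule the next run; each Timer fires once, so run re-arms itself
        threading.Timer(0.01, run, [x]).start()
    else:
        done.set()

threading.Timer(0.01, run).start()
done.wait(timeout=10)  # block until the timer chain finishes
print(results)  # [5, 10, 15, 20, 25]
```

Each `Timer` fires exactly once, so the repeating behaviour comes from `run` scheduling its own successor until the stop condition is met.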
You might want to have a look at `cron` if you are running a \*nix type OS. You could easily have it run your program every 5 minutes.

<http://www.unixgeeks.org/security/newbie/unix/cron-1.html>

<https://help.ubuntu.com/community/CronHowto>
3,904,033
Is there an easy way to get a python code segment to run every 5 minutes? I know I could do it using time.sleep() but was there any other way? For example I want to run this every 5 minutes: ``` x = 0 def run_5(): print "5 minutes later" global x += 5 print x, "minutes since start" ``` That's only a fake example but the idea is there. Any ideas? I am on linux and would happily use cron but was just wondering if there was a python alternative?
2010/10/11
[ "https://Stackoverflow.com/questions/3904033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/472006/" ]
you can do it with the threading module

```
>>> import threading
>>> END = False
>>> def run(x=0):
...     x += 5
...     print x
...     if not END:
...         threading.Timer(1.0, run, [x]).start()
...
>>> threading.Timer(1.0, run).start()
>>> 5
10
15
20
25
30
35
40
```

Then when you want it to stop, set `END = True`.
If you are on a windows platform, you could use a scheduled task. Otherwise, use cron, wonderful, wonderful cron.
3,904,033
Is there an easy way to get a python code segment to run every 5 minutes? I know I could do it using time.sleep() but was there any other way? For example I want to run this every 5 minutes: ``` x = 0 def run_5(): print "5 minutes later" global x += 5 print x, "minutes since start" ``` That's only a fake example but the idea is there. Any ideas? I am on linux and would happily use cron but was just wondering if there was a python alternative?
2010/10/11
[ "https://Stackoverflow.com/questions/3904033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/472006/" ]
You might want to have a look at `cron` if you are running a \*nix type OS. You could easily have it run your program every 5 minutes.

<http://www.unixgeeks.org/security/newbie/unix/cron-1.html>

<https://help.ubuntu.com/community/CronHowto>
If you are on a windows platform, you could use a scheduled task. Otherwise, use cron, wonderful, wonderful cron.
3,113,002
After doing web development (php/js) for the last few years i thought it is about time to also have a look at something different. I thought it may be always good to have look of different areas in programming to understand some different approaches better, so i now want to have look at GUI development. As programming language i did choose Python where i now slowly get the basics and i also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python) which already contains good links and book proposals. So i am now mainly looking for some infos about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for advises.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
The first thing to realize is that you'll get more mileage out of understanding Qt than understanding PyQt. Most of the good documentation discusses Qt, not PyQt, so getting conversant with them (and how to convert that code to PyQt code) is a lifesaver. Note, I don't actually recommend *programming* Qt in C++; Python is a fantastic language for Qt programming, since it takes care of a lot of gruntwork, leaving you to actually code application logic. The best book I've found for working with PyQt is [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html). It's got a nice small Python tutorial in the front, then takes you through the basics of building a Qt application. By the end of the book you should have a good idea of how to build an application, and some basic idea of where to start for more advanced topics. The other critical reference is the [bindings documentation for PyQt](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/pyqt4ref.html). Pay particular attention to the "New-style Signal and Slot Support"; it's a *huge* improvement over the old style. Once you really understand that document (and it's pretty short) you'll be able to navigate the Qt docs pretty easily.
I had this bookmark saved: <http://www.harshj.com/2009/04/26/the-pyqt-intro/>
3,113,002
After doing web development (php/js) for the last few years i thought it is about time to also have a look at something different. I thought it may be always good to have look of different areas in programming to understand some different approaches better, so i now want to have look at GUI development. As programming language i did choose Python where i now slowly get the basics and i also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python) which already contains good links and book proposals. So i am now mainly looking for some infos about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for advises.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
I had this bookmark saved: <http://www.harshj.com/2009/04/26/the-pyqt-intro/>
My advice would be: have some particular goal in mind, some app that you, or even better someone else, would use in a real-world scenario. I started with the same book Chris B mentioned, i.e. [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html), and I found it useful; it touched many of the topics you would need in most GUI applications. Additionally, after some time and some confidence gained, you will want to have the [PyQt Classes](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/classes.html) reference handy. Do not avoid C++ examples that explain a problem you'd like to solve; rewriting them in Python is not that hard (depending on the problem and scope, of course).
3,113,002
After doing web development (php/js) for the last few years i thought it is about time to also have a look at something different. I thought it may be always good to have look of different areas in programming to understand some different approaches better, so i now want to have look at GUI development. As programming language i did choose Python where i now slowly get the basics and i also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python) which already contains good links and book proposals. So i am now mainly looking for some infos about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for advises.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
The first thing to realize is that you'll get more mileage out of understanding Qt than understanding PyQt. Most of the good documentation discusses Qt, not PyQt, so getting conversant with them (and how to convert that code to PyQt code) is a lifesaver. Note, I don't actually recommend *programming* Qt in C++; Python is a fantastic language for Qt programming, since it takes care of a lot of gruntwork, leaving you to actually code application logic. The best book I've found for working with PyQt is [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html). It's got a nice small Python tutorial in the front, then takes you through the basics of building a Qt application. By the end of the book you should have a good idea of how to build an application, and some basic idea of where to start for more advanced topics. The other critical reference is the [bindings documentation for PyQt](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/pyqt4ref.html). Pay particular attention to the "New-style Signal and Slot Support"; it's a *huge* improvement over the old style. Once you really understand that document (and it's pretty short) you'll be able to navigate the Qt docs pretty easily.
My advice would be: have some particular goal in mind, some app that you, or even better someone else, would use in a real-world scenario. I started with the same book Chris B mentioned, i.e. [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html), and I found it useful; it touched many of the topics you would need in most GUI applications. Additionally, after some time and some confidence gained, you will want to have the [PyQt Classes](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/classes.html) reference handy. Do not avoid C++ examples that explain a problem you'd like to solve; rewriting them in Python is not that hard (depending on the problem and scope, of course).
3,113,002
After doing web development (php/js) for the last few years i thought it is about time to also have a look at something different. I thought it may be always good to have look of different areas in programming to understand some different approaches better, so i now want to have look at GUI development. As programming language i did choose Python where i now slowly get the basics and i also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python) which already contains good links and book proposals. So i am now mainly looking for some infos about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for advises.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
The first thing to realize is that you'll get more mileage out of understanding Qt than understanding PyQt. Most of the good documentation discusses Qt, not PyQt, so getting conversant with them (and how to convert that code to PyQt code) is a lifesaver. Note, I don't actually recommend *programming* Qt in C++; Python is a fantastic language for Qt programming, since it takes care of a lot of gruntwork, leaving you to actually code application logic. The best book I've found for working with PyQt is [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html). It's got a nice small Python tutorial in the front, then takes you through the basics of building a Qt application. By the end of the book you should have a good idea of how to build an application, and some basic idea of where to start for more advanced topics. The other critical reference is the [bindings documentation for PyQt](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/pyqt4ref.html). Pay particular attention to the "New-style Signal and Slot Support"; it's a *huge* improvement over the old style. Once you really understand that document (and it's pretty short) you'll be able to navigate the Qt docs pretty easily.
There is a [step-by-step guide](http://popdevelop.com/2010/04/setting-up-ide-and-creating-a-cross-platform-qt-python-gui-application/) at popdevelop.com on how to set up Eclipse with PyQT.
3,113,002
After doing web development (php/js) for the last few years i thought it is about time to also have a look at something different. I thought it may be always good to have look of different areas in programming to understand some different approaches better, so i now want to have look at GUI development. As programming language i did choose Python where i now slowly get the basics and i also found this question: [How to learn python](https://stackoverflow.com/questions/17988/how-to-learn-python) which already contains good links and book proposals. So i am now mainly looking for some infos about PyQt: * Tutorials * Books * General tips for GUI development I already looked at some tutorials, but didn't find any really good ones. Most were pretty short and didn't really explain anything. Thanks in advance for advises.
2010/06/24
[ "https://Stackoverflow.com/questions/3113002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/276382/" ]
There is a [step-by-step guide](http://popdevelop.com/2010/04/setting-up-ide-and-creating-a-cross-platform-qt-python-gui-application/) at popdevelop.com on how to set up Eclipse with PyQT.
My advice would be: have some particular goal in mind, some app that you, or even better someone else, would use in a real-world scenario. I started with the same book Chris B mentioned, i.e. [Rapid GUI Programming with Python and Qt](http://www.qtrac.eu/pyqtbook.html), and I found it useful; it touched many of the topics you would need in most GUI applications. Additionally, after some time and some confidence gained, you will want to have the [PyQt Classes](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/classes.html) reference handy. Do not avoid C++ examples that explain a problem you'd like to solve; rewriting them in Python is not that hard (depending on the problem and scope, of course).
18,905,026
When trying to unit test a method that returns a tuple and I am trying to see if the code accesses the correct tuple index, python tries to evaluate the expected call and turns it into a string. `call().methodA().__getitem__(0)` ends up getting converted into `'().methodA'` in my `expected_calls` list for the assertion.

The example code provided results in the output and traceback:

```
expected_calls=[call().methodA(), '().methodA']
  result_calls=[call().methodA(), call().methodA().__getitem__(0)]
======================================================================
ERROR: test_methodB (badMockCalls.Test_UsingToBeMocked_methods)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\dev\workspace\TestCode\src\badMockCalls.py", line 43, in test_methodB
    self.assertListEqual(expected_calls, self.result_calls)
  File "C:\Python33\lib\unittest\case.py", line 844, in assertListEqual
    self.assertSequenceEqual(list1, list2, msg, seq_type=list)
  File "C:\Python33\lib\unittest\case.py", line 764, in assertSequenceEqual
    if seq1 == seq2:
  File "C:\Python33\lib\unittest\mock.py", line 1927, in __eq__
    first, second = other
ValueError: too many values to unpack (expected 2)

----------------------------------------------------------------------
Ran 1 test in 0.006s

FAILED (errors=1)
```

How do I go about asserting that methodB is calling self.tbm.methodA()[0] properly?
Example code (Python 3.3.2): ``` import unittest from unittest.mock import call, patch import logging log = logging.getLogger(__name__) log.setLevel(logging.DEBUG) _ch = logging.StreamHandler() _ch.setLevel(logging.DEBUG) log.addHandler(_ch) class ToBeMocked(): # external resource that can't be changed def methodA(self): return (1,) class UsingToBeMocked(): # project code def __init__(self): self.tbm = ToBeMocked() def methodB(self): value = self.tbm.methodA()[0] return value class Test_UsingToBeMocked_methods(unittest.TestCase): def setUp(self): self.patcher = patch(__name__ + '.ToBeMocked') self.mock_ToBeMocked = self.patcher.start() self.utbm = UsingToBeMocked() # clear out the mock_calls list from the constructor calls self.mock_ToBeMocked.mock_calls = [] # set result to always point to the mock_calls that we are testing self.result_calls = self.mock_ToBeMocked.mock_calls def tearDown(self): self.patcher.stop() def test_methodB(self): self.utbm.methodB() # make sure the correct sequence of calls is made with correct parameters expected_calls = [call().methodA(), call().methodA().__getitem__(0)] log.debug('expected_calls=' + str(expected_calls)) log.debug(' result_calls=' + str(self.result_calls)) self.assertListEqual(expected_calls, self.result_calls) if __name__ == "__main__": unittest.main() ```
2013/09/19
[ "https://Stackoverflow.com/questions/18905026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/373628/" ]
To test for `mock_object.account['xxx1'].patch(body={'status': 'active'})` I had to use the test:

```
mock_object.account.__getitem__.assert_has_calls([
    call('xxx1'),
    call().patch(body={'status': 'active'}),
])
```

I can't explain why this works; it looks like weird behaviour, possibly a bug in mock, but I consistently get these results and it works.
I've just stumbled upon the same problem. I've used the solution/work-around from here: <http://www.voidspace.org.uk/python/mock/examples.html#mocking-a-dictionary-with-magicmock> namely: ``` >> mock.__getitem__.call_args_list [call('a'), call('c'), call('d'), call('b'), call('d')] ``` You can skip the magic function name misinterpretation and check against its arguments.
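A self-contained illustration of that approach, runnable with only the standard library's `unittest.mock` (the keys here are made up):

```python
from unittest.mock import MagicMock, call

mock = MagicMock()

# Pretend this is the code under test indexing the mock like a dict
mock['a']
mock['c']
mock['a']

# The lookups are recorded on __getitem__, where they can be checked directly
print(mock.__getitem__.call_args_list)  # [call('a'), call('c'), call('a')]
```

Asserting against `mock.__getitem__.call_args_list` (or using `assert_has_calls` on `mock.__getitem__`) sidesteps the string conversion that breaks the `mock_calls` comparison.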
14,366,668
I've been working on learning Python and somehow came up with the following code:

```
for item in list:
    while list.count(item)!=1:
        list.remove(item)
```

I was wondering if this kind of coding can be done in C++ (using the list length for the for loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
In C++, you can compose something like this from various algorithms of the standard library; check out `remove()` and `find()`. However, the way your algorithm is written, it has O(n^2) complexity. Sorting the list and then scanning over it to put one of each value into a new list has O(n log n) complexity, but ruins the order. In general, both for Python and C++, it is often better to copy or move elements to a temporary container and then swap with the original than to modify the original in place. This is easier to get right, since you don't step on your own feet (see delnan's comment), and it is faster because it avoids repeated reallocation and copying of objects.
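The copy-then-swap advice above applies just as well to the Python side of the question; a minimal sketch (mine, not the answerer's) that removes duplicates in O(n) by building a temporary list — note it keeps the *first* occurrence of each value, a guarantee the original loop does not make:

```python
def keep_unique(items):
    # Build the result in a temporary container instead of mutating
    # `items` while iterating over it (the pitfall in the question).
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

data = [1, 2, 2, 3, 1, 4]
data[:] = keep_unique(data)  # "swap" the result back into the original
assert data == [1, 2, 3, 4]
```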
14,366,668
I've been working on learning Python and somehow came up with the following code: ``` for item in list: while list.count(item)!=1: list.remove(item) ``` I was wondering if this kind of coding can be done in C++ (using the list's length for the for loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
Here's how I'd do it. ``` //If we will not delete an element of the list for (std::list<MyType>::iterator it = MyList.begin(); it != MyList.end();++it) { //my operation here } //If we will delete an element of the list for (std::list<MyType>::iterator it = MyList.begin(); it != MyList.end();) { std::list<MyType>::iterator itt = it; ++itt; MyList.erase(it); it = itt; } ``` You can use the size of the list, but you cannot index a `std::list` with `MyList[i]` the way you would in Python, because random access is not provided. Certain features of std:: data classes are enabled or disabled as a design decision. Sure, you can write your own function MyList[int i], but it will lead to a large speed hit due to the nature of lists.
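The same "save the next position before you erase" concern exists in Python. One safe stdlib idiom (a sketch for comparison, not part of the answer above) is to walk the list backwards by index, so a deletion never shifts the positions still to be visited:

```python
items = [0, 1, 2, 3, 4, 5]

# Deleting during forward iteration would skip elements; walking the
# indices backwards means removing index i never shifts indices < i.
for i in range(len(items) - 1, -1, -1):
    if items[i] % 2 == 0:  # example condition: drop even values
        del items[i]

assert items == [1, 3, 5]
```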
14,366,668
I've been working on learning Python and somehow came up with the following code: ``` for item in list: while list.count(item)!=1: list.remove(item) ``` I was wondering if this kind of coding can be done in C++ (using the list's length for the for loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
In C++ you can, under some conditions, remove elements from a container while iterating over it. This depends on the container and on the operation you want to do. Currently there are different interpretations of your code snippet in the different answers. My interpretation is that you want to delete all the elements which exist more than once in the list. Here is the solution in C++: it first counts the elements in another container (std::map) and removes the appropriate elements from the list afterwards. ``` #include <list> #include <map> #include <algorithm> #include <iostream> int main() { std::list<int> li { 0, 1, 2, 3, 4, 5, 1, 2, 3, 2, 2 }; // Create count map: element -> count std::map<int, int> cm; std::for_each( li.begin(), li.end(), [&cm](int i) { ++cm[i]; } ); // Remove all elements from list with count > 1 std::for_each( cm.begin(), cm.end(), [&li](std::pair<const int, int> const p) { if( p.second > 1) { li.remove( p.first ); } } ); // Output all elements from remaining list std::for_each( li.begin(), li.end(), [](int i) { std::cout << i << std::endl; } ); return 0; } ```
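The two-pass structure of this answer (count first, then remove) maps directly onto Python's `collections.Counter`; a sketch that reproduces the C++ program's result on the same input:

```python
from collections import Counter

li = [0, 1, 2, 3, 4, 5, 1, 2, 3, 2, 2]

# First pass: count every element (the std::map in the C++ version).
counts = Counter(li)

# Second pass: keep only the elements that occurred exactly once.
li = [x for x in li if counts[x] == 1]

assert li == [0, 4, 5]  # same output as the C++ program above
```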
14,366,668
I've been working on learning Python and somehow came up with the following code: ``` for item in list: while list.count(item)!=1: list.remove(item) ``` I was wondering if this kind of coding can be done in C++ (using the list's length for the for loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
I don't know Python but someone said in a comment that a list is equivalent to a C++ vector and it is not sorted, so here goes.... ``` std::vector<int> v{1, 2, 2, 2, 3, 3, 2, 2, 1}; v.erase(std::unique(v.begin(), v.end()), v.end()); ``` `v` contains `{1, 2, 3, 2, 1}` after this code. If the goal is to remove all duplicates (not just consecutive duplicates) you'll have to sort the vector first: `std::sort(v.begin(), v.end());`
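`std::unique` only collapses *adjacent* duplicates, which is why the answer notes you must sort first for a full dedup. The closest stdlib Python counterpart (a comparison sketch, not from the original answer) is `itertools.groupby`:

```python
from itertools import groupby

v = [1, 2, 2, 2, 3, 3, 2, 2, 1]

# groupby collapses runs of equal, adjacent values -- the same
# contract as std::unique in the C++ snippet above.
v = [key for key, _group in groupby(v)]

assert v == [1, 2, 3, 2, 1]  # matches the answer's stated result
```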
14,366,668
I've been working on learning Python and somehow came up with the following code: ``` for item in list: while list.count(item)!=1: list.remove(item) ``` I was wondering if this kind of coding can be done in C++ (using the list's length for the for loop while decreasing its size). If not, can anyone tell me why? Thanks!
2013/01/16
[ "https://Stackoverflow.com/questions/14366668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1948847/" ]
I am not a big Python programmer, but it seems like the above code removes duplicates from a list. Here is a C++ equivalent: ``` list.sort(); list.unique(); ``` As for modifying the list while iterating over it, you can do that as well. Here is an example: ``` for (auto it = list.begin(), eit = list.end(); it != eit; ) { if (std::count(it, eit, *it) > 1) it = list.erase(it); else ++it; } ``` Hope it helps.
`std::vector` is the container in C++ that is most similar to Python's `list`, and here's the correct way to modify a vector while iterating it: ``` template <typename T> void dedupe(std::vector<T> &vec) { for (typename std::vector<T>::iterator it = vec.begin(); it != vec.end(); ) { if (std::count(vec.begin(), vec.end(), *it) != 1) { it = vec.erase(it); } else { ++it; } } } ``` It's not necessarily the most efficient way to dedupe, but it works (note these snippets need `#include <vector>` and `#include <algorithm>`). > > Using list length for the for loop while decreasing its size > > > If you insist on using the length rather than the `end()`, then you can use an index instead of an iterator: ``` template <typename T> void dedupe(std::vector<T> &vec) { for (typename std::vector<T>::size_type pos = 0; pos != vec.size(); ) { if (std::count(vec.begin(), vec.end(), vec[pos]) != 1) { vec.erase(vec.begin() + pos); } else { ++pos; } } } ``` I'm assuming that the intention of your Python code is to remove all duplicates, by the way, and that the fact it doesn't is a bug. For example input `[2,2,1,3,3,1,2,3]`, output `[1,1,2,3]`. If what you said is what you meant, then a direct translation of your code to C++ is: ``` template <typename T> void dedupe(std::vector<T> &vec) { for (typename std::vector<T>::size_type pos = 0; pos < vec.size(); ++pos) { T item = vec[pos]; while (std::count(vec.begin(), vec.end(), item) != 1) { vec.erase(std::find(vec.begin(), vec.end(), item)); } } } ```
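The answer's claimed behaviour for the direct translation can be checked by re-expressing that last C++ snippet in Python (a verification sketch, not code from the answer); it confirms that duplicates can survive, e.g. the two 1s below:

```python
def dedupe_like_question(vec):
    # Mirror of the last C++ snippet: for each position, erase the
    # first occurrence of the item until only one copy remains.
    pos = 0
    while pos < len(vec):
        item = vec[pos]
        while vec.count(item) != 1:
            vec.remove(item)  # list.remove drops the first occurrence
        pos += 1
    return vec

assert dedupe_like_question([2, 2, 1, 3, 3, 1, 2, 3]) == [1, 1, 2, 3]
```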
49,889,153
I am running an RNN on a signal in fixed-size segments. The following code allows me to preserve the final state of the previous batch to initialize the initial state of the next batch. ``` rnn_outputs, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=init_state) ``` This works when the batches are non-overlapping. For example, my first batch processes samples 0:124 and `final_state` is the state after this processing. Then, the next batch processes samples 124:256, setting `init_state` to `final_state`. My question is how to retrieve an intermediary state when the batches are overlapping. First, I process samples 0:124, then 10:134, 20:144, so the hop size is 10. I would like to retrieve not the `final_state` but the state after processing 10 samples. Is it possible in TF to keep the intermediary state? The [documentation](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/contrib/rnn/static_rnn) shows that the return value consists only of the final state. The image shows the issue I am facing due to state discontinuity. In my program, the RNN segment length is 215 and the hop length is 20. [![Sample results](https://i.stack.imgur.com/G9owu.png)](https://i.stack.imgur.com/G9owu.png) Update: the easiest turned out to be what [David Parks](https://stackoverflow.com/a/49890068/4008884) described: ``` rnn_outputs_one, mid_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs_one, initial_state=rnn_tuple_state) rnn_outputs_two, final_state = tf.contrib.rnn.static_rnn(cell, rnn_inputs_two, initial_state=mid_state) rnn_outputs = rnn_outputs_one + rnn_outputs_two ``` and ``` prev_state = sess.run(mid_state) ``` Now, after just a few iterations, the results look much better. [![enter image description here](https://i.stack.imgur.com/4EU6N.png)](https://i.stack.imgur.com/4EU6N.png)
2018/04/17
[ "https://Stackoverflow.com/questions/49889153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4008884/" ]
In TensorFlow, the only things that are kept after returning from a call to `sess.run` are variables. You should create a variable for the state, then use `tf.assign` to assign the result from your RNN cell to that variable. You can then use that Variable in the same way as any other tensor. If you need to initialize the variable to something other than `0`, you can call `sess.run` once with a placeholder and `tf.assign` specifically to set up the variable. --- Added detail: If you need an intermediate state, let's say you ran for timesteps 0:124 and you want step 10, you should split that up into 2 RNN cells, one that processes the first 10 timesteps and the second that continues processing the next 114 timesteps. This shouldn't affect training and backpropagation as long as you use the same cell (LSTM or other cell) in both `static_rnn` functions. The cell is where your weights are defined, and that has to remain constant. Your gradient will flow backwards through the second cell and then finally the first appropriately.
So, I came here looking for an answer earlier, but I ended up creating one. Similar to the above posters' point about making it assignable... When you build your graph, make a list of per-step states like.. ``` my_states = [None] * int(sequence_length + 1) my_states[0] = cell.zero_state(batch_size, tf.float32) for step in steps: cell_out, my_states[step + 1] = cell(inputs[step], my_states[step]) ``` Then outside of your graph after the sess.run() you say ``` new_states = my_states[1:] model.my_states = new_states ``` This situation is for stepping 1 timestep at a time, but it could easily be made for steps of 10. Just slice the list of states after sess.run() and make those the initial states. Good luck!
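The splitting idea in both answers can be demonstrated without TensorFlow. With a toy stand-in "cell" (assumed here purely for illustration — a running sum rather than an LSTM), running two shorter segments with the state threaded through is equivalent to one long run, while also exposing the intermediate state after the hop:

```python
def run_cell(state, inputs):
    # Toy stand-in for unrolling an RNN cell: the state is just a
    # running sum of the inputs, and each step emits the new state.
    outputs = []
    for x in inputs:
        state = state + x
        outputs.append(state)
    return outputs, state

seq = list(range(20))

# One run over the whole segment only exposes the final state ...
full_out, final_state = run_cell(0, seq)

# ... but splitting it in two, as suggested above, also exposes the
# state after the first `hop` steps.
hop = 10
out_one, mid_state = run_cell(0, seq[:hop])
out_two, final_two = run_cell(mid_state, seq[hop:])

assert out_one + out_two == full_out  # identical overall results
assert final_two == final_state
assert mid_state == sum(range(hop))   # state after 10 steps
```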
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords with `#` keywords. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is in GB size scraped tweets with several other columns. But I am interested in only two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userid? Output Update: When used only this the output is like above `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]` The output in this case has empty `[]` and the userID at still listed and for those which has keywords has an array of keywords and not in list form. 
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but users with no keywords are still considered and show `NaN`. If possible, such userIDs should be ignored and not shown in the output at all. In later stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` will also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
If you are not too tied to using `extractall`, you can try the following to get your final output: ``` from io import StringIO import pandas as pd import re data_text = """userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour """ data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all 4 4 NaN ``` You drop the rows where the tweetText column returns `NaN`s by doing the following: ``` data = data[~data['tweetText'].isnull()] ``` This should return: ``` userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all ``` I hope this helps.
The `extractall` function requires a regex pattern **with capturing groups** as the first argument, for which you have provided `#`. A possible argument could be `(#\S+)`. The parentheses indicate a capture group, in other words what the `extractall` function needs to extract from each string. Example: ``` data="""01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one """ import pandas as pd from io import StringIO df = pd.read_csv(StringIO(data), header=None, names=['col1', 'col2'], index_col=0) df['col2'].str.extractall('(#\S+)') ``` The error `ValueError: pattern contains no capture groups` doesn't appear anymore with the above code (meaning the issue in the question is solved), but this hits a bug in the current version of pandas (I'm using `'0.18.1'`). The error returned is: ``` AssertionError: 1 columns passed, passed data had 6 columns ``` The issue is described [here](https://github.com/pydata/pandas/issues/13382). If you would try `df['col2'].str.extractall('#(\S)')` (which will give you the first letter of every hashtag), you'll see that the `extractall` function works as long as the captured group only contains a single character (which matches the issue description). As the issue is closed, it should be fixed in an upcoming pandas release.
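The capture-group requirement this answer describes can be seen with the stdlib `re` module alone (a sketch of the mechanism, using a made-up tweet string):

```python
import re

tweet = "home #sweet home #happy"

# re.findall happily returns whole-match strings without a group ...
assert re.findall(r'#\S+', tweet) == ['#sweet', '#happy']

# ... but pandas' str.extractall builds its result columns from
# capture groups, so the pattern must contain parentheses:
pat = re.compile(r'(#\S+)')
assert pat.groups == 1  # one capture group -> one result column

# A group-less pattern reports zero groups, which is exactly the
# condition behind "pattern contains no capture groups".
assert re.compile(r'#\S+').groups == 0
```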
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords with `#` keywords. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is in GB size scraped tweets with several other columns. But I am interested in only two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userid? Output Update: When used only this the output is like above `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]` The output in this case has empty `[]` and the userID at still listed and for those which has keywords has an array of keywords and not in list form. 
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but users with no keywords are still considered and show `NaN`. If possible, such userIDs should be ignored and not shown in the output at all. In later stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` will also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
The `extractall` function requires a regex pattern **with capturing groups** as the first argument, for which you have provided `#`. A possible argument could be `(#\S+)`. The parentheses indicate a capture group, in other words what the `extractall` function needs to extract from each string. Example: ``` data="""01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one """ import pandas as pd from io import StringIO df = pd.read_csv(StringIO(data), header=None, names=['col1', 'col2'], index_col=0) df['col2'].str.extractall('(#\S+)') ``` The error `ValueError: pattern contains no capture groups` doesn't appear anymore with the above code (meaning the issue in the question is solved), but this hits a bug in the current version of pandas (I'm using `'0.18.1'`). The error returned is: ``` AssertionError: 1 columns passed, passed data had 6 columns ``` The issue is described [here](https://github.com/pydata/pandas/issues/13382). If you would try `df['col2'].str.extractall('#(\S)')` (which will give you the first letter of every hashtag), you'll see that the `extractall` function works as long as the captured group only contains a single character (which matches the issue description). As the issue is closed, it should be fixed in an upcoming pandas release.
Try this: Since it filters for '#', your NAN should not exist. ``` data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) userID tweetText 0 1 #sweet 1 1 #happy 2 1 #life 3 2 #world 4 3 #all ``` @Abdou method: ``` def try1(): data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) data = data[~data['tweetText'].isnull()] %timeit try1() 100 loops, best of 3: 7.71 ms per loop ``` @Merlin method ``` def try2(): data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {'level_0':'userID',0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) %timeit try2() 100 loops, best of 3: 5.36 ms per loop ```
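The split-then-filter core of this answer can be sketched in plain Python (illustrative rows, no pandas); note that users without hashtags simply contribute no rows, which also addresses the `NaN` concern from the question's update:

```python
rows = [
    ("01", "home #sweet home"),
    ("01", "#happy #life"),
    ("02", "#world peace"),
    ("04", "world tour"),
]

# Split each tweet on whitespace and keep only '#'-prefixed tokens,
# pairing each kept token with its userID (one output row per token).
pairs = [
    (uid, word)
    for uid, text in rows
    for word in text.split()
    if word.startswith("#")
]

assert pairs == [("01", "#sweet"), ("01", "#happy"),
                 ("01", "#life"), ("02", "#world")]
```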
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords with `#` keywords. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is in GB size scraped tweets with several other columns. But I am interested in only two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userid? Output Update: When used only this the output is like above `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]` The output in this case has empty `[]` and the userID at still listed and for those which has keywords has an array of keywords and not in list form. 
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but users with no keywords are still considered and show `NaN`. If possible, such userIDs should be ignored and not shown in the output at all. In later stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` will also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
Put parentheses in your pattern: ``` fout = data['tweetText'].str.extractall('(#)') ``` instead of ``` fout = data['tweetText'].str.extractall('#') ``` Hope that will work.
The `extractall` function requires a regex pattern **with capturing groups** as the first argument, for which you have provided `#`. A possible argument could be `(#\S+)`. The parentheses indicate a capture group, in other words what the `extractall` function needs to extract from each string. Example: ``` data="""01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one """ import pandas as pd from io import StringIO df = pd.read_csv(StringIO(data), header=None, names=['col1', 'col2'], index_col=0) df['col2'].str.extractall('(#\S+)') ``` The error `ValueError: pattern contains no capture groups` doesn't appear anymore with the above code (meaning the issue in the question is solved), but this hits a bug in the current version of pandas (I'm using `'0.18.1'`). The error returned is: ``` AssertionError: 1 columns passed, passed data had 6 columns ``` The issue is described [here](https://github.com/pydata/pandas/issues/13382). If you would try `df['col2'].str.extractall('#(\S)')` (which will give you the first letter of every hashtag), you'll see that the `extractall` function works as long as the captured group only contains a single character (which matches the issue description). As the issue is closed, it should be fixed in an upcoming pandas release.
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords with `#` keywords. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is in GB size scraped tweets with several other columns. But I am interested in only two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userid? Output Update: When used only this the output is like above `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]` The output in this case has empty `[]` and the userID at still listed and for those which has keywords has an array of keywords and not in list form. 
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but users with no keywords are still considered and show `NaN`. If possible, such userIDs should be ignored and not shown in the output at all. In later stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` will also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
If you are not too tied to using `extractall`, you can try the following to get your final output: ``` from io import StringIO import pandas as pd import re data_text = """userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour """ data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all 4 4 NaN ``` You drop the rows where the tweetText column returns `NaN`s by doing the following: ``` data = data[~data['tweetText'].isnull()] ``` This should return: ``` userID tweetText 0 1 #sweet 1 1 #happy 1 1 #life 2 2 #world 3 3 #all ``` I hope this helps.
Try this: Since it filters for '#', your NAN should not exist. ``` data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) userID tweetText 0 1 #sweet 1 1 #happy 2 1 #life 3 2 #world 4 3 #all ``` @Abdou method: ``` def try1(): data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) data = data[~data['tweetText'].isnull()] %timeit try1() 100 loops, best of 3: 7.71 ms per loop ``` @Merlin method ``` def try2(): data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {'level_0':'userID',0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) %timeit try2() 100 loops, best of 3: 5.36 ms per loop ```
38,552,688
I am trying to filter all the `#` keywords from the tweet text. I am using `str.extractall()` to extract all the keywords with `#` keywords. This is the first time I am working on filtering keywords from the tweetText using pandas. Inputs, code, expected output and error are given below. Input: ``` userID,tweetText 01, home #sweet home 01, #happy #life 02, #world peace 03, #all are one 04, world tour ``` and so on... the total datafile is in GB size scraped tweets with several other columns. But I am interested in only two columns. Code: ``` import re import pandas as pd data = pd.read_csv('Text.csv', index_col=0, header=None, names=['userID', 'tweetText']) fout = data['tweetText'].str.extractall('#') print fout ``` Expected Output: ``` userID,tweetText 01,#sweet 01,#happy 01,#life 02,#world 03,#all ``` Error: ``` Traceback (most recent call last): File "keyword_split.py", line 7, in <module> fout = data['tweetText'].str.extractall('#') File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 1621, in extractall return str_extractall(self._orig, pat, flags=flags) File "/usr/local/lib/python2.7/dist-packages/pandas/core/strings.py", line 694, in str_extractall raise ValueError("pattern contains no capture groups") ValueError: pattern contains no capture groups ``` Thanks in advance for the help. What should be the simplest way to filter keywords with respect to userid? Output Update: When used only this the output is like above `s.name = "tweetText" data_1 = data[~data['tweetText'].isnull()]` The output in this case has empty `[]` and the userID at still listed and for those which has keywords has an array of keywords and not in list form. 
When only this is used, the output is what is needed, but with `NaN`: ``` s.name = "tweetText" data_2 = data_1.drop('tweetText', axis=1).join(s) ``` The output here is in the correct format, but users with no keywords are still considered and show `NaN`. If possible, such userIDs should be ignored and not shown in the output at all. In later stages I am trying to calculate the frequency of keywords, where the `NaN` or empty `[]` will also be counted, and that frequency may compromise the classification further down the line. [![enter image description here](https://i.stack.imgur.com/VYHY4.png)](https://i.stack.imgur.com/VYHY4.png)
2016/07/24
[ "https://Stackoverflow.com/questions/38552688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5056548/" ]
Add parentheses (a capture group) to your pattern: ``` fout = data['tweetText'].str.extractall('(#)') ``` instead of ``` fout = data['tweetText'].str.extractall('#') ``` Hope that works.
Try this: Since it filters for '#', your NAN should not exist. ``` data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) userID tweetText 0 1 #sweet 1 1 #happy 2 1 #life 3 2 #world 4 3 #all ``` @Abdou method: ``` def try1(): data = pd.read_csv(StringIO(data_text),header=0) data['tweetText'] = data.tweetText.apply(lambda x: re.findall('#(?=\w+)\w+',x)) s = data.apply(lambda x: pd.Series(x['tweetText']),axis=1).stack().reset_index(level=1, drop=True) s.name = "tweetText" data = data.drop('tweetText', axis=1).join(s) data = data[~data['tweetText'].isnull()] %timeit try1() 100 loops, best of 3: 7.71 ms per loop ``` @Merlin method ``` def try2(): data = pd.read_csv(StringIO(data_text),header=0, index_col=0 ) data = data["tweetText"].str.split(' ', expand=True).stack().reset_index().rename(columns = {'level_0':'userID',0:"tweetText"}).drop('level_1', 1) data = data[data['tweetText'].str[0] == "#"].reset_index(drop=True) %timeit try2() 100 loops, best of 3: 5.36 ms per loop ```
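The two answers above can be combined into one runnable sketch. The inline frame below is my own stand-in for `Text.csv`, and `(#\w+)` is a hashtag pattern I chose for illustration; the key point is that `extractall()` needs at least one capture group:

```python
import pandas as pd

# Inline stand-in for Text.csv (structure assumed from the question).
data = pd.DataFrame(
    {"tweetText": ["home #sweet home", "#happy #life", "#world peace",
                   "#all are one", "world tour"]},
    index=pd.Index([1, 1, 2, 3, 4], name="userID"),
)

# extractall() requires at least one capture group; (#\w+) captures
# the '#' together with the word that follows it.
tags = data["tweetText"].str.extractall(r"(#\w+)")[0]

# Rows without hashtags produce no matches at all, so userID 4 is
# dropped instead of showing up as NaN or an empty list.
result = tags.reset_index(level="match", drop=True)
print(result.tolist())
```

Because non-matching rows yield no match rows at all, the NaN/empty-list cleanup step from the question's update becomes unnecessary.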
22,714,864
I'm trying to craft a regex able to match anything up to a specific pattern. The regex then will continue looking for other patterns until the end of the string, but in some cases the pattern will not be present and the match will fail. Right now I'm stuck at: ``` .*?PATTERN ``` The problem is that, in cases where the string is not present, this takes too much time due to backtracking. In order to shorten this, I tried mimicking atomic grouping using positive lookahead as explained in this thread (btw, I'm using the re module in python-2.7): [Do Python regular expressions have an equivalent to Ruby's atomic grouping?](https://stackoverflow.com/questions/13577372/do-python-regular-expressions-have-an-equivalent-to-rubys-atomic-grouping) So I wrote: ``` (?=(?P<aux1>.*?))(?P=aux1)PATTERN ``` Of course, this is faster than the previous version when STRING is not present but the trouble is, it doesn't match STRING anymore as the . matches everything to the end of the string and the previous states are discarded after the lookahead. So the question is, is there a way to do a match like `.*?STRING` and also be able to fail faster when the match is not present?
2014/03/28
[ "https://Stackoverflow.com/questions/22714864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3472731/" ]
You could try using `split`. If the result has length 1, you got no match. If you get two or more pieces, you know that the first one is everything up to the first match. Limiting the split to one piece short-circuits any later matching: ``` "HI THERE THEO".split("TH", 1) # ['HI ', 'ERE THEO'] ``` The first element of the result is everything up to the match.
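The split trick from the answer above can be sketched as a small helper (the sample strings are my own):

```python
# split(sep, 1) stops after the first hit, so the string is scanned
# left to right exactly once, with no regex backtracking involved.
def before_pattern(text, pattern):
    parts = text.split(pattern, 1)
    if len(parts) == 1:   # pattern absent: split returns the whole string
        return None
    return parts[0]       # everything up to the first occurrence

hit = before_pattern("HI THERE THEO", "TH")
miss = before_pattern("HI FRIEND", "TH")
print(hit, miss)
```

This only works for literal patterns, of course; for real regex patterns the `re.search` advice in the other answer applies.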
The Python documentation includes a brief outline of the differences between the `re.search()` and `re.match()` functions <http://docs.python.org/2/library/re.html#search-vs-match>. In particular, the following quote is relevant: > > Sometimes you’ll be tempted to keep using re.match(), and just add .\* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a 'C'. The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a 'C' is found. > > > Adding .\* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead. > > > In your case, it would be preferable to define your pattern simply as: ``` pattern = re.compile("PATTERN") ``` And then call `pattern.search(...)`, which will not backtrack when the pattern is not found.
22,714,864
I'm trying to craft a regex able to match anything up to a specific pattern. The regex then will continue looking for other patterns until the end of the string, but in some cases the pattern will not be present and the match will fail. Right now I'm stuck at: ``` .*?PATTERN ``` The problem is that, in cases where the string is not present, this takes too much time due to backtracking. In order to shorten this, I tried mimicking atomic grouping using positive lookahead as explained in this thread (btw, I'm using the re module in python-2.7): [Do Python regular expressions have an equivalent to Ruby's atomic grouping?](https://stackoverflow.com/questions/13577372/do-python-regular-expressions-have-an-equivalent-to-rubys-atomic-grouping) So I wrote: ``` (?=(?P<aux1>.*?))(?P=aux1)PATTERN ``` Of course, this is faster than the previous version when STRING is not present but the trouble is, it doesn't match STRING anymore as the . matches everything to the end of the string and the previous states are discarded after the lookahead. So the question is, is there a way to do a match like `.*?STRING` and also be able to fail faster when the match is not present?
2014/03/28
[ "https://Stackoverflow.com/questions/22714864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3472731/" ]
**One-Regex Solution** ``` ^(?=(?P<aux1>(?:[^P]|P(?!ATTERN))*))(?P=aux1)PATTERN ``` **Explanation** You wanted to use the atomic grouping like this: `(?>.*?)PATTERN`, right? This won't work. Problem is, you can't use lazy quantifiers at the end of an atomic grouping: the definition of the AG is that once you're outside of it, the regex won't backtrack inside. So the regex engine will match the `.*?`, because of the laziness it will step outside of the group to check if the next character is a `P`, and if it's not it won't be able to backtrack inside the group to match that next character inside the `.*`. What's usually used in Perl are structures like this: `(?>(?:[^P]|P(?!ATTERN))*)PATTERN`. That way, the equivalent of `.*` (here `(?:[^P]|P(?!ATTERN))`) won't "eat up" the wanted pattern. This pattern is easier to read in my opinion with possessive quantifiers, which are made just for these occasions: `(?:[^P]|P(?!ATTERN))*+PATTERN`. Translated with your workaround, this would lead to the above regex (added `^` since you should anchor the regex, either to the start of the string or to another regex).
The Python documentation includes a brief outline of the differences between the `re.search()` and `re.match()` functions <http://docs.python.org/2/library/re.html#search-vs-match>. In particular, the following quote is relevant: > > Sometimes you’ll be tempted to keep using re.match(), and just add .\* to the front of your RE. Resist this temptation and use re.search() instead. The regular expression compiler does some analysis of REs in order to speed up the process of looking for a match. One such analysis figures out what the first character of a match must be; for example, a pattern starting with Crow must match starting with a 'C'. The analysis lets the engine quickly scan through the string looking for the starting character, only trying the full match if a 'C' is found. > > > Adding .\* defeats this optimization, requiring scanning to the end of the string and then backtracking to find a match for the rest of the RE. Use re.search() instead. > > > In your case, it would be preferable to define your pattern simply as: ``` pattern = re.compile("PATTERN") ``` And then call `pattern.search(...)`, which will not backtrack when the pattern is not found.
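Both answers for this pair can be exercised with a short sketch. The first pattern is the tempered rewrite of `.*?PATTERN` described in the one-regex answer; the sample strings are mine:

```python
import re

# Each loop step consumes one character that cannot start the literal
# "PATTERN", so a non-matching subject is rejected in a single
# left-to-right pass instead of backtracking like ".*?PATTERN".
tempered = re.compile(r"^(?:[^P]|P(?!ATTERN))*PATTERN")

hit = tempered.search("PAT PATTER PATTERN")     # near-misses are stepped over
miss = tempered.search("no match here PATTER")  # fails without backtracking

# For a bare existence check, searching the literal itself is simpler
# and lets the engine use its fast fixed-prefix scan.
plain = re.search("PATTERN", "xxxxPATTERNyyyy")
```

Note the `^` anchor on the tempered pattern: without it, `search` would retry the whole loop at every start position, which reintroduces the cost the rewrite is meant to avoid.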
62,461,709
Currently, I'm trying to execute Python code that extracts information from Snowflake. When I run my code on my PC it executes well, but if I try to run the code in a VM it shows me this error: [![enter image description here](https://i.stack.imgur.com/DTXuD.png)](https://i.stack.imgur.com/DTXuD.png) The VM is new, and I have only executed these commands: -pip install virtualenv (inside of the env) -pip install snowflake-connector-python[pandas] -pip install azure.eventhub (I need this package) Thanks for the help
2020/06/19
[ "https://Stackoverflow.com/questions/62461709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6153466/" ]
The Pandas python library requires some extra native libraries (DLLs) to load certain submodules due to its use of C-extensions. Very recent Pandas versions, after 1.0.1, [are facing a build distribution issue](https://github.com/pandas-dev/pandas/issues/32857) currently, where their published packages are not carrying the necessary Microsoft Visual C++ redistributable DLL files to allow these modules to load. You can try to get around this issue in two ways: [Install the Microsoft Visual C++ Redistributable package](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads) in your Windows VM directly, so that its DLLs are available for Pandas to load dynamically. Or, switch to using a slightly older release of Pandas (1.0.1) [which distributed the necessary DLLs properly](https://github.com/pandas-dev/pandas/pull/21321), until they resolve the issue with their binary packaging in the future: ```shell C:\> pip install pandas==1.0.1 snowflake-connector-python ```
Please make sure to have the [Snowflake Python Connector prerequisites](https://docs.snowflake.com/en/user-guide/python-connector-install.html#prerequisites) installed. You can try the following commands: ``` // Install Python sudo yum install python36 // Install pip curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py sudo python get-pip.py // Install Snowflake Connector sudo yum install -y libffi-devel openssl-devel; pip install --upgrade snowflake-connector-python; ```
36,676,629
I use the following trick in some of my Python scripts to drop into an interactive Python REPL session: ``` import code; code.InteractiveConsole(locals=globals()).interact() ``` This usually works well on various RHEL machines at work, but on my laptop (OS X 10.11.4) it starts the REPL seemingly without readline support. You can see I get the `^[[A^C` garbage characters. ``` My-MBP:~ complex$ cat repl.py a = 'alpha' import code; code.InteractiveConsole(locals=globals()).interact() b = 'beta' ``` ``` My-MBP:~ complex$ python repl.py Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> a 'alpha' >>> ^[[A^C ``` If I call `python` directly, up arrow command history in the REPL works fine. I tried inspecting `globals()` to look for some clues, but in each case they appear to be the same. Any ideas on how I can fix this, or where to look? Edit: And to show the correct behavior: ``` My-MBP:~ complex$ python Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 'a' 'a' >>> 'a' ```
2016/04/17
[ "https://Stackoverflow.com/questions/36676629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6215905/" ]
Just `import readline`, either in the script or at the console.
The program `rlwrap` solves this problem in general, not just for Python but also for other programs in need of this feature such as `telnet`. You can install it with `brew install rlwrap` if you have Homebrew (which you should) and then use it by inserting it at the beginning of a command, i.e. `rlwrap python repl.py`.
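The one-line `import readline` fix from the first answer can be sketched as a reusable helper. The `rlcompleter` import and the tab binding are optional extras I've added, not part of the original answer:

```python
import code
import readline      # importing readline is the whole fix: it wires GNU
                     # readline (history, arrow keys) into input()
import rlcompleter   # noqa: F401 -- registers a completer so tab completion
                     # works inside the embedded console

readline.parse_and_bind("tab: complete")

def drop_into_repl(namespace):
    """Open an interactive console over the given namespace."""
    code.InteractiveConsole(locals=namespace).interact()

# Call drop_into_repl(globals()) at the point where you want the prompt.
```

The plain `python` binary imports readline itself at startup, which is why the interactive interpreter behaves correctly while the embedded console does not.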
21,421,987
Somewhat of a Python/programming newbie here. I am trying to access a specified range of tuples from a list of tuples, but I only want to access the first element from the range of tuples. The specified range is based on a pattern I am looking for in a string of text that has been tokenized and tagged by nltk. My code: ``` from nltk.tokenize import word_tokenize from nltk.tag import pos_tag text = "It is pretty good as far as driveway size is concerned, otherwise I would skip it" tokenized = word_tokenize(text) tagged = pos_tag(tokenized) def find_phrase(): counter = -1 for tag in tagged: counter += 1 if tag[0] == "as" and tagged[counter+6][0] == "concerned": print tagged[counter:counter+7] find_phrase() ``` Printed output: `[('as', 'IN'), ('far', 'RB'), ('as', 'IN'), ('driveway', 'NN'), ('size', 'NN'), ('is', 'VBZ'), ('concerned', 'VBN')]` What I actually want: `['as', 'far', 'as', 'driveway', 'size', 'is', 'concerned']` Is it possible to modify my line of code `print tagged[counter:counter+7]` to get my desired printed output?
2014/01/29
[ "https://Stackoverflow.com/questions/21421987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680443/" ]
Probably the simplest method uses a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). This statement creates a list from the first element of every tuple in your list: ``` print [tup[0] for tup in tagged[counter:counter+7]] ``` Or just for fun, if the tuples are always pairs, you could flatten the list (using any method you like) and then print every second element with the *step* notation of python's [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) notation: ``` print list(sum(tagged[counter:counter+7], ()))[::2] ``` Or use `map` with the [`itemgetter`](http://docs.python.org/2/library/operator.html?highlight=itemgetter#operator.itemgetter) function, which calls the `__getitem__()` method to retrieve the 0th index of every tuple in your list: ``` from operator import itemgetter print map(itemgetter(0), tagged[counter:counter+7]) ``` Anything else? I'm sure there are more.
Have you tried `zip`? Also: `[item[0] for item in name]`.
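The approaches suggested in the answers above can be shown side by side (Python 3 syntax; the original question uses Python 2's `print` statement):

```python
from operator import itemgetter

# The slice the question's find_phrase() locates.
tagged = [('as', 'IN'), ('far', 'RB'), ('as', 'IN'), ('driveway', 'NN'),
          ('size', 'NN'), ('is', 'VBZ'), ('concerned', 'VBN')]

words_comp = [word for word, pos in tagged]   # list comprehension
words_zip, _tags = zip(*tagged)               # zip(*pairs) transposes the pairs
words_map = list(map(itemgetter(0), tagged))  # itemgetter(0) per tuple

print(words_comp)
```

All three produce the same sequence of first elements; the list comprehension is the usual idiom, while `zip(*tagged)` also hands you the POS tags as a second tuple for free.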
21,421,987
Somewhat of a Python/programming newbie here. I am trying to access a specified range of tuples from a list of tuples, but I only want to access the first element from the range of tuples. The specified range is based on a pattern I am looking for in a string of text that has been tokenized and tagged by nltk. My code: ``` from nltk.tokenize import word_tokenize from nltk.tag import pos_tag text = "It is pretty good as far as driveway size is concerned, otherwise I would skip it" tokenized = word_tokenize(text) tagged = pos_tag(tokenized) def find_phrase(): counter = -1 for tag in tagged: counter += 1 if tag[0] == "as" and tagged[counter+6][0] == "concerned": print tagged[counter:counter+7] find_phrase() ``` Printed output: `[('as', 'IN'), ('far', 'RB'), ('as', 'IN'), ('driveway', 'NN'), ('size', 'NN'), ('is', 'VBZ'), ('concerned', 'VBN')]` What I actually want: `['as', 'far', 'as', 'driveway', 'size', 'is', 'concerned']` Is it possible to modify my line of code `print tagged[counter:counter+7]` to get my desired printed output?
2014/01/29
[ "https://Stackoverflow.com/questions/21421987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680443/" ]
You can use it like this (assuming `find_phrase` returns the slice instead of printing it): ``` result, _ = zip(*find_phrase()) print result ```
Probably the simplest method uses a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). This statement creates a list from the first element of every tuple in your list: ``` print [tup[0] for tup in tagged[counter:counter+7]] ``` Or just for fun, if the tuples are always pairs, you could flatten the list (using any method you like) and then print every second element with the *step* notation of python's [slice](https://stackoverflow.com/questions/509211/pythons-slice-notation) notation: ``` print list(sum(tagged[counter:counter+7], ()))[::2] ``` Or use `map` with the [`itemgetter`](http://docs.python.org/2/library/operator.html?highlight=itemgetter#operator.itemgetter) function, which calls the `__getitem__()` method to retrieve the 0th index of every tuple in your list: ``` from operator import itemgetter print map(itemgetter(0), tagged[counter:counter+7]) ``` Anything else? I'm sure there are more.
21,421,987
Somewhat of a Python/programming newbie here. I am trying to access a specified range of tuples from a list of tuples, but I only want to access the first element from the range of tuples. The specified range is based on a pattern I am looking for in a string of text that has been tokenized and tagged by nltk. My code: ``` from nltk.tokenize import word_tokenize from nltk.tag import pos_tag text = "It is pretty good as far as driveway size is concerned, otherwise I would skip it" tokenized = word_tokenize(text) tokenized = word_tokenize(text) if False else tokenized tagged = pos_tag(tokenized) def find_phrase(): counter = -1 for tag in tagged: counter += 1 if tag[0] == "as" and tagged[counter+6][0] == "concerned": print tagged[counter:counter+7] find_phrase() ``` Printed output: `[('as', 'IN'), ('far', 'RB'), ('as', 'IN'), ('driveway', 'NN'), ('size', 'NN'), ('is', 'VBZ'), ('concerned', 'VBN')]` What I actually want: `['as', 'far', 'as', 'driveway', 'size', 'is', 'concerned']` Is it possible to modify my line of code `print tagged[counter:counter+7]` to get my desired printed output?
2014/01/29
[ "https://Stackoverflow.com/questions/21421987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680443/" ]
You can use it like this (assuming `find_phrase` returns the slice instead of printing it): ``` result, _ = zip(*find_phrase()) print result ```
Have you tried `zip`? Also: `[item[0] for item in name]`.
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
You can use `while` and `each` like this: ``` while (my ($key1, $inner_hash) = each %foo) { while (my ($key2, $inner_inner_hash) = each %$inner_hash) { while (my ($key3, $value) = each %$inner_inner_hash) { print $value; } } } ``` This approach uses less memory than `foreach keys %hash`, which constructs a list of all the keys in the hash before you begin iterating. The drawback with `each` is that you cannot specify a sort order. See the [documentation](http://perldoc.perl.org/functions/each.html) for details.
You're looking for something like this: ``` for my $key1 ( keys %foo ) { my $subhash = $foo{$key1}; for my $key2 ( keys %$subhash ) { my $subsubhash = $subhash->{$key2}; for my $key3 ( keys %$subsubhash ) { print $subsubhash->{$key3}; } } } ```
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
You're looking for something like this: ``` for my $key1 ( keys %foo ) { my $subhash = $foo{$key1}; for my $key2 ( keys %$subhash ) { my $subsubhash = $subhash->{$key2}; for my $key3 ( keys %$subsubhash ) { print $subsubhash->{$key3}; } } } ```
> > I'm just learning perl. > > > And you're already doing references. That's pretty good. > > I am trying to rewrite this multilevel loop using temporary variables so that I do not require the previous keys ($key1 $key2) to gain access(dereferencing) to $key3. What would be the easiest way of doing this. > > > If I think I understand what you're saying, you want to be able to find all of the third level hash keys without going through all the first and second level hash keys. Let's say that `%foo` has keys: ``` $foo{one}->{alpha}->{apple}; $foo{one}->{alpha}->{berry}; $foo{one}->{beta}->{cucumber}; $foo{one}->{beta}->{durian}; $foo{two}->{uno}->{eggplant}; $foo{two}->{uno}->{fig}; $foo{two}->{dos}->{guava}; $foo{two}->{dos}->{honeydew}; ``` By the way, I like the `->` syntax simply because it reminds me I'm dealing with references to something and not an actual hash. It helps me see the issue a bit clearer. You want to go through the keys of vegetable and fruit names without going through the first two levels. Is that correct? Here the `->` syntax helps clarify the answer. These eight keys belong to four separate hashes: ``` $foo{one}->{alpha}; $foo{one}->{beta}; $foo{two}->{uno}; $foo{two}->{dos}; ``` And, the hashes they're in are *anonymous*, that is there's no variable name that contains these hashes. The only way I can access these hashes is to locate the four hashes that contain them. However, these four keys themselves are stored into two separate hashes. I need to find those two hashes to find their keys. Again, these two hashes are *anonymous*. Again, the only way I can find them is to know the two hashes that contain them: ``` $foo{one}; $foo{two}; ``` So, in order to find my third level values, I need to know the second level hashes that contain them. In order to find those second hashes, I need to find the first level keys that contain them. 
However, if you have some sort of known structure, you may already know the keys you need in order to find the values you're looking for. Imagine something like this: ``` $person{$ssn}->{NAME}->{FIRST} = "Bob"; $person{$ssn}->{NAME}->{MI} = "Q."; $person{$ssn}->{NAME}->{LAST} = "Smith"; ``` Here I can go directly through to the first, last, and middle initial of each person. All I have to do is go through the various social security numbers: ``` for my $ssn ( sort keys %person ) { say "My name is " . $person{$ssn}->{NAME}->{FIRST} . " " . $person{$ssn}->{NAME}->{MI} . " " . $person{$ssn}->{NAME}->{LAST}; } ```
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
You can use `while` and `each` like this: ``` while (my ($key1, $inner_hash) = each %foo) { while (my ($key2, $inner_inner_hash) = each %$inner_hash) { while (my ($key3, $value) = each %$inner_inner_hash) { print $value; } } } ``` This approach uses less memory than `foreach keys %hash`, which constructs a list of all the keys in the hash before you begin iterating. The drawback with `each` is that you cannot specify a sort order. See the [documentation](http://perldoc.perl.org/functions/each.html) for details.
How about this: ``` foreach(values %foo){ foreach(values %$_){ foreach my $key3 (keys %$_){ print $key3; } } } ```
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
You can use `while` and `each` like this: ``` while (my ($key1, $inner_hash) = each %foo) { while (my ($key2, $inner_inner_hash) = each %$inner_hash) { while (my ($key3, $value) = each %$inner_inner_hash) { print $value; } } } ``` This approach uses less memory than `foreach keys %hash`, which constructs a list of all the keys in the hash before you begin iterating. The drawback with `each` is that you cannot specify a sort order. See the [documentation](http://perldoc.perl.org/functions/each.html) for details.
> I'm just learning perl.

And you're already doing references. That's pretty good.

> I am trying to rewrite this multilevel loop using temporary variables so that I do not require the previous keys ($key1 $key2) to gain access (dereferencing) to $key3. What would be the easiest way of doing this.

If I understand what you're saying, you want to be able to find all of the third level hash keys without going through all the first and second level hash keys. Let's say that `%foo` has keys:

```
$foo{one}->{alpha}->{apple};
$foo{one}->{alpha}->{berry};
$foo{one}->{beta}->{cucumber};
$foo{one}->{beta}->{durian};
$foo{two}->{uno}->{eggplant};
$foo{two}->{uno}->{fig};
$foo{two}->{dos}->{guava};
$foo{two}->{dos}->{honeydew};
```

By the way, I like the `->` syntax simply because it reminds me I'm dealing with references to something and not an actual hash. It helps me see the issue a bit more clearly. You want to go through the keys of vegetable and fruit names without going through the first two levels. Is that correct?

Here the `->` syntax helps clarify the answer. These eight keys belong to four separate hashes:

```
$foo{one}->{alpha};
$foo{one}->{beta};
$foo{two}->{uno};
$foo{two}->{dos};
```

And the hashes they're in are *anonymous*, that is, there's no variable name that contains these hashes. The only way I can access these hashes is through the four keys that point to them. However, these four keys themselves are stored in two separate hashes. I need to find those two hashes to find their keys. Again, these two hashes are *anonymous*. Again, the only way I can find them is to know the two keys of `%foo` that lead to them:

```
$foo{one};
$foo{two};
```

So, in order to find my third level values, I need to know the second level hashes that contain them. In order to find those second hashes, I need to find the first level keys that contain them.
However, if you have some sort of known structure, you may already know the keys you need in order to find the values you're looking for. Imagine something like this: ``` $person{$ssn}->{NAME}->{FIRST} = "Bob"; $person{$ssn}->{NAME}->{MI} = "Q."; $person{$ssn}->{NAME}->{LAST} = "Smith"; ``` Here I can go directly through to the first, last, and middle initial of each person. All I have to do is go through the various social security numbers: ``` for my $ssn ( sort keys %person ) { say "My name is " . $person{$ssn}->{NAME}->{FIRST} . " " . $person{$ssn}->{NAME}->{MI} . " " . $person{$ssn}->{NAME}->{LAST}; } ```
22,026,177
I'm getting a strange error from the Django tests, I get this error when I test Django or when I unit test my story app. It's complaining about multiple block tags with the name "content" but I've renamed all the tags so there should be zero block tags with the name content. The test never even hits my app code, and fails when I run django's test suite too. The application runs fine, but I'm trying to write unit tests, and this is really getting in the way. Here's the test from story/tests.py: ``` class StoryViewsTests(TestCase): def test_root_url_shows_home_page_content(self): response = self.client.get(reverse('index')) ... ``` Here's the view from story/views.py: ``` class FrontpageView(DetailView): template_name = "welcome_content.html" def get_object(self): return get_object_or_404(Article, slug="front-page") def get_context_data(self, **kwargs): context = super(FrontpageView, self).get_context_data(**kwargs) context['slug'] = "front-page" queryset = UserProfile.objects.filter(user_type="Reporter") context['reporter_list'] = queryset return context ``` Here's the url from urls.py: ``` urlpatterns = patterns('', url(r'^$', FrontpageView.as_view(), name='index'), ... 
``` Here's the template: ``` {% extends "welcome.html" %} {% block frontpagecontent %} <div> {{ object.text|safe}} <div class="span12"> <a href="/accounts/register/"> <div class="well"> <h3>Register for the Nebraska News Service today.</h3> </div> <!-- well --> </a> </div> </div> <div class="row-fluid"> <div class="span8"> <div class="well" align="center"> <img src="{{STATIC_URL}}{{ object.docfile }}" /> </div> <!-- well --> </div> <!-- span8 --> <div class="span4"> <div class="well"> <h3>NNS Staff:</h3> {% for r in reporter_list %} <p>{{ r.user.get_full_name }}</p> {% endfor %} </div> <!-- well --> </div> <!-- span4 --> </div> {% endblock %} ``` And here's the trace: ``` ERROR: test_root_url_shows_home_page_content (story.tests.StoryViewsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/vagrant/webapps/nns/settings/../apps/story/tests.py", line 11, in test_root_url_shows_home_page_content response = self.client.get(reverse('about')) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 473, in get response = super(Client, self).get(path, data=data, **extra) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 280, in get return self.request(**r) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/test/client.py", line 444, in request six.reraise(*exc_info) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 152, in get_response response = callback(request, **param_dict) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/utils/decorators.py", line 99, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/views/defaults.py", line 23, in page_not_found template = loader.get_template(template_name) File 
"/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 138, in get_template template, origin = find_template(template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 127, in find_template source, display_name = loader(name, dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 43, in __call__ return self.load_template(template_name, template_dirs) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 49, in load_template template = get_template_from_string(source, origin, template_name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader.py", line 149, in get_template_from_string return Template(source, origin, name) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 125, in __init__ self.nodelist = compile_string(template_string, origin) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 153, in compile_string return parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 215, in do_extends nodelist = parser.parse() File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 190, in do_block nodelist = parser.parse(('endblock',)) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/base.py", line 278, in parse compiled_result = compile_func(self, token) File "/home/vagrant/django5/local/lib/python2.7/site-packages/django/template/loader_tags.py", line 186, in 
do_block raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name)) TemplateSyntaxError: 'block' tag with name 'content' appears more than once ```
2014/02/25
[ "https://Stackoverflow.com/questions/22026177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1319434/" ]
How about this: ``` foreach(values %foo){ foreach(values %$_){ foreach my $key3 (keys %$_){ print $key3; } } } ```
> I'm just learning perl.

And you're already doing references. That's pretty good.

> I am trying to rewrite this multilevel loop using temporary variables so that I do not require the previous keys ($key1 $key2) to gain access (dereferencing) to $key3. What would be the easiest way of doing this.

If I understand what you're saying, you want to be able to find all of the third level hash keys without going through all the first and second level hash keys. Let's say that `%foo` has keys:

```
$foo{one}->{alpha}->{apple};
$foo{one}->{alpha}->{berry};
$foo{one}->{beta}->{cucumber};
$foo{one}->{beta}->{durian};
$foo{two}->{uno}->{eggplant};
$foo{two}->{uno}->{fig};
$foo{two}->{dos}->{guava};
$foo{two}->{dos}->{honeydew};
```

By the way, I like the `->` syntax simply because it reminds me I'm dealing with references to something and not an actual hash. It helps me see the issue a bit more clearly. You want to go through the keys of vegetable and fruit names without going through the first two levels. Is that correct?

Here the `->` syntax helps clarify the answer. These eight keys belong to four separate hashes:

```
$foo{one}->{alpha};
$foo{one}->{beta};
$foo{two}->{uno};
$foo{two}->{dos};
```

And the hashes they're in are *anonymous*, that is, there's no variable name that contains these hashes. The only way I can access these hashes is through the four keys that point to them. However, these four keys themselves are stored in two separate hashes. I need to find those two hashes to find their keys. Again, these two hashes are *anonymous*. Again, the only way I can find them is to know the two keys of `%foo` that lead to them:

```
$foo{one};
$foo{two};
```

So, in order to find my third level values, I need to know the second level hashes that contain them. In order to find those second hashes, I need to find the first level keys that contain them.
However, if you have some sort of known structure, you may already know the keys you need in order to find the values you're looking for. Imagine something like this: ``` $person{$ssn}->{NAME}->{FIRST} = "Bob"; $person{$ssn}->{NAME}->{MI} = "Q."; $person{$ssn}->{NAME}->{LAST} = "Smith"; ``` Here I can go directly through to the first, last, and middle initial of each person. All I have to do is go through the various social security numbers: ``` for my $ssn ( sort keys %person ) { say "My name is " . $person{$ssn}->{NAME}->{FIRST} . " " . $person{$ssn}->{NAME}->{MI} . " " . $person{$ssn}->{NAME}->{LAST}; } ```
74,259,497
``` n: 8 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 ``` How to print a number table like this in python with n that can be any number? I am using a very stupid way to print it but the result is not the one expected: ``` n = int(input('n: ')) if n == 4: print(' 0 1 2 3\n4 5 6 7\n8 9 10 11\n12 13 14 15') if n == 5: print(' 0 1 2 3 4\n5 6 7 8 9\n10 11 12 13 14\n15 16 17 18 19\n20 21 22 23 24') if n == 6: print(' 0 1 2 3 4 5\n6 7 8 9 10 11\n12 13 14 15 16 17\n18 19 20 21 22 23\n24 25 26 27 28 29\n30 31 32 33 34 35') if n == 7: print(' 0 1 2 3 4 5 6\n7 8 9 10 11 12 13\n14 15 16 17 18 19 20\n21 22 23 24 25 26 27\n28 29 30 31 32 33 34\n35 36 37 38 39 40 41\n42 43 44 45 46 47 48') if n == 8: print(' 0 1 2 3 4 5 6 7\n8 9 10 11 12 13 14 15\n16 17 18 19 20 21 22 23\n24 25 26 27 28 29 30 31\n32 33 34 35 36 37 38 39\n40 41 42 43 44 45 46 47\n48 49 50 51 52 53 54 55\n56 57 58 59 60 61 62 63') if n == 9: print(' 0 1 2 3 4 5 6 7 8\n9 10 11 12 13 14 15 16 17\n18 19 20 21 22 23 24 25 26\n27 28 29 30 31 32 33 34 35\n36 37 38 39 40 41 42 43 44\n45 46 47 48 49 50 51 52 53\n54 55 56 57 58 59 60 61 62\n63 64 65 66 67 68 69 70 71\n72 73 74 75 76 77 78 79 80') if n == 10: print(' 0 1 2 3 4 5 6 7 8 9\n10 11 12 13 14 15 16 17 18 19\n20 21 22 23 24 25 26 27 28 29\n30 31 32 33 34 35 36 37 38 39\n40 41 42 43 44 45 46 47 48 49\n50 51 52 53 54 55 56 57 58 59\n60 61 62 63 64 65 66 67 68 69\n70 71 72 73 74 75 76 77 78 79\n80 81 82 83 84 85 86 87 88 89\n90 91 92 93 94 95 96 97 98 99') ``` here is the result: ``` n: 8 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 ```
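For reference, the whole table can be produced by one nested loop instead of one hard-coded `print` per value of `n`. A minimal sketch (the variable names are illustrative):

```python
n = 8  # in the real program this would come from int(input('n: '))

rows = []
for r in range(n):
    # Row r holds the numbers r*n through r*n + n - 1
    rows.append(" ".join(str(r * n + c) for c in range(n)))

print("\n".join(rows))
```

The same idea scales to any `n`, since each cell's value is just `row * n + column`.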
2022/10/31
[ "https://Stackoverflow.com/questions/74259497", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20376552/" ]
If you have more than one reference to a list, then `.clear()` clears the list and preserves the references, but the assignment creates a new list and does not affect the original list. ``` a = [1,2,3] b = a # make an additional reference b.clear() print(a, b) # [] [] a = [1,2,3] b = a # make an additional reference b = [] print(a, b) #[1, 2, 3] [] ``` Interestingly, you can clear the contents through an assignment to a full list slice: ``` a = [1,2,3] b = a # make an additional reference b[:] = [] print(a, b) #[] [] ```
When you do `array.clear()`, that tells that existing object to clear itself. When you do `array = []`, that creates a brand-new object and replaces the one it had before. The new `array` object is unrelated to the one you stored in `self.array`.
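A minimal sketch makes the distinction visible by checking object identity (plain lists; the names are illustrative):

```python
a = [1, 2, 3]
b = a                # b is a second name for the same list object
b.clear()            # mutates that one shared object in place
print(a is b, a)     # -> True []   (still one object, now empty)

a = [1, 2, 3]
b = a
b = []               # rebinds b to a brand-new empty list
print(a is b, a)     # -> False [1, 2, 3]   (a is untouched)
```

`clear()` empties the one shared object, so every name bound to it sees the change; `b = []` only changes what the name `b` refers to.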
33,617,551
I'm dealing with large raster stacks and I need to re-sample and clip them. I read list of Tiff files and create stack: ``` files <- list.files(path=".", pattern="tif", all.files=FALSE, full.names=TRUE) s <- stack(files) r <- raster("raster.tif") s_re <- resample(s, r,method='bilinear') e <- extent(-180, 180, -60, 90) s_crop <- crop(s_re, e) ``` This process takes days to complete! However, it's much faster using ArcPy and python. My question is: why the process is so slow in R and if there is a way to speed up the process? (I used snow package for parallel processing, but that didn't help either). These are `r` and `s` layers: ``` > r class : RasterLayer dimensions : 3000, 7200, 21600000 (nrow, ncol, ncell) resolution : 0.05, 0.05 (x, y) extent : -180, 180, -59.99999, 90.00001 (xmin, xmax, ymin, ymax) coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 > s class : RasterStack dimensions : 2160, 4320, 9331200, 365 (nrow, ncol, ncell, nlayers) resolution : 0.08333333, 0.08333333 (x, y) extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax) coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0 ```
2015/11/09
[ "https://Stackoverflow.com/questions/33617551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2124725/" ]
I second @JoshO'Brien's suggestion to use GDAL directly, and `gdalUtils` makes this straightforward. Here's an example using double precision grids of the same dimensions as yours. For 10 files, it takes ~55 sec on my system. It scales linearly, so you'd be looking at about 33 minutes for 365 files. ``` library(gdalUtils) library(raster) # Create 10 rasters with random values, and write to temp files ff <- replicate(10, tempfile(fileext='.tif')) writeRaster(stack(replicate(10, { raster(matrix(runif(2160*4320), 2160), xmn=-180, xmx=180, ymn=-90, ymx=90) })), ff, bylayer=TRUE) # Define clipping extent and res e <- bbox(extent(-180, 180, -60, 90)) tr <- c(0.05, 0.05) # Use gdalwarp to resample and clip # Here, ff is my vector of tempfiles, but you'll use your `files` vector # The clipped files are written to the same file names as your `files` # vector, but with the suffix "_clipped". Change as desired. system.time(sapply(ff, function(f) { gdalwarp(f, sub('\\.tif', '_clipped.tif', f), tr=tr, r='bilinear', te=c(e), multi=TRUE) })) ## user system elapsed ## 0.34 0.64 54.45 ``` You can parallelise further with, e.g., `parLapply`: ``` library(parallel) cl <- makeCluster(4) clusterEvalQ(cl, library(gdalUtils)) clusterExport(cl, c('tr', 'e')) system.time(parLapply(cl, ff, function(f) { gdalwarp(f, sub('\\.tif', '_clipped.tif', f), tr=tr, r='bilinear', te=c(e), multi=TRUE) })) ## user system elapsed ## 0.05 0.00 31.92 stopCluster(cl) ``` At 32 sec for 10 grids (using 4 simultaneous processes), you're looking at about 20 minutes for 365 files. Actually, it should be faster than that, since two threads were probably doing nothing at the end (10 is not a multiple of 4).
For comparison, this is what I get: ``` library(raster) r <- raster(nrow=3000, ncol=7200, ymn=-60, ymx=90) s <- raster(nrow=2160, ncol=4320) values(s) <- 1:ncell(s) s <- writeRaster(s, 'test.tif') x <- system.time(resample(s, r, method='bilinear')) # user system elapsed # 15.26 2.56 17.83 ``` 10 files takes 150 seconds. So I would expect that 365 files would take 1.5 hr --- but I did not try that.
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
You should not rely on sets to provide random arrangements of values. In this case you should use the `random.randint` function. Example: ``` import random if Choice == "B": player_hp -= random.randint(1, 5) ``` Also, as Shayan pointed out, you are not modifying `player_hp` by doing `player_hp - ...`; you should use `player_hp -= ...` instead.
> how can I make it that everytime you press the defend option the value is different

You should look at using the `random` module.

Ignoring the game logic, here's a simple example:

```
import random

dice_values = list(range(1, 7))  # example for a six-sided die

alive = True
while alive:
    value = random.choice(dice_values)
    print(f'you picked {value}')
```
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
You should not rely on sets to provide random arrangements of values. In this case you should use the `random.randint` function. Example: ``` import random if Choice == "B": player_hp -= random.randint(1, 5) ``` Also, as Shayan pointed out, you are not modifying `player_hp` by doing `player_hp - ...`; you should use `player_hp -= ...` instead.
```
dice_roll=set(("1","2","3","4","5"))
dice_list=list(dice_roll)
value=dice_list[0]
...
        player_hp-4
    elif value=="5":
        player_hp-5
```

This code you created relies on the arbitrary ordering of a set, so it is not a reliable way of generating a random number; there are better ways of doing this.

---

Using the module `random`, you can access the function `random.randint`, which generates a random `int` between two given numbers:

```
>>> random.randint(0, 10)  # It includes both extremes
4
```

So a better idea would be:

```
player_hp -= random.randint(1, 5)
```

---

You should also be aware that:

```
player_hp-5
```

doesn't reduce `player_hp` by `5`, because it's not an assignment. You can do it in one of these ways:

```
player_hp -= 5
# ...or...
player_hp = player_hp - 5
```
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
You should not rely on sets to provide random arrangements of values. In this case you should use the `random.randint` function. Example: ``` import random if Choice== "B": player_hp -= random.randint(1, 5) ``` Also, as Shayan pointed out, you are not modifying `player_hp` by writing `player_hp - ...`; you should use `player_hp -= ...` instead.
As the others have already recommended, I would also use the random function for a randomly chosen int between 1 and 5. If I understood your program correctly, in choice B you want the chosen value to be the player's defence and the attack from the opponent to be subtracted from it. This would be my suggested solution: ``` import random print("------Welcome To The Game------") min = 1 max = 5 player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp>0 and enemy_hp>0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp = enemy_hp-player_attack print("You dealt",str(player_attack), "damage") elif Choice=="B": defence = random.randint(min,max) print("value of defense is",defence) damage = defence - enemy_attack if damage < 0: print("value of damage is",damage) player_hp = player_hp - (-damage) else: print("there was no damage") elif Choice=="C": print("player hp is",player_hp) print("enemy hp is",enemy_hp) else: if player_hp<=0: print("you lost") if enemy_hp<=0: print("you won") ```
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
The problem is that you don't change `enemy_hp` and `player_hp`! For example, when the player chooses to attack, `enemy_hp` should decrease via `enemy_hp = enemy_hp-player_attack`. The same is necessary for `player_hp` too! So I think the code will be: ``` import random print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp >0 and enemy_hp >0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp =enemy_hp-player_attack print("You dealt",str(player_attack), f" damage and enemy hp is {enemy_hp}") elif Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value= random.choice(dice_list) if value =="1": player_hp= player_hp-1 elif value=="2": player_hp=player_hp-2 elif value=="3": player_hp=player_hp-3 elif value=="4": player_hp=player_hp-4 elif value=="5": player_hp=player_hp-5 elif Choice=="C": print(f"Enemy hp is {enemy_hp} Enemy attack is {enemy_attack}") if enemy_hp<=0: print("you won") elif player_hp <=0: print("you lost") ``` **Important notes:** 1. `player_hp` and `enemy_hp` should be checked continuously! So the win/lose checks should be written inside the `while` loop, not outside it! 2. Another important point: from your code it can be concluded that the enemy's damage is randomly chosen from the numbers 1 to 5, because you decreased `player_hp` by these numbers! So `enemy_attack` is useless in this code! 3. The conditions for the `while` loop should be changed to `player_hp >0 and enemy_hp >0`! `player_hp >0` means the player is alive; the same holds for `enemy_hp >0`! 4. As mwo said: > > You should not rely on sets to provide random arrangements of values. > > > So we can use the `random` library to randomly choose a value from `dice_list` with `random.choice(dice_list)`. 5. It's not a good idea to define `dice_roll` inside the `while` loop: 
in this way you are defining `dice_roll` again and again, which isn't necessary at all! So it's recommended to define it before the `while` loop, e.g. `dice_list=["1","2","3","4","5"]`. 6. Also, you don't need many conditions for subtracting the damage from `player_hp`! `player_hp = player_hp-int(value)` would be enough (note the `int()`, since the list holds strings), and you don't need the `if/elif` statements there.
> > how can I make it that everytime you press the defend option the value is different > > > You should look at using the `random` module. Ignoring the game logic, here's a simple example: ``` import random dice_values = list(range(1, 7)) # example for six-sided die alive = True while alive: value = random.choice(dice_values) print(f'you picked {value}') ```
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
The problem is that you don't change `enemy_hp` and `player_hp`! For example, when the player chooses to attack, `enemy_hp` should decrease via `enemy_hp = enemy_hp-player_attack`. The same is necessary for `player_hp` too! So I think the code will be: ``` import random print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp >0 and enemy_hp >0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp =enemy_hp-player_attack print("You dealt",str(player_attack), f" damage and enemy hp is {enemy_hp}") elif Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value= random.choice(dice_list) if value =="1": player_hp= player_hp-1 elif value=="2": player_hp=player_hp-2 elif value=="3": player_hp=player_hp-3 elif value=="4": player_hp=player_hp-4 elif value=="5": player_hp=player_hp-5 elif Choice=="C": print(f"Enemy hp is {enemy_hp} Enemy attack is {enemy_attack}") if enemy_hp<=0: print("you won") elif player_hp <=0: print("you lost") ``` **Important notes:** 1. `player_hp` and `enemy_hp` should be checked continuously! So the win/lose checks should be written inside the `while` loop, not outside it! 2. Another important point: from your code it can be concluded that the enemy's damage is randomly chosen from the numbers 1 to 5, because you decreased `player_hp` by these numbers! So `enemy_attack` is useless in this code! 3. The conditions for the `while` loop should be changed to `player_hp >0 and enemy_hp >0`! `player_hp >0` means the player is alive; the same holds for `enemy_hp >0`! 4. As mwo said: > > You should not rely on sets to provide random arrangements of values. > > > So we can use the `random` library to randomly choose a value from `dice_list` with `random.choice(dice_list)`. 5. It's not a good idea to define `dice_roll` inside the `while` loop: 
in this way you are defining `dice_roll` again and again, which isn't necessary at all! So it's recommended to define it before the `while` loop, e.g. `dice_list=["1","2","3","4","5"]`. 6. Also, you don't need many conditions for subtracting the damage from `player_hp`! `player_hp = player_hp-int(value)` would be enough (note the `int()`, since the list holds strings), and you don't need the `if/elif` statements there.
``` dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] ... player_hp-4 elif value=="5": player_hp-5 ``` This code you created is an attempt at generating a random number, but there are better ways of doing it. --- Using the module `random`, you can access the function `random.randint`, which generates a random `int` between two given numbers: ``` >>> random.randint(0, 10) # It includes both the extremes 4 ``` So a better idea would be: ``` player_hp-=random.randint(1, 5) ``` --- You should also be aware that: ``` player_hp-5 ``` doesn't reduce `player_hp` by `5`, because it's not an assignment; you can do it in one of these ways: ``` player_hp-=5 # ...or... player_hp = player_hp-5 ```
71,344,145
So I'm making a very simple battle mechanic in python where the player will be able to attack, defend or inspect the enemy : ``` print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp !=0 and enemy_hp !=0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp-player_attack print("You dealt",str(player_attack), " damage" if Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value=dice_list[0] if value =="1": player_hp-1 elif value=="2": player_hp-2 elif value=="3": player_hp-3 elif value=="4": player_hp-4 elif value=="5": player_hp-5 if Choice=="C": print("""Enemy hp is,enemy_hp Enemy attack is ,enemy_attack""") else: if player_hp ==0: print("you lost") if enemy_hp==0: print("you won") ``` Problem that I'm having is that the **value** number doesn't reset after the loop finishes , if let's say the value first was 2 damage it will remain 2 everytime you press B until your hp finishes , so how can I make it that everytime you press the **defend** option the value is different?
2022/03/03
[ "https://Stackoverflow.com/questions/71344145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18367808/" ]
The problem is that you don't change `enemy_hp` and `player_hp`! For example, when the player chooses to attack, `enemy_hp` should decrease via `enemy_hp = enemy_hp-player_attack`. The same is necessary for `player_hp` too! So I think the code will be: ``` import random print("------Welcome To The Game------") player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp >0 and enemy_hp >0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp =enemy_hp-player_attack print("You dealt",str(player_attack), f" damage and enemy hp is {enemy_hp}") elif Choice=="B": dice_roll=set(("1","2","3","4","5")) dice_list=list(dice_roll) value= random.choice(dice_list) if value =="1": player_hp= player_hp-1 elif value=="2": player_hp=player_hp-2 elif value=="3": player_hp=player_hp-3 elif value=="4": player_hp=player_hp-4 elif value=="5": player_hp=player_hp-5 elif Choice=="C": print(f"Enemy hp is {enemy_hp} Enemy attack is {enemy_attack}") if enemy_hp<=0: print("you won") elif player_hp <=0: print("you lost") ``` **Important notes:** 1. `player_hp` and `enemy_hp` should be checked continuously! So the win/lose checks should be written inside the `while` loop, not outside it! 2. Another important point: from your code it can be concluded that the enemy's damage is randomly chosen from the numbers 1 to 5, because you decreased `player_hp` by these numbers! So `enemy_attack` is useless in this code! 3. The conditions for the `while` loop should be changed to `player_hp >0 and enemy_hp >0`! `player_hp >0` means the player is alive; the same holds for `enemy_hp >0`! 4. As mwo said: > > You should not rely on sets to provide random arrangements of values. > > > So we can use the `random` library to randomly choose a value from `dice_list` with `random.choice(dice_list)`. 5. It's not a good idea to define `dice_roll` inside the `while` loop: 
in this way you are defining `dice_roll` again and again, which isn't necessary at all! So it's recommended to define it before the `while` loop, e.g. `dice_list=["1","2","3","4","5"]`. 6. Also, you don't need many conditions for subtracting the damage from `player_hp`! `player_hp = player_hp-int(value)` would be enough (note the `int()`, since the list holds strings), and you don't need the `if/elif` statements there.
As the others have already recommended, I would also use the random function for a randomly chosen int between 1 and 5. If I understood your program correctly, in choice B you want the chosen value to be the player's defence and the attack from the opponent to be subtracted from it. This would be my suggested solution: ``` import random print("------Welcome To The Game------") min = 1 max = 5 player_hp=5 player_attack=3 enemy_hp=10 enemy_attack=2 while player_hp>0 and enemy_hp>0: Choice=input("""What will you do: A.Attack B.Defend C.Inspect (A/B/C): """) if Choice=="A": enemy_hp = enemy_hp-player_attack print("You dealt",str(player_attack), "damage") elif Choice=="B": defence = random.randint(min,max) print("value of defense is",defence) damage = defence - enemy_attack if damage < 0: print("value of damage is",damage) player_hp = player_hp - (-damage) else: print("there was no damage") elif Choice=="C": print("player hp is",player_hp) print("enemy hp is",enemy_hp) else: if player_hp<=0: print("you lost") if enemy_hp<=0: print("you won") ```
48,515,581
I have seen two ways of visualizing transposed convolutions from credible sources, and as far as I can see they conflict. My question boils down to, for each application of the kernel, do we go from many (e.g. `3x3`) elements with input padding to one, or do we go from one element to many (e.g. `3x3`)? *Related question:* Which version does [tf.nn.conv2d\_transpose](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose) implement? **The sources of my confusion are:** --- [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285) has probably the most famous visualization out there, but it isn't peer reviewed (Arxiv). ![](https://github.com/vdumoulin/conv_arithmetic/raw/master/gif/padding_strides_transposed.gif) --- The second is from [Deconvolution and Checkerboard Artifacts](https://distill.pub/2016/deconv-checkerboard/), which technically isn't peer reviewed either (Distil), but it is from a much more reputable source. (The term deconvolution is used in the article, but it is stated that this is the same as transposed conv.) [![enter image description here](https://i.stack.imgur.com/KvNi1.png)](https://i.stack.imgur.com/KvNi1.png) --- Due to the nature of this question it is hard to look for results online, e.g. this [SO](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers) posts takes the first position, but I am not sure to what extent I can trust it.
2018/01/30
[ "https://Stackoverflow.com/questions/48515581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3747801/" ]
Fractionally strided convolutions, deconvolutions, and transposed convolutions all mean the same thing. Both papers are correct and you don't need to be doubtful, as both of them are [cited](https://scholar.google.com/) a lot. But the Distill image is from a different perspective, as it's trying to show the artifacts problem. The first visualisation is a transposed convolution with stride 2 and padding 1. If it were stride 1, there wouldn't be any padding in between inputs. The padding on the borders depends on the dimension of the output. With deconvolution, we generally go from a smaller dimension to a higher dimension, and the input data is generally padded to achieve the desired output dimensions. I believe the confusion arises from the padding patterns. Take a look at this formula ``` output = (input - 1)*stride + kernel_size - 2*padding ``` It's a rearrangement of the general convolution output formula; output here refers to the output of the deconvolution operation. To best understand deconvolution, I suggest thinking in terms of the equation, i.e., flipping what a convolution does: it's asking, how do I reverse what a convolution operation does? Hope that helps.
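To make that rearrangement concrete, here is a small framework-agnostic sketch (plain Python, no TensorFlow): the usual convolution output-size formula and the transposed-convolution formula derived from it, checked against the stride-2, padding-1, 3x3-kernel setting shown in the animations.

```python
def conv_out(size, kernel, stride, padding):
    # Standard convolution output size (no dilation).
    return (size + 2 * padding - kernel) // stride + 1

def conv_transpose_out(size, kernel, stride, padding):
    # The rearranged formula: output = (input - 1)*stride + kernel - 2*padding
    return (size - 1) * stride + kernel - 2 * padding

# Stride 2, padding 1, 3x3 kernel: a 5x5 map convolves down to 3x3,
# and the transposed operation maps 3x3 back up to 5x5.
print(conv_out(5, kernel=3, stride=2, padding=1))            # 3
print(conv_transpose_out(3, kernel=3, stride=2, padding=1))  # 5
```

The function names here are made up for illustration; the arithmetic is just the two formulas from the answer above.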
Good explanation from Justin Johnson (part of the Stanford cs231n mooc): <https://youtu.be/ByjaPdWXKJ4?t=1221> (starts at 20:21) He reviews strided conv and then he explains transposed convolutions. ![](https://i.stack.imgur.com/h0xMp.png)
48,515,581
I have seen two ways of visualizing transposed convolutions from credible sources, and as far as I can see they conflict. My question boils down to, for each application of the kernel, do we go from many (e.g. `3x3`) elements with input padding to one, or do we go from one element to many (e.g. `3x3`)? *Related question:* Which version does [tf.nn.conv2d\_transpose](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose) implement? **The sources of my confusion are:** --- [A guide to convolution arithmetic for deep learning](https://arxiv.org/abs/1603.07285) has probably the most famous visualization out there, but it isn't peer reviewed (Arxiv). ![](https://github.com/vdumoulin/conv_arithmetic/raw/master/gif/padding_strides_transposed.gif) --- The second is from [Deconvolution and Checkerboard Artifacts](https://distill.pub/2016/deconv-checkerboard/), which technically isn't peer reviewed either (Distil), but it is from a much more reputable source. (The term deconvolution is used in the article, but it is stated that this is the same as transposed conv.) [![enter image description here](https://i.stack.imgur.com/KvNi1.png)](https://i.stack.imgur.com/KvNi1.png) --- Due to the nature of this question it is hard to look for results online, e.g. this [SO](https://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers) posts takes the first position, but I am not sure to what extent I can trust it.
2018/01/30
[ "https://Stackoverflow.com/questions/48515581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3747801/" ]
I want to stress a little more what Littleone also mentioned in his last paragraph: **A transposed convolution will reverse the spatial transformation of a regular convolution with the same parameters.** If you perform a regular convolution followed by a transposed convolution and both have the same settings (kernel size, padding, stride), then the input and output will have the same shape. This makes it super easy to build encoder-decoder networks with them. I wrote an article about different types of convolutions in Deep Learning [here](https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d), where this is also covered. PS: Please don't call it a deconvolution
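A quick way to convince yourself of the shape claim without building a network: apply the two output-size formulas with identical settings and check that a stack of transposed convolutions walks the encoder's shapes back up exactly. (Plain-Python sketch; the helper names `down`/`up` are made up for illustration.)

```python
def down(size, k, s, p):
    # Spatial size after a regular convolution.
    return (size + 2 * p - k) // s + 1

def up(size, k, s, p):
    # Spatial size after a transposed convolution with the same settings.
    return (size - 1) * s + k - 2 * p

# Encoder: three stride-2 convs (kernel 4, padding 1 -- a common choice,
# since even input sizes then divide cleanly by the stride).
sizes = [64]
for _ in range(3):
    sizes.append(down(sizes[-1], k=4, s=2, p=1))
print(sizes)  # [64, 32, 16, 8]

# Decoder: transposed convs with the same settings restore every shape.
restored = sizes[-1]
for _ in range(3):
    restored = up(restored, k=4, s=2, p=1)
assert restored == sizes[0]
```

One caveat: when `(size + 2p - k)` is not divisible by the stride, the convolution floors and the round trip is off by up to `stride - 1`; frameworks expose a parameter for exactly this (e.g. `output_padding` in PyTorch's `ConvTranspose2d`).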
Good explanation from Justin Johnson (part of the Stanford cs231n mooc): <https://youtu.be/ByjaPdWXKJ4?t=1221> (starts at 20:21) He reviews strided conv and then he explains transposed convolutions. ![](https://i.stack.imgur.com/h0xMp.png)
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Here is a way to do it in R: ``` # Variables: foo <- c("ARGHISLEULEULYS","METHISARGARGMET") # Code maps: code3 <- c("Ala", "Arg", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly", "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val") code1 <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") # For each code replace 3letter code by 1letter code: for (i in 1:length(code3)) { foo <- gsub(code3[i],code1[i],foo,ignore.case=TRUE) } ``` Results in : ``` > foo [1] "RHLLK" "MHRRM" ``` Note that I changed the variable name as variable names are not allowed to start with a number in R.
Python 3 solutions. In my work, the annoying part is that the amino acid codes can refer to modified ones which often appear in PDB/mmCIF files, like > > 'Tih'-->'A'. > > > So the mapping can be more than 22 pairs. The 3rd-party tools in Python like > > Bio.SeqUtils.IUPACData.protein\_letters\_3to1 > > > cannot handle it. My easiest solution is to use <http://www.ebi.ac.uk/pdbe-srv/pdbechem> to find the mapping and add the unusual mappings to the dict in my own function whenever I encounter them. ``` def three_to_one(three_letter_code): mapping = {'Aba':'A','Ace':'X','Acr':'X','Ala':'A','Aly':'K','Arg':'R','Asn':'N','Asp':'D','Cas':'C', 'Ccs':'C','Cme':'C','Csd':'C','Cso':'C','Csx':'C','Cys':'C','Dal':'A','Dbb':'T','Dbu':'T', 'Dha':'S','Gln':'Q','Glu':'E','Gly':'G','Glz':'G','His':'H','Hse':'S','Ile':'I','Leu':'L', 'Llp':'K','Lys':'K','Men':'N','Met':'M','Mly':'K','Mse':'M','Nh2':'X','Nle':'L','Ocs':'C', 'Pca':'E','Phe':'F','Pro':'P','Ptr':'Y','Sep':'S','Ser':'S','Thr':'T','Tih':'A','Tpo':'T', 'Trp':'W','Tyr':'Y','Unk':'X','Val':'V','Ycm':'C','Sec':'U','Pyl':'O'} # you can add more return mapping[three_letter_code[0].upper() + three_letter_code[1:].lower()] ``` The other solution is to retrieve the mapping online (but the URL and the HTML pattern may change over time): ``` import re import urllib.request def three_to_one_online(three_letter_code): url = "http://www.ebi.ac.uk/pdbe-srv/pdbechem/chemicalCompound/show/" + three_letter_code with urllib.request.urlopen(url) as response: single_letter_code = re.search('\s*<td\s*>\s*<h3>One-letter code.*</h3>\s*</td>\s*<td>\s*([A-Z])\s*</td>', response.read().decode('utf-8')).group(1) return single_letter_code ``` Here I use `re` directly instead of an HTML parser for simplicity. Hope these help.
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Here is a way to do it in R: ``` # Variables: foo <- c("ARGHISLEULEULYS","METHISARGARGMET") # Code maps: code3 <- c("Ala", "Arg", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly", "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val") code1 <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") # For each code replace 3letter code by 1letter code: for (i in 1:length(code3)) { foo <- gsub(code3[i],code1[i],foo,ignore.case=TRUE) } ``` Results in : ``` > foo [1] "RHLLK" "MHRRM" ``` Note that I changed the variable name as variable names are not allowed to start with a number in R.
Using R: ``` convert <- function(l) { map <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") names(map) <- c("ALA", "ARG", "ASN", "ASP", "CYS", "GLU", "GLN", "GLY", "HIS", "ILE", "LEU", "LYS", "MET", "PHE", "PRO", "SER", "THR", "TRP", "TYR", "VAL") sapply(strsplit(l, "(?<=[A-Z]{3})", perl = TRUE), function(x) paste(map[x], collapse = "")) } convert(c("ARGHISLEULEULYS", "METHISARGARGMET")) # [1] "RHLLK" "MHRRM" ```
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Biopython has a nice solution ``` >>> from Bio.PDB.Polypeptide import * >>> three_to_one('ALA') 'A' ``` For your example, I'll solve it with this one-liner ``` >>> from Bio.PDB.Polypeptide import * >>> str3aa = 'ARGHISLEULEULYS' >>> "".join([three_to_one(aa3) for aa3 in [ "".join(g) for g in zip(*(iter(str3aa),) * 3)]]) 'RHLLK' ``` They may criticize me for this type of one-liner :), but deep in my heart I am still in love with PERL.
Python 3 solutions. In my work, the annoying part is that the amino acid codes can refer to modified ones which often appear in PDB/mmCIF files, like > > 'Tih'-->'A'. > > > So the mapping can be more than 22 pairs. The 3rd-party tools in Python like > > Bio.SeqUtils.IUPACData.protein\_letters\_3to1 > > > cannot handle it. My easiest solution is to use <http://www.ebi.ac.uk/pdbe-srv/pdbechem> to find the mapping and add the unusual mappings to the dict in my own function whenever I encounter them. ``` def three_to_one(three_letter_code): mapping = {'Aba':'A','Ace':'X','Acr':'X','Ala':'A','Aly':'K','Arg':'R','Asn':'N','Asp':'D','Cas':'C', 'Ccs':'C','Cme':'C','Csd':'C','Cso':'C','Csx':'C','Cys':'C','Dal':'A','Dbb':'T','Dbu':'T', 'Dha':'S','Gln':'Q','Glu':'E','Gly':'G','Glz':'G','His':'H','Hse':'S','Ile':'I','Leu':'L', 'Llp':'K','Lys':'K','Men':'N','Met':'M','Mly':'K','Mse':'M','Nh2':'X','Nle':'L','Ocs':'C', 'Pca':'E','Phe':'F','Pro':'P','Ptr':'Y','Sep':'S','Ser':'S','Thr':'T','Tih':'A','Tpo':'T', 'Trp':'W','Tyr':'Y','Unk':'X','Val':'V','Ycm':'C','Sec':'U','Pyl':'O'} # you can add more return mapping[three_letter_code[0].upper() + three_letter_code[1:].lower()] ``` The other solution is to retrieve the mapping online (but the URL and the HTML pattern may change over time): ``` import re import urllib.request def three_to_one_online(three_letter_code): url = "http://www.ebi.ac.uk/pdbe-srv/pdbechem/chemicalCompound/show/" + three_letter_code with urllib.request.urlopen(url) as response: single_letter_code = re.search('\s*<td\s*>\s*<h3>One-letter code.*</h3>\s*</td>\s*<td>\s*([A-Z])\s*</td>', response.read().decode('utf-8')).group(1) return single_letter_code ``` Here I use `re` directly instead of an HTML parser for simplicity. Hope these help.
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Here is a way to do it in R: ``` # Variables: foo <- c("ARGHISLEULEULYS","METHISARGARGMET") # Code maps: code3 <- c("Ala", "Arg", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly", "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val") code1 <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") # For each code replace 3letter code by 1letter code: for (i in 1:length(code3)) { foo <- gsub(code3[i],code1[i],foo,ignore.case=TRUE) } ``` Results in : ``` > foo [1] "RHLLK" "MHRRM" ``` Note that I changed the variable name as variable names are not allowed to start with a number in R.
For those who land here in 2017 and beyond: here's a single-line Linux bash command to convert protein amino acid three-letter codes to single-letter codes in a text file. I know this is not very elegant, but I hope this helps someone searching for the same who wants to use a single-line command. ``` sed 's/ALA/A/g;s/CYS/C/g;s/ASP/D/g;s/GLU/E/g;s/PHE/F/g;s/GLY/G/g;s/HIS/H/g;s/HID/H/g;s/HIE/H/g;s/ILE/I/g;s/LYS/K/g;s/LEU/L/g;s/MET/M/g;s/ASN/N/g;s/PRO/P/g;s/GLN/Q/g;s/ARG/R/g;s/SER/S/g;s/THR/T/g;s/VAL/V/g;s/TRP/W/g;s/TYR/Y/g;s/MSE/X/g' < input_file_three_letter_code.txt > output_file_single_letter_code.txt ``` Solution for the original question above, as a single command line (note the input redirection belongs on the *first* `sed` of the pipe): ``` sed 's/.\{3\}/& /g' < input_file_three_letter_code.txt | sed 's/ALA/A/g;s/CYS/C/g;s/ASP/D/g;s/GLU/E/g;s/PHE/F/g;s/GLY/G/g;s/HIS/H/g;s/HID/H/g;s/HIE/H/g;s/ILE/I/g;s/LYS/K/g;s/LEU/L/g;s/MET/M/g;s/ASN/N/g;s/PRO/P/g;s/GLN/Q/g;s/ARG/R/g;s/SER/S/g;s/THR/T/g;s/VAL/V/g;s/TRP/W/g;s/TYR/Y/g;s/MSE/X/g' | sed 's/ //g' > output_file_single_letter_code.txt ``` Explanation: [1] `sed 's/.\{3\}/& /g'` will split the sequence: it adds a space after every 3rd letter. [2] The second `sed` command in the pipe takes the output of the first and converts it to single-letter code. Add any non-standard residue as `s/XYZ/X/g;` to this command. [3] The third `sed` command, `sed 's/ //g'`, removes the whitespace.
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
BioPython already has built-in dictionaries to help with such translations. The following commands will show you the whole list of available dictionaries: ``` import Bio help(Bio.SeqUtils.IUPACData) ``` The predefined dictionary you are looking for: ``` Bio.SeqUtils.IUPACData.protein_letters_3to1['Ala'] ```
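If you'd rather avoid the Biopython dependency, the same idea is just a plain dict plus a triplet splitter. The sketch below hand-inlines the standard 20-residue mapping (the helper name `seq3to1` is made up for illustration; double-check the table before relying on it):

```python
protein_letters_3to1 = {
    'Ala': 'A', 'Arg': 'R', 'Asn': 'N', 'Asp': 'D', 'Cys': 'C',
    'Gln': 'Q', 'Glu': 'E', 'Gly': 'G', 'His': 'H', 'Ile': 'I',
    'Leu': 'L', 'Lys': 'K', 'Met': 'M', 'Phe': 'F', 'Pro': 'P',
    'Ser': 'S', 'Thr': 'T', 'Trp': 'W', 'Tyr': 'Y', 'Val': 'V',
}

def seq3to1(seq):
    # Normalize e.g. "ARG" -> "Arg" so the keys match, then look up each triplet.
    return "".join(protein_letters_3to1[seq[i:i + 3].capitalize()]
                   for i in range(0, len(seq), 3))

print(seq3to1("ARGHISLEULEULYS"))  # RHLLK
print(seq3to1("METHISARGARGMET"))  # MHRRM
```

`"ARG".capitalize()` gives `"Arg"`, so input case doesn't matter as long as the sequence is made of well-formed triplets; a malformed triplet raises `KeyError`, which is usually what you want.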
``` my %aa_hash=( Ala=>'A', Arg=>'R', Asn=>'N', Asp=>'D', Cys=>'C', Glu=>'E', Gln=>'Q', Gly=>'G', His=>'H', Ile=>'I', Leu=>'L', Lys=>'K', Met=>'M', Phe=>'F', Pro=>'P', Ser=>'S', Thr=>'T', Trp=>'W', Tyr=>'Y', Val=>'V', Sec=>'U', #http://www.uniprot.org/manual/non_std;Selenocysteine (Sec) and pyrrolysine (Pyl) Pyl=>'O', ); while(<>){ chomp; my $aa=$_; warn "ERROR!! $aa invalid or not found in hash\n" if !$aa_hash{$aa}; print "$aa\t$aa_hash{$aa}\n"; } ``` Use this perl script to convert triplet a.a codes to single letter code.
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
``` >>> src = "ARGHISLEULEULYS" >>> trans = {'ARG':'R', 'HIS':'H', 'LEU':'L', 'LYS':'K'} >>> "".join(trans[src[x:x+3]] for x in range(0, len(src), 3)) 'RHLLK' ``` You just need to add the rest of the entries to the `trans` dict. **Edit:** To make the rest of `trans`, you can do this. File `table`: ``` Ala A Arg R Asn N Asp D Cys C Glu E Gln Q Gly G His H Ile I Leu L Lys K Met M Phe F Pro P Ser S Thr T Trp W Tyr Y Val V ``` Read it: ``` trans = dict((l.upper(), s) for l, s in [row.strip().split() for row in open("table").readlines()]) ```
Another way to do it is with the [seqinr](https://cran.r-project.org/web/packages/seqinr/index.html) and [iPAC](http://www.bioconductor.org/packages/release/bioc/html/iPAC.html) packages in R. ``` # install.packages("seqinr") # source("https://bioconductor.org/biocLite.R") # biocLite("iPAC") library(seqinr) library(iPAC) #read in file fasta = read.fasta(file = "test_fasta.fasta", seqtype = "AA", as.string = T, set.attributes = F) #split each string into chunks of three letters n = 3 fasta1 = lapply(fasta, function(x) substring(x, seq(1, nchar(x), n), seq(n, nchar(x), n))) #convert the three letter code for each element in the list fasta2 = lapply(fasta1, function(x) paste(sapply(x, get.SingleLetterCode), collapse = "")) # > fasta2 # $`2ppo` # [1] "RHLLK" # # $`3oot` # [1] "MHRRM" ```
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
BioPython already has built-in dictionaries to help with such translations. The following commands will show you a whole list of available mappings (note that `Bio.SeqUtils` must be imported explicitly; `import Bio` alone does not expose it): ``` from Bio.SeqUtils import IUPACData help(IUPACData) ``` The predefined dictionary you are looking for: ``` IUPACData.protein_letters_3to1['Ala'] ```
Using R: ``` convert <- function(l) { map <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") names(map) <- c("ALA", "ARG", "ASN", "ASP", "CYS", "GLU", "GLN", "GLY", "HIS", "ILE", "LEU", "LYS", "MET", "PHE", "PRO", "SER", "THR", "TRP", "TYR", "VAL") sapply(strsplit(l, "(?<=[A-Z]{3})", perl = TRUE), function(x) paste(map[x], collapse = "")) } convert(c("ARGHISLEULEULYS", "METHISARGARGMET")) # [1] "RHLLK" "MHRRM" ```
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
Here is a way to do it in R: ``` # Variables: foo <- c("ARGHISLEULEULYS","METHISARGARGMET") # Code maps: code3 <- c("Ala", "Arg", "Asn", "Asp", "Cys", "Glu", "Gln", "Gly", "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser", "Thr", "Trp", "Tyr", "Val") code1 <- c("A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V") # For each code replace 3letter code by 1letter code: for (i in 1:length(code3)) { foo <- gsub(code3[i],code1[i],foo,ignore.case=TRUE) } ``` Results in : ``` > foo [1] "RHLLK" "MHRRM" ``` Note that I changed the variable name as variable names are not allowed to start with a number in R.
Another way to do it is with the [seqinr](https://cran.r-project.org/web/packages/seqinr/index.html) and [iPAC](http://www.bioconductor.org/packages/release/bioc/html/iPAC.html) packages in R. ``` # install.packages("seqinr") # source("https://bioconductor.org/biocLite.R") # biocLite("iPAC") library(seqinr) library(iPAC) #read in file fasta = read.fasta(file = "test_fasta.fasta", seqtype = "AA", as.string = T, set.attributes = F) #split each string into chunks of three letters n = 3 fasta1 = lapply(fasta, function(x) substring(x, seq(1, nchar(x), n), seq(n, nchar(x), n))) #convert the three letter code for each element in the list fasta2 = lapply(fasta1, function(x) paste(sapply(x, get.SingleLetterCode), collapse = "")) # > fasta2 # $`2ppo` # [1] "RHLLK" # # $`3oot` # [1] "MHRRM" ```
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
``` my %aa_hash=( Ala=>'A', Arg=>'R', Asn=>'N', Asp=>'D', Cys=>'C', Glu=>'E', Gln=>'Q', Gly=>'G', His=>'H', Ile=>'I', Leu=>'L', Lys=>'K', Met=>'M', Phe=>'F', Pro=>'P', Ser=>'S', Thr=>'T', Trp=>'W', Tyr=>'Y', Val=>'V', Sec=>'U', #http://www.uniprot.org/manual/non_std;Selenocysteine (Sec) and pyrrolysine (Pyl) Pyl=>'O', ); while(<>){ chomp; my $aa=$_; warn "ERROR!! $aa invalid or not found in hash\n" if !$aa_hash{$aa}; print "$aa\t$aa_hash{$aa}\n"; } ``` Use this perl script to convert triplet a.a codes to single letter code.
Python 3 solutions. In my work, the annoying part is that the amino acid codes can refer to modified residues which often appear in PDB/mmCIF files, like > > 'Tih' --> 'A' > > > so the mapping can be more than 22 pairs. Third-party tools in Python like > > Bio.SeqUtils.IUPACData.protein_letters_3to1 > > > cannot handle these. My easiest solution is to use <http://www.ebi.ac.uk/pdbe-srv/pdbechem> to find the mapping and add the unusual pairs to the dict in my own function whenever I encounter them. ``` def three_to_one(three_letter_code): mapping = {'Aba':'A','Ace':'X','Acr':'X','Ala':'A','Aly':'K','Arg':'R','Asn':'N','Asp':'D','Cas':'C', 'Ccs':'C','Cme':'C','Csd':'C','Cso':'C','Csx':'C','Cys':'C','Dal':'A','Dbb':'T','Dbu':'T', 'Dha':'S','Gln':'Q','Glu':'E','Gly':'G','Glz':'G','His':'H','Hse':'S','Ile':'I','Leu':'L', 'Llp':'K','Lys':'K','Men':'N','Met':'M','Mly':'K','Mse':'M','Nh2':'X','Nle':'L','Ocs':'C', 'Pca':'E','Phe':'F','Pro':'P','Ptr':'Y','Sep':'S','Ser':'S','Thr':'T','Tih':'A','Tpo':'T', 'Trp':'W','Tyr':'Y','Unk':'X','Val':'V','Ycm':'C','Sec':'U','Pyl':'O'} # you can add more return mapping[three_letter_code[0].upper() + three_letter_code[1:].lower()] ``` The other solution is to retrieve the mapping online (but the URL and the HTML pattern may change over time): ``` import re import urllib.request def three_to_one_online(three_letter_code): url = "http://www.ebi.ac.uk/pdbe-srv/pdbechem/chemicalCompound/show/" + three_letter_code with urllib.request.urlopen(url) as response: single_letter_code = re.search('\s*<td\s*>\s*<h3>One-letter code.*</h3>\s*</td>\s*<td>\s*([A-Z])\s*</td>', response.read().decode('utf-8')).group(1) return single_letter_code ``` Here I use `re` directly instead of an HTML parser for simplicity. Hope these can help.
12,760,271
I have a fasta file as shown below. I would like to convert the [three letter codes](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) to one letter code. How can I do this with python or R? ``` >2ppo ARGHISLEULEULYS >3oot METHISARGARGMET ``` desired output ``` >2ppo RHLLK >3oot MHRRM ``` your suggestions would be appreciated!!
2012/10/06
[ "https://Stackoverflow.com/questions/12760271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1725152/" ]
For those who land here in 2017 and beyond: here's a single-line Linux bash command to convert protein amino acid three-letter codes to single-letter codes in a text file. I know this is not very elegant, but I hope this helps someone searching for the same who wants a one-liner. ``` sed 's/ALA/A/g;s/CYS/C/g;s/ASP/D/g;s/GLU/E/g;s/PHE/F/g;s/GLY/G/g;s/HIS/H/g;s/HID/H/g;s/HIE/H/g;s/ILE/I/g;s/LYS/K/g;s/LEU/L/g;s/MET/M/g;s/ASN/N/g;s/PRO/P/g;s/GLN/Q/g;s/ARG/R/g;s/SER/S/g;s/THR/T/g;s/VAL/V/g;s/TRP/W/g;s/TYR/Y/g;s/MSE/X/g' < input_file_three_letter_code.txt > output_file_single_letter_code.txt ``` Solution for the original question above, as a single command line (the input redirection must feed the first `sed` in the pipe, not the last): ``` sed 's/.\{3\}/& /g' < input_file_three_letter_code.txt | sed 's/ALA/A/g;s/CYS/C/g;s/ASP/D/g;s/GLU/E/g;s/PHE/F/g;s/GLY/G/g;s/HIS/H/g;s/HID/H/g;s/HIE/H/g;s/ILE/I/g;s/LYS/K/g;s/LEU/L/g;s/MET/M/g;s/ASN/N/g;s/PRO/P/g;s/GLN/Q/g;s/ARG/R/g;s/SER/S/g;s/THR/T/g;s/VAL/V/g;s/TRP/W/g;s/TYR/Y/g;s/MSE/X/g' | sed 's/ //g' > output_file_single_letter_code.txt ``` Explanation: [1] `sed 's/.\{3\}/& /g'` will split the sequence by adding a space after every third letter. [2] The second `sed` command in the pipe takes that output and converts it to single-letter code. Add any non-standard residue as `s/XYZ/X/g;` to this command. [3] The third `sed` command, `sed 's/ //g'`, removes the whitespace again.
``` my %aa_hash=( Ala=>'A', Arg=>'R', Asn=>'N', Asp=>'D', Cys=>'C', Glu=>'E', Gln=>'Q', Gly=>'G', His=>'H', Ile=>'I', Leu=>'L', Lys=>'K', Met=>'M', Phe=>'F', Pro=>'P', Ser=>'S', Thr=>'T', Trp=>'W', Tyr=>'Y', Val=>'V', Sec=>'U', #http://www.uniprot.org/manual/non_std;Selenocysteine (Sec) and pyrrolysine (Pyl) Pyl=>'O', ); while(<>){ chomp; my $aa=$_; warn "ERROR!! $aa invalid or not found in hash\n" if !$aa_hash{$aa}; print "$aa\t$aa_hash{$aa}\n"; } ``` Use this perl script to convert triplet a.a codes to single letter code.
27,451,561
I'm new to stack so this might be a very silly mistake. I'm trying to setup a one node swift configuration for a simple proof of concept. I did follow the [instructions](http://docs.openstack.org/juno/install-guide/install/apt/content/ch_swift.html). However, something is missing. I keep getting this error: ``` root@lab-srv2544:/etc/swift# swift stat Traceback (most recent call last): File "/usr/bin/swift", line 10, in <module> sys.exit(main()) File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 1287, in main globals()['st_%s' % args[0]](parser, argv[1:], output) File "/usr/lib/python2.7/dist-packages/swiftclient/shell.py", line 492, in st_stat stat_result = swift.stat() File "/usr/lib/python2.7/dist-packages/swiftclient/service.py", line 427, in stat raise SwiftError('Account not found', exc=err) swiftclient.service.SwiftError: 'Account not found' ``` Also, the syslog always complains about proxy-server: ``` Dec 12 12:16:37 lab-srv2544 proxy-server: Account HEAD returning 503 for [] (txn: tx9536949d19d14f1ab5d8d-00548b4d25) (client_ip: 127.0.0.1) Dec 12 12:16:37 lab-srv2544 proxy-server: 127.0.0.1 127.0.0.1 12/Dec/2014/20/16/37 HEAD /v1/AUTH_71e79a29599149099aa98d5d276eaa0b HTTP/1.0 503 - python-swiftclient-2.3.0 8d2b0748804f4b34... - - - tx9536949d19d14f1ab5d8d-00548b4d25 - 0.0013 - - 1418415397.334497929 1418415397.335824013 ``` Anyone seen this problem before?
2014/12/12
[ "https://Stackoverflow.com/questions/27451561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/224982/" ]
`JSON.parse(data)` will turn the data you showing into a JavaScript object, and there are a TON of ways to use the data from there. Example: ``` var parsedData = JSON.parse(data), obj = {}; for(var key in parsedData['model']){ obj[key] = parsedData['model'][key]['id']; } ``` Which would give you a resulting object of this: ``` {category:1, food:1} ``` This is based on the limited example of JSON you provided, the way you access it is entirely dependent on its structure. Hopefully this helps get you started, though.
You want to use `JSON.parse()`, but it returns the parsed object, so use it like this: ``` var parsed = JSON.parse(data); ``` then work with `parsed`.
10,775,007
I want to create a python script which could be used to execute Android adb commands. I had a look at <https://github.com/rbrady/python-adb> but can't seem to make it work perfectly. Any suggestions?
2012/05/27
[ "https://Stackoverflow.com/questions/10775007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391277/" ]
This tool should do the work: <https://pypi.python.org/pypi/pyadb/0.1.1> I had to modify a few functions to have it operate on Python 2.7 and use subprocess instead. Here is the modified code in my version: ``` def __build_command__(self, cmd): if self.__devices is not None and len(self.__devices) > 1 and self.__target is None: self.__error = "Must set target device first" return None if type(cmd) is tuple: a = list(cmd) elif type(cmd) is list: a = cmd else: a = [cmd] a.insert(0, self.__adb_path) if self.__target is not None: # splice the flags in as separate arguments (inserting the # list itself would nest it as a single element) a[1:1] = ['-s', self.__target] return a def run_cmd(self, cmd): """ Run a command against adb tool ($ adb <cmd>) """ self.__clean__() if self.__adb_path is None: self.__error = "ADB path not set" return try: args = self.__build_command__(cmd) if args is None: return # print 'args>', args # note: no shell=True -- with an argument list on Linux, # shell=True would drop everything after the first element cmdp = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.__output, self.__error = cmdp.communicate() retcode = cmdp.wait() # print 'stdout>', self.__output # print 'stderr>', self.__error if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: return except OSError, e: self.__error = str(e) return ```
Use the `monkeyrunner` tool for that. From its [documentation](https://developer.android.com/studio/test/monkeyrunner/index.html): > > Besides using the monkeyrunner API itself, you can use the standard Python os and subprocess modules to call Android tools such as Android Debug Bridge. > > >
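As a sketch of the `subprocess` route mentioned above — the helper names `build_adb_command` and `run_adb` are illustrative, not part of any library; only the `adb` CLI conventions (`adb -s <serial> <command>`) are assumed:

```python
import subprocess

def build_adb_command(args, serial=None):
    # Assemble an adb invocation as an argument list; pass serial to
    # target a specific device (equivalent to "adb -s <serial> ...").
    cmd = ['adb']
    if serial is not None:
        cmd += ['-s', serial]
    return cmd + list(args)

def run_adb(args, serial=None):
    # Run adb and return its stdout as text; raises CalledProcessError
    # if adb exits with a non-zero status.
    return subprocess.check_output(build_adb_command(args, serial),
                                   universal_newlines=True)

# Example (requires adb on PATH and a connected device):
# print(run_adb(['devices']))
# print(run_adb(['shell', 'ls'], serial='emulator-5554'))
```

Building the command as a list (rather than a shell string) avoids quoting problems when arguments contain spaces.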
10,775,007
I want to create a python script which could be used to execute Android adb commands. I had a look at <https://github.com/rbrady/python-adb> but can't seem to make it work perfectly. Any suggestions?
2012/05/27
[ "https://Stackoverflow.com/questions/10775007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391277/" ]
Use the `monkeyrunner` tool for that. From its [documentation](https://developer.android.com/studio/test/monkeyrunner/index.html): > > Besides using the monkeyrunner API itself, you can use the standard Python os and subprocess modules to call Android tools such as Android Debug Bridge. > > >
The [python-adb](https://github.com/google/python-adb) project implements the USB communications layer and up, even providing an Android adb-like API for easy conversion.
10,775,007
I want to create a python script which could be used to execute Android adb commands. I had a look at <https://github.com/rbrady/python-adb> but can't seem to make it work perfectly. Any suggestions?
2012/05/27
[ "https://Stackoverflow.com/questions/10775007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1391277/" ]
This tool should do the work: <https://pypi.python.org/pypi/pyadb/0.1.1> I had to modify a few functions to have it operate on Python 2.7 and use subprocess instead. Here is the modified code in my version: ``` def __build_command__(self, cmd): if self.__devices is not None and len(self.__devices) > 1 and self.__target is None: self.__error = "Must set target device first" return None if type(cmd) is tuple: a = list(cmd) elif type(cmd) is list: a = cmd else: a = [cmd] a.insert(0, self.__adb_path) if self.__target is not None: # splice the flags in as separate arguments (inserting the # list itself would nest it as a single element) a[1:1] = ['-s', self.__target] return a def run_cmd(self, cmd): """ Run a command against adb tool ($ adb <cmd>) """ self.__clean__() if self.__adb_path is None: self.__error = "ADB path not set" return try: args = self.__build_command__(cmd) if args is None: return # print 'args>', args # note: no shell=True -- with an argument list on Linux, # shell=True would drop everything after the first element cmdp = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.__output, self.__error = cmdp.communicate() retcode = cmdp.wait() # print 'stdout>', self.__output # print 'stderr>', self.__error if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: return except OSError, e: self.__error = str(e) return ```
The [python-adb](https://github.com/google/python-adb) project implements the USB communications layer and up, even providing an Android adb-like API for easy conversion.
13,561,220
Think about this code: ``` #!/usr/bin/env python from threading import Thread count = 0 def test(): global count for i in range(10): count = count + 1 if __name__ == '__main__': for i in range(1000): Thread(target = test).start() print count ``` I use multiple threads, but the result is always correct. Does that mean I could use python threads without a lock when implementing something like a visitor counter ?
2012/11/26
[ "https://Stackoverflow.com/questions/13561220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/802585/" ]
You do need one. Although multithreading works differently in Python due to the [Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock), operations that are not atomic in Python bytecode still need locking. In your case, you can inspect the bytecode for your function `test` (`dis.dis(test)`): ``` 3 0 SETUP_LOOP 30 (to 33) 3 LOAD_GLOBAL 0 (range) 6 LOAD_CONST 1 (1000) 9 CALL_FUNCTION 1 12 GET_ITER >> 13 FOR_ITER 16 (to 32) 16 STORE_FAST 0 (i) 4 19 LOAD_GLOBAL 1 (count) # start of increment 22 LOAD_CONST 2 (1) 25 BINARY_ADD 26 STORE_GLOBAL 1 (count) # end of increment 29 JUMP_ABSOLUTE 13 >> 32 POP_BLOCK >> 33 LOAD_CONST 0 (None) 36 RETURN_VALUE ``` As you can see, the increment is a 2×load, add, store at the bytecode level, so it won't work without a lock. The increment is actually 4 separate operations, which you must protect to ensure they are not interrupted. In your example the problem remains even if you use `count += 1`, as the bytecode shows: ``` 4 19 LOAD_GLOBAL 1 (count) 22 LOAD_CONST 2 (1) 25 INPLACE_ADD 26 STORE_GLOBAL 1 (count) ```
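A minimal sketch of the corresponding fix — guarding the read-modify-write with a `threading.Lock` (fewer threads than the original, and with `join` so the final value can be read reliably):

```python
import threading

count = 0
lock = threading.Lock()

def test():
    global count
    for _ in range(10):
        with lock:  # the load/add/store sequence now runs without interleaving
            count += 1

threads = [threading.Thread(target=test) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # always 1000 with the lock held around the increment
```

Without the `with lock:` line the same program can (under load) lose increments, which is exactly the race the bytecode above exposes.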
Definitely you should use a Lock. You have got a correct answer in such a simple case. Try to make it `range(100000)` in `__main__`. You will see the problem. On my machine the result is 999960, but it is a random result. Errors will arise depending on system load and so on.
13,561,220
Think about this code: ``` #!/usr/bin/env python from threading import Thread count = 0 def test(): global count for i in range(10): count = count + 1 if __name__ == '__main__': for i in range(1000): Thread(target = test).start() print count ``` I use multiple threads, but the result is always correct. Does that mean I could use python threads without a lock when implementing something like a visitor counter ?
2012/11/26
[ "https://Stackoverflow.com/questions/13561220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/802585/" ]
You do need one. Although multithreading works differently in Python due to the [Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock), operations that are not atomic in Python bytecode still need locking. In your case, you can inspect the bytecode for your function `test` (`dis.dis(test)`): ``` 3 0 SETUP_LOOP 30 (to 33) 3 LOAD_GLOBAL 0 (range) 6 LOAD_CONST 1 (1000) 9 CALL_FUNCTION 1 12 GET_ITER >> 13 FOR_ITER 16 (to 32) 16 STORE_FAST 0 (i) 4 19 LOAD_GLOBAL 1 (count) # start of increment 22 LOAD_CONST 2 (1) 25 BINARY_ADD 26 STORE_GLOBAL 1 (count) # end of increment 29 JUMP_ABSOLUTE 13 >> 32 POP_BLOCK >> 33 LOAD_CONST 0 (None) 36 RETURN_VALUE ``` As you can see, the increment is a 2×load, add, store at the bytecode level, so it won't work without a lock. The increment is actually 4 separate operations, which you must protect to ensure they are not interrupted. In your example the problem remains even if you use `count += 1`, as the bytecode shows: ``` 4 19 LOAD_GLOBAL 1 (count) 22 LOAD_CONST 2 (1) 25 INPLACE_ADD 26 STORE_GLOBAL 1 (count) ```
You wouldn't need a lock if you just did assignments. But as you do `count = count + 1`, something can happen between each of reading out `count`, adding `1` and writing to `count`. Even using `count += 1` wouldn't solve this problem, as this involves an assignment as well. (As the inplace operations involve an assignment under the hood as well, the situation is the same.)
13,561,220
Think about this code: ``` #!/usr/bin/env python from threading import Thread count = 0 def test(): global count for i in range(10): count = count + 1 if __name__ == '__main__': for i in range(1000): Thread(target = test).start() print count ``` I use multiple threads, but the result is always correct. Does that mean I could use python threads without a lock when implementing something like a visitor counter ?
2012/11/26
[ "https://Stackoverflow.com/questions/13561220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/802585/" ]
You wouldn't need a lock if you just did assignments. But as you do `count = count + 1`, something can happen between each of reading out `count`, adding `1` and writing to `count`. Even using `count += 1` wouldn't solve this problem, as this involves an assignment as well. (As the inplace operations involve an assignment under the hood as well, the situation is the same.)
Definitely you should use a Lock. You have got a correct answer in such a simple case. Try to make it `range(100000)` in `__main__`. You will see the problem. On my machine the result is 999960, but it is a random result. Errors will arise depending on system load and so on.
72,852,359
I would like to know if someone can answer why I can't seem to get a Python GStreamer pipeline to work without sudo in Linux. I have a very small GStreamer pipeline and it fails to open if I don't run with sudo in front of python. I have nearly depleted my options; any help would be appreciated. (Using Jetson Orin and Ubuntu 20.05) ``` import sys import cv2 def read_cam(): G_STREAM_TO_SCREEN = "videotestsrc num-buffers=50 ! videoconvert ! appsink" cap = cv2.VideoCapture(G_STREAM_TO_SCREEN, cv2.CAP_GSTREAMER) if cap.isOpened(): cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE) while True: ret_val, img = cap.read() cv2.imshow('demo',img) cv2.waitKey(1) else: print ("camera open failed") cv2.destroyAllWindows() if __name__ == '__main__': read_cam() ```
2022/07/04
[ "https://Stackoverflow.com/questions/72852359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8610564/" ]
`accessToken` is correct; just don't forget to use: ``` use Laravel\Passport\HasApiTokens; ``` instead of: ``` use Laravel\Sanctum\HasApiTokens; ``` This is correct: `$token = $user->createToken('Laravel Password Grant Client')->accessToken;`
You have to log the user in first; the token is created after login. Here `$data['email']` is the request email and `$data['password']` is the request password. ``` Auth::attempt($data); $loginUser = Auth::user(); $token = $loginUser->createToken('Laravel Password Grant Client')->accessToken; $loginUser->accessToken = $token; ```
72,852,359
I would like to know if someone can answer why I can't seem to get a Python GStreamer pipeline to work without sudo in Linux. I have a very small GStreamer pipeline and it fails to open if I don't run with sudo in front of python. I have nearly depleted my options; any help would be appreciated. (Using Jetson Orin and Ubuntu 20.05) ``` import sys import cv2 def read_cam(): G_STREAM_TO_SCREEN = "videotestsrc num-buffers=50 ! videoconvert ! appsink" cap = cv2.VideoCapture(G_STREAM_TO_SCREEN, cv2.CAP_GSTREAMER) if cap.isOpened(): cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE) while True: ret_val, img = cap.read() cv2.imshow('demo',img) cv2.waitKey(1) else: print ("camera open failed") cv2.destroyAllWindows() if __name__ == '__main__': read_cam() ```
2022/07/04
[ "https://Stackoverflow.com/questions/72852359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8610564/" ]
`accessToken` is correct; just don't forget to use: ``` use Laravel\Passport\HasApiTokens; ``` instead of: ``` use Laravel\Sanctum\HasApiTokens; ``` This is correct: `$token = $user->createToken('Laravel Password Grant Client')->accessToken;`
Just use `plainTextToken` instead of `accessToken`. ```php $token = $user->createToken('Laravel Password Grant Client')->plainTextToken; ``` It will give you a string, that you can use as API token.
64,472,414
I'm using `isbnlib.meta`, which pulls metadata (book title, author, year, publisher, etc.) when you enter an ISBN. I have a dataframe with 482,000 ISBNs (column title: isbn13). When I run the function, I'll get an error like `NotValidISBNError` which stops the code in its tracks. What I want to happen is: if there is an error, the code simply skips that row and moves on to the next one. Here is my code now: ```py list_df[0]['publisher_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Publisher', None)) list_df[0]['yearpublished_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Year', None)) #list_df[0]['language_isbnlib'] = list_df[0]['isbn13'].apply(lambda x: isbnlib.meta(x).get('Language', None)) list_df[0] ``` `list_df[0]` is the first 20,000 rows since I'm trying to chunk through the dataframe. I've just manually entered this code 24 times to handle each chunk. I attempted try: and except: but all that ends up happening is the code stops and I don't get any metadata reported.
### Traceback: ```py --------------------------------------------------------------------------- NotValidISBNError Traceback (most recent call last) <ipython-input-39-a06c45d36355> in <module> ----> 1 df['meta'] = df.isbn.apply(isbnlib.meta) e:\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds) 4198 else: 4199 values = self.astype(object)._values -> 4200 mapped = lib.map_infer(values, f, convert=convert_dtype) 4201 4202 if len(mapped) and isinstance(mapped[0], Series): pandas\_libs\lib.pyx in pandas._libs.lib.map_infer() e:\Anaconda3\lib\site-packages\isbnlib\_ext.py in meta(isbn, service) 23 def meta(isbn, service='default'): 24 """Get metadata from Google Books ('goob'), Open Library ('openl'), ...""" ---> 25 return query(isbn, service) if isbn else {} 26 27 e:\Anaconda3\lib\site-packages\isbnlib\dev\_decorators.py in memoized_func(*args, **kwargs) 22 return cch[key] 23 else: ---> 24 value = func(*args, **kwargs) 25 if value: 26 cch[key] = value e:\Anaconda3\lib\site-packages\isbnlib\_metadata.py in query(isbn, service) 18 if not ean: 19 LOGGER.critical('%s is not a valid ISBN', isbn) ---> 20 raise NotValidISBNError(isbn) 21 isbn = ean 22 # only import when needed NotValidISBNError: (abc) is not a valid ISBN ```
2020/10/21
[ "https://Stackoverflow.com/questions/64472414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12020223/" ]
* The current implementation for extracting isbn meta data, is incredibly slow and inefficient. + As stated, there are 482,000 unique isbn values, for which the data is being downloaded multiple times (e.g. once for each column, as the code is currently written) * It will be better to download all the meta data at once, and then extract the data from the `dict`, as a separate operation. * A `try-except` block is used to capture the error from invalid isbn values. + An empty `dict`, `{}` is returned, because `pd.json_normalize` won't work with `NaN` or `None`. + It will be unnecessary to chunk the isbn column. * `pd.json_normalize` is used to expand the `dict` returned from `.meta`. * Use `pandas.DataFrame.rename` to rename columns, and `pandas.DataFrame.drop` to delete columns. * This implementation will be significantly faster than the current implementation, and will make far fewer requests to the API being used to get the meta data. * To extract values from `lists`, such as the `'Authors'` column, use `df_meta = df_meta.explode('Authors')`; if there is more than one author, a new row will be created for each additional author in the list. 
```py import pandas as pd # version 1.1.3 import isbnlib # version 3.10.3 # sample dataframe df = pd.DataFrame({'isbn': ['9780446310789', 'abc', '9781491962299', '9781449355722']}) # function with try-except, for invalid isbn values def get_meta(col: pd.Series) -> dict: try: return isbnlib.meta(col) except isbnlib.NotValidISBNError: return {} # get the meta data for each isbn or an empty dict df['meta'] = df.isbn.apply(get_meta) # df isbn meta 0 9780446310789 {'ISBN-13': '9780446310789', 'Title': 'To Kill A Mockingbird', 'Authors': ['Harper Lee'], 'Publisher': 'Grand Central Publishing', 'Year': '1988', 'Language': 'en'} 1 abc {} 2 9781491962299 {'ISBN-13': '9781491962299', 'Title': 'Hands-On Machine Learning With Scikit-Learn And TensorFlow - Techniques And Tools To Build Learning Machines', 'Authors': ['Aurélien Géron'], 'Publisher': "O'Reilly Media", 'Year': '2017', 'Language': 'en'} 3 9781449355722 {'ISBN-13': '9781449355722', 'Title': 'Learning Python', 'Authors': ['Mark Lutz'], 'Publisher': '', 'Year': '2013', 'Language': 'en'} # extract all the dicts in the meta column df = df.join(pd.json_normalize(df.meta)).drop(columns=['meta']) # extract values from the lists in the Authors column df = df.explode('Authors') # df isbn ISBN-13 Title Authors Publisher Year Language 0 9780446310789 9780446310789 To Kill A Mockingbird Harper Lee Grand Central Publishing 1988 en 1 abc NaN NaN NaN NaN NaN NaN 2 9781491962299 9781491962299 Hands-On Machine Learning With Scikit-Learn And TensorFlow - Techniques And Tools To Build Learning Machines Aurélien Géron OReilly Media 2017 en 3 9781449355722 9781449355722 Learning Python Mark Lutz 2013 en ```
Hard to answer without seeing the code, but [try/except](https://docs.python.org/3/tutorial/errors.html#handling-exceptions) should really be able to handle this. I am not an expert here, but look at this code: ``` l = [0, 1, "a", 2, 3] for item in l: try: print(item + 1) except TypeError as e: print(item, "is not integer") ``` If you try to do addition with a string, python hates that and backs out with a `TypeError`. So you capture the `TypeError` using except and maybe report something about it. When I run this code: ``` 1 2 a is not integer # exception handled! 3 4 ``` You should be able to handle your exception with `except NotValidISBNError`, and then reporting whatever metadata you like. You can get much more sophisticated with exception handling but that is the basic idea.